Economics of Regulation and Antitrust, Fifth Edition (The MIT Press). ISBN 0262038064, 9780262038065

A thoroughly revised and updated edition of the leading textbook on government and business policy, presenting the key principles of regulation and antitrust policy.


English | 1000 pages [878] | 2018


Table of Contents
Title Page
Copyright
Contents
Preface to the Fifth Edition
1. Introduction
The Rationale for Regulation and Antitrust Policies
Antitrust Regulation
The Changing Character of Antitrust Issues
Reasoning behind Antitrust Regulations
Economic Regulation
Development of Economic Regulation
Factors in Setting Rate Regulations
Health, Safety, and Environmental Regulation
Role of the Courts
Criteria for Assessment
Questions and Problems
Recommended Reading
Appendix
2. The Making of a Regulation
State versus Federal Regulation: The Federalism Debate
Advantages of Federalism
Advantages of National Regulations
Product Labeling Example
Overlap of State and Federal Regulations
The Character of the Rulemaking Process
Chronology of New Regulations
Nature of the Regulatory Oversight Process
The Nixon and Ford Administrations
The Carter Administration
The Reagan Administration
The Bush Administration
The Clinton Administration
The George W. Bush Administration
The Obama Administration
The Trump Administration
Regulatory Reform Legislation
Benefit-Cost Analysis
Discounting Deferred Effects
Present Value
The Criteria Applied in the Oversight Process
Regulatory Success Stories
Promotion of Cost-Effective Regulation
Distortion of Benefit and Cost Estimates
Regulatory Role of Price and Quality
Impact of the Oversight Process
Trends in Major Regulations
The Costs and Benefits of Major Regulations
Alternative Measures of the Scale of Regulation
The Character of Regulatory Oversight Actions
Judicial Review of Regulatory Impact Analyses
What Do Regulators Maximize?
Capture Theory
Other Theories of Influence Patterns
Comprehensive Models of Regulatory Objectives
Conclusion
Questions and Problems
Appendix
Trends in Regulatory Agency Budgets and Staff
I: Antitrust
3. Introduction to Antitrust
Competition and Welfare
Welfare Tools
Monopoly versus Competition Example
Is the Compensation Principle Compelling?
Some Complications
X-Inefficiency
Monopoly-Induced Waste
Estimates of the Welfare Loss from Monopoly
Innovation: Monopoly versus Competition
Industrial Organization
Defining the Market and Market Power
Structure
Conduct
Performance
Government
Antitrust
Purpose and Design of Antitrust Laws
U.S. Federal Antitrust Law and Enforcement
Global Competition Law and Enforcement
Summary of the Chapter and Overview of Part I
Questions and Problems
Appendix: Excerpts from Antitrust Statutes
United States
European Union
China
4. Oligopoly, Collusion, and Antitrust
Game Theory
Example 1: Advertising Competition
Example 2: Compatibility of Standards
The Strategic Form of a Game
Nash Equilibrium
Oligopoly Theory
The Cournot Model
Other Models of Oligopoly
Product Differentiation
Collusion
A Theory of Collusion
Challenges to Collusion
Case Studies of Collusion
Railroads and Price Wars Due to Imperfect Monitoring
Nasdaq Market Makers and Price Transparency
Toy Stores and Hub-and-Spoke Collusion
Antitrust Law and Enforcement with Respect to Price Fixing
Fundamental Features of the Law
The Concept of an Agreement
Parallelism Plus
Legal Procedures
Enforcement Policy
Summary
Questions and Problems
Appendix
Game Theory: Formal Definitions
5. Market Structure and Dynamic Competition
Market Structure
Concentration
Entry Conditions
Sources of Concentration
Dynamic Competition
Limit Pricing
Investment in Cost-Reducing Capital
Raising Rivals’ Costs
Preemption and Brand Proliferation
Summary
Questions and Problems
6. Horizontal Mergers
Antitrust Laws and Merger Trends
The Effects of Horizontal Mergers
Why Firms Merge
Welfare Analysis
Merger Law and Enforcement
Merger Evaluation: Activity and Procedures
Development of Merger Law and Policy
Practices for Evaluating a Merger
International Issues
Summary
Questions and Problems
7. Vertical Mergers and Vertical Restraints
Vertical Mergers
Benefits
Anticompetitive Effects
Fixed Proportions
Variable Proportions
Antitrust Law and Policy
Cases
Vertical Restraints
Exclusive Dealing
Tying
Manufacturer-Retailer Restraints
Summary
Questions and Problems
8. Monopolization and Price Discrimination
Establishing Monopolization Claims
Measuring Monopoly Power
Assessing Intent to Monopolize
Development of Antitrust Case Law
1890–1940: Standard Oil and United States Steel
1945–1970: Alcoa and United Shoe Machinery
1970 to Present: Kodak, IBM, Microsoft, and Others
Predatory Pricing
Theories of Predatory Pricing
Efficiency Rationales
Antitrust Policy
Refusal to Deal and the Essential Facilities Doctrine
Essential Facilities Doctrine
Intellectual Property Rights
Kodak and Monopoly Power in Aftermarkets
Price Discrimination and the Robinson-Patman Act
Theory of Price Discrimination
Cases
Summary
Questions and Problems
9. Antitrust in the New Economy
Economic Fundamentals of the New Economy
Antitrust Issues in the New Economy
Network Effects
Markets with Network Effects
Microsoft
Big Data
Two-Sided Platforms
Prices at a Two-Sided Platform
Challenges in Antitrust Analysis
Industries with Rapid and Disruptive Innovation
Segal-Whinston Model
Summary
Questions and Problems
II: Economic Regulation
10. Introduction to Economic Regulation
What Is Economic Regulation?
Reasons for Regulation
Instruments of Regulation
Brief History of Economic Regulation
Formative Stages
Trends in Regulation
The Regulatory Process
Overview of the Regulatory Process
Regulatory Legislation
Independent Regulatory Commissions
Regulatory Procedures
Theory of Regulation
Normative Analysis as a Positive Theory
Capture Theory
Economic Theory of Regulation
Taxation by Regulation
Summary and Overview of Part II
Appendix
A Theory of Interest Group Competition
Questions and Problems
11. Alternatives to Regulation in the Market: Public Enterprise and Franchise Bidding, with an Application to Cable Television
Public Enterprise
Basic Elements of Franchise Bidding
Information Advantages of Franchise Bidding
Potential Drawbacks to Franchise Bidding
Contractual Arrangements for the Post-bidding Stage
Assessment of Franchise Bidding
Early Regulation of Cable Television
Cable Television as a Natural Monopoly
Technological Background
Economies of Density and Scale
Franchising Process
Assessment of Franchise Bidding
Rate Regulation
The Limits of Government Regulation
Competition among Suppliers of Video Services
Summary
Questions and Problems
12. Optimal Pricing
Subadditivity and Multiproduct Monopoly
Optimal Pricing Policies
Optimal Pricing of a Single Service
Linear Pricing
Nonlinear Pricing
Optimal Pricing of Multiple Services
Ramsey Pricing
Non-Ramsey Pricing of Telephone Services
Optimal Pricing in Two-Sided Markets
Rate Structure
FDC Pricing
Avoiding Inefficient Entry
Time of Use Pricing
Costs of Power Production
TOU Pricing Model
Summary
Questions and Problems
13. Incentive Regulation
Traditional Rate of Return Regulation
Rate Hearings
Averch-Johnson Effect
Prudence Reviews and Cost Disallowances
Regulatory Lag
Incentive Regulation
Price Cap Regulation
Earnings Sharing
Performance Based Regulation in the Electricity Sector
Regulatory Options
Yardstick Regulation
Summary
Questions and Problems
14. Dynamic Issues in Natural Monopoly Regulation: Telecommunications
Basis for Natural Monopoly Regulation
Sources of Natural Monopoly Transformation
Demand Side
Cost Side
Regulatory Response
Asymmetric Regulation and Cream-Skimming
Interstate Telecommunications Market
Regulatory Background
Transformation of a Natural Monopoly
Regulatory Policy in the Microwave Era
Regulated Monopoly to Regulated Competition
Regulated Competition to Unregulated Competition
Telecommunications Act of 1996
Net Neutrality
Internet Structure
The Meaning of Net Neutrality
Rationale for Net Neutrality
Summary
Questions and Problems
15. Regulation of Potentially Competitive Markets: Theory and Estimation Methods
Theory of Price and Entry/Exit Regulation
Direct Effects of Price and Entry/Exit Regulation: The Competitive Model
Direct Effects of Price and Entry/Exit Regulation: The Imperfectly Competitive Model
Indirect Effects of Price and Entry Regulation
Indirect Effects of Price and Exit Regulation
Regulation and Innovation
Methods for Estimating the Effects of Regulation
Overview of Estimation Methods
Intertemporal Approach
Intermarket Approach
Counterfactual Approach
Measuring the Effects of Price and Entry Restrictions: Taxicab Regulation
Regulatory History
Entry Restrictions
Value of a Medallion
The Rise of Ride Sharing
Summary
Questions and Problems
16. Economic Regulation of Transportation: Surface Freight and Airlines
Transportation Industry
Surface Freight Transportation
Regulatory History
Why Was Regulation Imposed?
Regulatory Practices
Effects of Regulation
Lessons from Regulation
Airlines
Regulatory History
Description of Regulatory Practices
Effects of Regulation
Competition and Antitrust Policy after Deregulation
Measuring Concentration in the Airline Industry
Anticompetitive Nonprice Practices
Anticompetitive Pricing Practices
Lessons from Regulation and Deregulation
Summary
Questions and Problems
17. Economic Regulation in the Energy Sector
Regulation in the Electricity Sector
Historical, Technological, and Regulatory Background
Restructuring in California
California Energy Crisis, 2000–2001
Effects of Restructuring in the Electricity Sector
Distributed Generation
Future Regulation in the Electricity Sector
Economic Regulation in the Oil Sector
Industry Background
Effects of Price Ceilings
Rationale for Restricting Domestic Oil Production
Restricting Oil Imports
Crude Oil Price Controls
Summary
Questions and Problems
18. Regulation in the Financial Sector
Role of the Financial Sector
Rationale for Regulation in the Financial Sector
Bank Runs
Bank Runs in Practice
Government Intervention and Regulation
Deposit Insurance
Restrictions on Banks’ Investments
Reserve Requirements
Limiting Competition among Banks
Ongoing Monitoring of Bank Activities
Historic Legislation in the Financial Sector
Federal Reserve Act of 1913
Banking Acts of 1933 (Glass-Steagall) and 1935
Gramm-Leach-Bliley Act of 1999
Depository Institutions Deregulation and Monetary Control Act of 1980
Securities and Exchange Act of 1934
Sarbanes-Oxley Act of 2002
Causes of the Great Recession
Rising Housing Prices
The Bubble Bursts
Crisis in the Financial Sector
Regulatory Reform: The Dodd-Frank Act of 2010
Limiting Systemic Risk to Avoid Future Financial Shocks
Banking Reform
Ending Bailouts of Firms That Are “Too Big to Fail”
Reducing Risks Posed by Securities
New Requirements and Oversight of Securities Rating Agencies
Increased Transparency and Accountability in Securities Markets
Summary
Questions and Problems
III: Health, Safety, and Environmental Regulation
19. Introduction to Health, Safety, and Environmental Regulation
The Emergence of Health, Safety, and Environmental Regulation
Risk in Perspective
Measuring Mortality Risks
Infeasibility of a No-Risk Society
Homeland Security
Wealth and Risk
Policy Evaluation
Regulatory Standards
Benefit-Cost Analysis
Role of Heterogeneity
Role of Political Factors
Summary and Overview of Part III
Questions and Problems
Recommended Reading
20. Valuing Life and Other Nonmonetary Benefits
Policy Evaluation Principles
Willingness-to-Pay versus Other Approaches
Variations in the Value of a Statistical Life
Labor Market Model
Empirical Estimates of the Value of a Statistical Life
Value of Risks to Life for Regulatory Policies
Survey Approaches to Valuing Policy Effects
Valuation of Air Quality
Supplementary Nature of the Survey Approach
Sensitivity Analysis and Cost Effectiveness
Risk-Risk Analysis
Summary
Questions and Problems
21. Environmental Regulation
The Coase Theorem for Externalities
The Coase Theorem as a Bargaining Game
Pollution Example
Long-Run Efficiency Concerns
Transaction Costs and Other Problems
Smoking Externalities
Special Features of Environmental Contexts
Siting Nuclear Wastes
Selecting the Optimal Policy: Standards versus Fines
Setting the Pollution Tax
Role of Heterogeneity
Standard Setting under Uncertainty
Pollution Taxes
Prices versus Quantities
Market Trading Policies
Netting
Offsets
Bubbles
Banking
The Expanding Role of Market Approaches
Cap and Trade in Action: The SO2 Allowance Trading System
Global Warming and Irreversible Environmental Effects
Policy Options for Addressing Global Warming
Social Cost of Carbon
Assessing the Merits of Global Warming Policies
How Should We React to Uncertainty?
Multiperson Decisions and Group Externalities
The Prisoner’s Dilemma
The N-Person Prisoner’s Dilemma
Applications of the Prisoner’s Dilemma
Enforcement and Performance of Environmental Regulation
Enforcement Options and Consequences
Hazardous Wastes
Contingent Valuation for the Exxon Valdez Oil Spill
Senior Discount for the Value of a Statistical Life
Evaluating Performance
Summary
Questions and Problems
22. Product Safety
Emergence of Product Safety Regulations
Current Safety Decisions
Consumer Complaints
Factors Affecting Producer and Consumer Actions
Product Performance and Consumer Actions
Changing Emphasis of Product Regulation
Premanufacturing Screening: The Case of Pharmaceuticals
Weighing the Significance of Side Effects
Drug Approval Strategies
Accelerated Drug Approval Process
Behavioral Response to Product Safety Regulation
Consumer’s Potential for Muting Safety Device Benefits
The Lulling Effect
Effect of Consumer’s Perception of Safety Device Efficacy
Costs of Product Safety Regulation: The Automobile Industry Case
Trends in Motor Vehicle and Home Accident Deaths
Accident Rate Influences
The Decline of Accident Rates
The Rise of Product Liability
Negligence Standard
Strict Liability Standard
Tracing Accident Costs and Causes
The Ford Pinto Case
Escalation of Damages
Risk Information and Hazard Warnings
Self-Certification of Safe Products
Government Determination of Safety
Alternatives to Direct Command and Control Regulation
Regulation through Litigation
Breast Implant Litigation and Regulation
Summary
Questions and Problems
23. Regulation of Workplace Health and Safety
Potential for Inefficiencies
How Markets Can Promote Safety
Compensating Wage Differential Theory
Risk Information
On-the-Job Experience and Worker Quit Rates
Inadequacies in the Market
Informational Problems and Irrationalities
Segmented Markets and the Failure of Compensating Differentials
Externalities
OSHA’s Regulatory Approach
Setting OSHA Standard Levels
The Nature of OSHA Standards
Reform of OSHA Standards
Regulatory Reform Initiatives
Changes in OSHA Standards
Chemical Labeling
Economic Role of Hazard Warnings
Effective Hazard Warnings
Innovations in OSHA Regulation
OSHA’s Enforcement Strategy
Inspection Policies
Trivial Violations
OSHA Penalties
Enforcement Targeting
Impact of OSHA Enforcement on Worker Safety
OSHA Regulations in Different Situations
OSHA and Other Factors Affecting Injuries
Determining OSHA’s Impact on Safety
Mixed Opinions Regarding OSHA’s Impact
Role of Workers’ Compensation
Summary
Questions and Problems
24. Behavioral Economics and Regulatory Policy
Prospect Theory: Loss Aversion and Reference Dependence Effects
Prospect Theory: Irrationality and Biases in Risk Perception
Role of Risk Ambiguity
Examples of Uncertainty and Conservatism
Intertemporal Irrationalities
Energy Regulations and the Energy Efficiency Gap
Making Decisions
Behavioral Nudges
The Behavioral Transfer Test
Summary
Questions and Problems
Index

ECONOMICS OF REGULATION AND ANTITRUST
Fifth Edition

W. Kip Viscusi
Joseph E. Harrington, Jr.
David E. M. Sappington

The MIT Press
Cambridge, Massachusetts
London, England

© 2018 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

This book was set in Times Roman by Westchester Publishing Services. Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data
Names: Viscusi, W. Kip, author. | Harrington, Joseph Emmett, 1957- author. | Sappington, David Edward Michael, author.
Title: Economics of regulation and antitrust / W. Kip Viscusi, Joseph E. Harrington, Jr., and David E. M. Sappington.
Description: Fifth edition. | Cambridge, MA : MIT Press, [2018] | Includes bibliographical references and index.
Identifiers: LCCN 2017056198 | ISBN 9780262038065 (hardcover : alk. paper)
Subjects: LCSH: Industrial policy–United States. | Trade regulation–United States. | Antitrust law–United States.
Classification: LCC HD3616.U47 V57 2018 | DDC 338.973–dc23
LC record available at https://lccn.loc.gov/2017056198



List of Illustrations

Figure 2.1  The Regulatory Management Process
Figure 2.2  Benefit-Cost Analysis of Environmental Quality Control
Figure 2.3  Marginal Analysis of Environmental Policies
Figure 2.4  Number of Final Economically Significant Rules Published by “Presidential Year”
Figure 2.5  Federal Register Pages Published, 1936–2014
Figure 2.6  Trends in Code of Federal Regulation Pages, 1950–2014
Figure 3.1  Demand and Supply Curves in the Determination of Economic Surplus
Figure 3.2  Monopoly versus Competition
Figure 3.3  Real Family Income as a Percentage of 1973 Level
Figure 3.4  Economies of Scale and Natural Monopoly
Figure 3.5  Equilibrium of a Monopolistic Competitor
Figure 3.6  Incentives to Innovate in Monopoly and Competition: Minor Innovation Case
Figure 3.7  Incentives to Innovate in Monopoly and Competition: Major Innovation Case
Figure 3.8  The Structure-Conduct-Performance Paradigm of Industrial Organization
Figure 3.9  Product Differentiation in the Beer Market
Figure 3.10  Growth in Processor Performance Compared to the VAX 11/780 Benchmark, 1978–2010
Figure 3.11  Welfare Effects of the Merger of Standard Propane and ICG Propane
Figure 3.12  Benefits (A2) and Costs (A1) to Society of Merger
Figure 3.13  Number of Countries with Competition Laws
Figure 4.1  Payoff Matrix for Two Competing Firms: Advertising
Figure 4.2  Payoff Matrix for Two Competing Firms: Beta versus VHS
Figure 4.3  Monopoly Solution
Figure 4.4  Cournot Model: Profit Maximization by Firm 1
Figure 4.5  Cournot Model: Firm 1’s Best Reply Function
Figure 4.6  The Cournot Solution
Figure 4.7  Differentiated Products Price Game
Figure 4.8  Profits from Colluding versus Cheating
Figure 4.9  Preferred Collusive Prices of Duopolists Sharing the Market Equally
Figure 4.10  Selecting a Stable Collusive Outcome When Firms Have Different Costs
Figure 4.11  List and Contract Prices of Citric Acid, 1987–1997
Figure 4.12  Cartel Pricing of Rail Rates for Shipment of Grain from Chicago to the Atlantic Seaboard, 1880–1886
Figure 4.13  Daily Average Inside Spreads for Microsoft, January 1, 1993–July 29, 1994
Figure 4.14  Sustaining Collusion: Number of Firms and Discount Rate
Figure 4.15  Hub-and-Spoke Collusion in the Toy Manufacturing and Retailing Industries
Figure 4.16  Calculating Damages in a Price-Fixing Case
Figure 4.17  The Corporate Leniency Game
Figure 5.1  Concentration Curves for Industries X and Y
Figure 5.2  Effect of the Cost of Entry on the Free-Entry Equilibrium Number of Firms
Figure 5.3  Scale Economies as a Barrier to Entry?
Figure 5.4  Residual Demand Curve under the Bain-Sylos Postulate
Figure 5.5  Determination of the Limit Price: Bain-Sylos Model
Figure 5.6  Effect of the Preentry Output on the Postentry Equilibrium with Adjustment Costs
Figure 5.7  Airline Route with a Potential Entrant
Figure 5.8  Incumbent Firm’s Marginal Cost Curve
Figure 5.9  Modified Dixit Game
Figure 6.1  Characteristics of the Six U.S. Merger Waves
Figure 6.2  Value of Assets as a Percentage of Gross Domestic Product, 1895–1920 and 1968–2001
Figure 6.3  Average Retail Prices of Flagship Brand Twelve-Packs
Figure 6.4  Merger among Multimarket Firms
Figure 6.5  Analysis of Cost Savings from a Merger
Figure 6.6  Social Benefits (A2) and Costs (A1) of a Horizontal Merger—Perfect Competition in Premerger Stage
Figure 6.7  Social Benefits (B2) and Costs (B1) of a Horizontal Merger—Imperfect Competition in Premerger Stage
Figure 6.8  Geographic Market Definition
Figure 6.9  Merger Enhances the Incentive to Raise Price
Figure 6.10  Upward Pricing Pressure
Figure 7.1  Successive Monopoly: Premerger and Postmerger
Figure 7.2  Vertical Monopolization with Fixed-Proportions Production
Figure 7.3  Potential Cost Savings, MN, from Vertical Integration with Variable-Proportions Production
Figure 7.4  (a) Pre-Vertical Integration; (b) Post-Vertical Integration
Figure 7.5  Enforcement Actions in Vertical Mergers by Presidential Administration, 1994–2015
Figure 7.6  Effect of Exclusive Dealing on Profit and Welfare
Figure 7.7  Potential Profit Not Captured through Single Monopoly Price
Figure 7.8  Demand for Copying Services with Consumer Surpluses for Zero-Price Case
Figure 7.9  Tying Solution: Maximizing Profit
Figure 7.10  An Explanation for RPM: Shifting Out Demand
Figure 8.1  Monopoly Equilibrium
Figure 8.2  Vertically Related Stages of Aluminum Industry
Figure 8.3  (a) Pre-entry and (b) Postentry Situations with Normal Competition
Figure 8.4  Postentry Situation with Possible Predation
Figure 8.5  Profit Pattern under Predatory Pricing
Figure 8.6  Region Showing Predatory Prices under ATC Rule That Are Not Predatory under Areeda-Turner Rule
Figure 8.7  (a) Net Surplus to a Consumer from Equipment and Service (b) Change in Demand for Service in Response to a Change in the Price of Service
Figure 8.8  Price Discrimination That Decreases Total Surplus
Figure 8.9  Price Discrimination That Increases Total Surplus
Figure 9.1  Network Externalities
Figure 9.2  Demand for a Product with Network Externalities
Figure 9.3  Firm 1’s Current and Future Profits
Figure 9.4  Optimal Price of Firm 1 Depending on Firms’ Installed Bases
Figure 9.5  Average Direction of Future Installed Bases Depending on Firms’ Current Installed Bases
Figure 9.6  Microsoft’s Share of the Browser Market (Three-Month Moving Average of Usage by ISP Category)
Figure 9.7  Feedback Loop with Big Data in Online Services
Figure 9.8  User Groups Interact through a Two-Sided Platform
Figure 9.9  (a) Buyer’s Value from the Platform (b) Seller’s Value from the Platform
Figure 9.10  (a) Change in Buyer’s Value from the Platform (b) Change in Seller’s Value from the Platform
Figure 9.11  Higher Marginal Revenue for User Group 1 from Network Effects on User Group 2
Figure 9.12  Benefit and Supply of Innovation
Figure 9.13  Change in Innovation Rate and Innovation Prize
Figure 10.1  Cost Curves of a Natural Monopoly
Figure 10.2  Temporary Natural Monopoly
Figure 10.3  Number of Economic Regulatory Legislative Acts
Figure 10.4  Optimal Regulatory Policy: Peltzman Model
Figure 10.5  Political Equilibrium: Becker Model
Figure 10.6  Deregulation of Restrictions on Intrastate Bank Branching
Figure 11.1  Prices Set by a Public Enterprise and a Profit-Maximizing Firm
Figure 11.2  Franchise Bidding Using a Modified English Auction
Figure 11.3  Franchise Bidding under a Proportional Franchise Fee
Figure 11.4  Franchise Bidding at Renewal Time
Figure 11.5  Physical Design of a Cable System
Figure 11.6  Average Total Cost for Cable Television Given Fixed Plant Size (1982 dollars)
Figure 11.7  Average Total Cost for Cable Television Given Fixed Market Penetration (1982 dollars)
Figure 11.8  Real Cable Service Rates, 1984–1995
Figure 11.9  Welfare Analysis of Higher Cable Rates and More Cable Channels
Figure 12.1  Economies of Scale, up to Output Q′
Figure 12.2  Minimum Average Cost Curve for Two Firms, AC2
Figure 12.3  Sustainable Natural Monopoly, up to Output Q0
Figure 12.4  Marginal Cost Pricing Can Cause Financial Losses
Figure 12.5  Natural Monopoly with Costs Exceeding Benefits
Figure 12.6  Welfare Loss with Average Cost Pricing
Figure 12.7  Multipart Tariff for Local Telephone Service
Figure 12.8  (a) Proportionate Price Increase versus (b) Ramsey Pricing
Figure 12.9  Ramsey Pricing for (a) Product X and (b) Product Y
Figure 12.10  Average Daily Load Curve for Electricity
Figure 12.11  Short-Run Marginal Cost Curve for Electricity Supply
Figure 12.12  Time-of-Use Pricing
Figure 12.13  Deadweight Losses Due to Nonpeak Pricing
Figure 12.14  Time-of-Use Pricing with Shared Capacity Costs
Figure 13.1  Averch-Johnson Effect versus Least-Cost Production
Figure 13.2  California Public Utilities Commission’s Earnings Sharing Plan
Figure 14.1  Natural Monopoly
Figure 14.2  Efficient Market Structure with a Flat-Bottomed Average Cost Curve
Figure 14.3  Effect of Change in Fixed Costs on the Efficient Market Structure
Figure 14.4  Effect of Change in Variable Costs on the Efficient Market Structure
Figure 14.5  Cross-Subsidization and Cream-Skimming
Figure 14.6  Potential Transformation of a Natural Monopoly
Figure 14.7  Economies of Scale for Interstate Telecommunication Services
Figure 14.8  Structure of the Internet
Figure 15.1  Effects of Price and Entry Regulation: Competitive Model
Figure 15.2  Second-Best Effects of Price Regulation on Productive Efficiency
Figure 15.3  Effects of Price and Entry Regulation: Imperfectly Competitive Model
Figure 15.4  Effects of Price Regulation on Nonprice Competition
Figure 15.5  Cross-Subsidization
Figure 15.6  Effects of Regulatory Lag on Adoption of Innovations
Figure 15.7  Average Commission Rates on the New York Stock Exchange, 1975–1980
Figure 15.8  Effects of a Usury Ceiling
Figure 15.9  Price of a New York City Taxicab Medallion, 1960–2014
Figure 15.10  Welfare Gains from Surge Pricing
Figure 15.11  Driver Supply Increases to Match Spike in Demand
Figure 15.12  Signs of Surge Pricing in Action, March 21, 2015
Figure 15.13  Signs of a Surge Pricing Disruption on New Year’s Eve, January 1, 2015
Figure 16.1  Average Real Revenue per Ton-Mile for Railroads
Figure 16.2  Average Real Revenue per Ton-Mile for Trucking
Figure 16.3  Performance in the U.S. Railroad Industry, before and after the Staggers Act
Figure 16.4  Bankruptcies among Trucking Firms
Figure 16.5  Percentage Change in Airfares by Distance, Adjusted for Inflation
Figure 16.6  Load Factors and Distance
Figure 16.7  A Simple Hub-and-Spoke System
Figure 16.8  Adoption of the Hub-and-Spoke System by Western Airlines
Figure 16.9  Number of Fatal Accidents in the U.S. Airline Industry, 1950–2014
Figure 16.10  Domestic Competition at the Route Level
Figure 16.11  Evaluation of Postentry Pricing for Predation
Figure 16.12  Price Premiums at the Ten Largest Hub Airports, 1986–2010
Figure 17.1  California Wholesale Day-Ahead Price of Electricity, April 1998–April 2002
Figure 17.2  Strategic Withholding of Electricity
Figure 17.3  Status of Electricity Restructuring by U.S. State, September 2010
Figure 17.4  U.S. Average Retail Electricity Prices and Natural Gas Prices, 1990–2012
Figure 17.5  Distributed Generation of Electricity around the World
Figure 17.6  Effects of a Price Ceiling
Figure 17.7  Effects of a Price Ceiling with Random Allocation
Figure 17.8  Effect of Oil Prorationing on the Extraction Rate
Figure 17.9  Equilibrium under the Mandatory Oil Import Program
Figure 17.10  Equilibrium under Crude Oil Price Controls
Figure 18.1  U.S. Housing Prices, 2002–2009
Figure 18.2  Default Rates on First Mortgages, 2004–2011
Figure 18.3  Real U.S. Gross Domestic Product, 2007–2011
Figure 19.1  Civil Liberties/Terrorism Loss Tradeoff
Figure 19.2  Age-Adjusted Death Rates, United States, 1910–2014, by Class of Injury
Figure 20.1  Worker’s Constant Expected Utility Locus for Wages and Risk
Figure 20.2  Derivation of Market Offer Curve
Figure 20.3  Equilibrium in the Market for Risky Jobs
Figure 21.1  Changes in Water Usage as a Function of Pollution
Figure 21.2  Technology-Based Standard Setting
Figure 21.3  Market Equilibrium versus Social Optimum
Figure 21.4  Differences in Control Technologies and Efficiency of Pollution Outcome
Figure 21.5  Standard Setting with Uncertain Compliance Costs
Figure 21.6  Setting the Optimal Pollution Penalty
Figure 21.7  Regulation When the Absolute Value of the Slope of the Marginal Cost Curve Exceeds the Absolute Value of the Slope of the Marginal Benefit Curve
Figure 21.8  Regulation When the Absolute Value of the Slope of the Marginal Benefit Curve Exceeds the Absolute Value of the Slope of the Marginal Cost Curve
Figure 21.9  EPA Bubble Policy Standard for Total Emissions
Figure 21.10  U.S. Sulfur Dioxide Allowance Prices, 1994–2010
Figure 21.11  Establishing the Optimal Global Warming Policy
Figure 21.12  Irreversible Environmental Decisions
Figure 21.13  The Multiperson Prisoner’s Dilemma
Figure 22.1  Product Safety Decision Process
Figure 22.2  FDA Risk Balancing
Figure 22.3  Cost Tradeoffs in Premarket Testing
Figure 22.4  Relationship of Driving Intensity to the Regulatory Regime
Figure 22.5  Choice of Driving Speed
Figure 22.6  Safety Mechanisms and the Choice of Precautions
Figure 23.1  The Policy Frontier for Job Safety
Figure 23.2  Determination of Market Levels of Safety
Figure 23.3  Market Offer Curves for Immigrants and Native U.S. Workers
Figure 23.4  OSHA Standard Setting versus Efficient Standard Setting
Figure 23.5  Payoffs for Safety Investments for a Marginal Firm
Figure 23.6  Payoffs for Safety Investment for a Heterogeneous Group of Firms
Figure 23.7  Death Rate Trends for Job-Related Accidents
Figure 23.8  Statistical Tests for the Effect of OSHA
Figure 24.1  Preferences Based on the Prospect Theory Model
Figure 24.2  Perceived versus Actual Mortality Risks
Figure 24.3  USDA’s MyPyramid
Figure 24.4  USDA’s MyPlate
Figure 24.5  Environmental Protection Agency–Department of Transportation Fuel Economy Label

List of Tables

Table A  Suggested Course Outlines
Table 2.1  Discounting Example
Table 2.2  Benefits and Costs of Major Rules, 2005–2014
Table 2.3  Total Benefits and Costs of Major Rules, by Agency, 2005–2014
Table 2.4  Spending Summary for Federal Agencies (Fiscal Years, Millions of 2009 Dollars in Outlays)
Table 2.5  Types of Action Taken by the OMB Regulatory Oversight Process on Agency Rules, 1985–2015 (Percent of Proposed Rules)
Table A.1  Staffing Summary for Federal Regulatory Agencies, Selected Years*
Table A.2  Spending on Federal Regulatory Activity, by Agency, Selected Fiscal Years (Millions of 2015 Dollars)
Table 3.1  U.S. Brewing Companies, 1947–2005
Table 4.1  Sales Quotas for the Lysine Cartel (tons per year)
Table 4.2  Highest Cartel Fines, United States
Table 4.3  Highest Cartel Fines, European Union
Table 5.1  Percentage of Sales Accounted for by the Five Leading Firms in Industries X and Y
Table 5.2  Concentration of Selected Industries
Table 5.3  Minimum Efficient Scale of Plants and Firms as Percentage of U.S. National Market, 1967
Table 6.1  Market Shares and HHI based on Revenue, United States
Table 6.2  Percentage Cost Reduction Sufficient to Offset Percentage Price Increases for Selected Values of Elasticity of Demand
Table 6.3  Transactions under the Hart-Scott-Rodino Premerger Notification Program, 2002–2015
Table 6.4  Reduction in Marginal Cost for Upward Pricing Pressure to Equal Zero (%)
Table 6.5  Average Price Differential for Different Office Superstore Market Structures
Table 6.6  Premerger Market for French Bottled Water (million liters)
Table 6.7  Postmerger Scenarios in the Market for French Bottled Water (million liters)
Table 6.8  Chronology of the Boeing–McDonnell Douglas Merger
Table 7.1  Consumer Valuations and Profit of an Integrated Monopolist
Table 7.2  Cable Program Service Ownership Interests, 1996a
Table 7.3  Worldwide Intel and AMD Market Shares in x86 PCs, 2002–2006 (%)
Table 8.1  Timeline for Eastman Kodak Co. v. Image Technical Services, Inc.
Table 9.1  Examples of Two-Sided Platforms
Table 10.1  Cost and Time of Government Requirements for Entry: Best Twelve and Worst Twelve Countries
Table 10.2  Major Economic Regulatory Legislation, 1887–1940
Table 10.3  Major Economic Deregulatory Initiatives, 1971–1989
Table 10.4  Major Federal Economic Regulatory and Deregulatory Initiatives, 1990–2010
Table 10.5  Major Federal Economic Regulatory Commissions
Table 11.1  Cable Television Industry Growth, 1970–2014
Table 11.2  Politically Imposed Costs of Franchise Monopoly (dollars per month per subscriber)
Table 11.3  Deviation between Terms of Initial Contract and Renewal Contract
Table 11.4  Multichannel Video Subscribers, as of December 2013
Table 12.1
Table 13.1  Number of U.S. States Employing the Identified Form of Regulation, various years
Table 14.1  Internet Usage, as of June 2016
Table 14.2  Percentage of U.S. Households by Type of Internet Subscription, 2013
Table 15.1  Profit and Welfare under Cournot Competition
Table 15.2  Commission Rate, Cost, and Profit on $40 Stock by Order Size, 1968
Table 16.1  Modal Shares of Intercity Freight Ton-Miles, Selected Years, 1929–2010 (%)
Table 16.2  Market Shares of Sampled Manufactured Goods (%)
Table 16.3  Index of Rail Car Loadings of Various Types of Traffic (1978 = 100)
Table 16.4  Overall Productivity Growth for U.S. and Canadian Railroads (average annual %)
Table 16.5  Differential between Intrastate Fares in California and Interstate Fares, 1972 (¢)
Table 16.6  Comparison of Interstate and Intrastate Fare Levels in Selected Texas Markets, December 1, 1975 ($)
Table 16.7  Consumers’ Willingness to Pay for Better Airline Service ($/Round Trip)
Table 16.8  Weighted Average Change in Fares and Frequency from Deregulation, 1977
Table 16.9  Annual Welfare Gains from Deregulation ($ billion)
Table 17.1  Sample Electricity Bill in California, 1998
Table 17.2  Welfare Losses from Oil Price Controls, 1975–1980 (millions of 1980 dollars)
Table 19.1  Causes of Death in the United States, 2013
Table 19.2  Rank Orders of Mortality Risks for Major Conditions
Table 19.3  Risks That Increase the Annual Death Risk by One in 1 Million
Table 19.4  Attitude toward Use of Terrorism Risk Profiles
Table 19.5  Cost-per-Life Values for Arsenic Regulation
Table 19.6  Industry Cost Variations for the OSHA Noise Standard
Table 19.7  Determinant of Patterns of Congressional Voting on Environmental Issues
Table 19.8  Factors Affecting Voting Patterns for Strip-Mining Regulation
Table 20.1  Relation of Survey Responses to the Value of a Statistical Life
Table 20.2  Average Worker Fatality Rates by Occupation, 2014
Table 20.3  Labor Market Estimates of the Value of a Statistical Life, Various Countries
Table 20.4  Selected Values of Statistical Life Used by U.S. Regulatory Agencies
Table 20.5  Costs of Various Risk-Reducing Regulations per Expected Life Saved
Table 20.6  Cost-Effectiveness Measures for Hazard Communication Standard
Table 21.1  Coase Theorem Bargaining Game
Table 21.2  Property Rights Assignment and the Bargaining Outcome
Table 21.3  External Insurance Costs per Pack of Cigarettes with Tar Adjustments
Table 21.4  Summary of Superfund Cost Effectiveness
Table 21.5  Age Group Effects on the Benefits from the Clear Skies Initiative
Table 21.6  National Pollution Emission Trends
Table 22.1  Type I and Type II Errors for New Drug Applications
Table 22.2  The Reagan Administration’s Auto Reform Package
Table 22.3  Trends in General Liability Insurance Premiums
Table 22.4  Summary of Social Costs for Product Safety
Table 22.5  Ford Estimates of Benefits and Costs for Gas Tank Design
Table 22.6  Group Standings on Proposal to Impose Limits on Damages Awards for Pain and Suffering
Table 22.7  Markets with Imperfect Information: Lemon Markets for Risky Cars
Table 22.8  Perceived Risks for Smokers of Cigarettes and of E-Cigarettes
Table 22.9  Effects of Drain Opener Labels on Precaution Taking (%)
Table 22.10  Timeline of Critical Breast Implant Events
Table 23.1  Workers’ Response to Chemical Labeling
Table 24.1  Framing Effects for Policies for 600 Infected People
Table 24.2  EPA’s Costs, Benefits, and Net Benefits of the CAFE Rule

Preface to the Fifth Edition

Regulation and antitrust are key elements of government policy. Regulatory and antitrust policies affect virtually all aspects of our lives, ranging from the quality of the air we breathe to the prices we pay for a vast variety of commodities. Regulatory and antitrust policies have undergone substantial change in recent decades. In particular, substantial deregulation has been implemented in several industries, as the forces of competition have been given expanded reign to determine industry outcomes. However, some forms of regulation have expanded, including regulations pertaining to homeland security and the environment. The vibrancy of regulatory and antitrust policies is reflected in recent economic events. Substantial deregulation has occurred in portions of the transportation and energy sectors, but the specter of increased “net neutrality” regulation prevails in the communications sector. The challenges posed by climate change also ensure that this regulatory issue will be on the policy agenda throughout this century and beyond. The intensity of prosecution of cartels has remained high in the United States and has expanded globally. However, U.S. competition authorities allowed horizontal mergers that have increased concentration in such industries as airlines, beer, financial services, hospitals, and petroleum. The U.S. Supreme Court continued its expansion of the rule of reason with the Leegin decision and altered the landscape for the private enforcement of antitrust laws with its Twombly decision. In addition, the growing presence of the New Economy has raised new antitrust issues, as exemplified by investigations into Google’s operations. The emerging character of regulatory and antitrust policies has been accompanied by an intellectually vibrant economic literature. Using such modern tools as game theory, economists have developed important means to assess when government intervention in the marketplace is warranted. Our understanding of when government action to limit apparent market power is appropriate has changed considerably in recent decades. Economists have also developed new methodologies to deal with emerging health, safety, and environmental regulations. For many years, regulatory efforts in these areas were fairly limited, and the economic literature addressing these issues was similarly underdeveloped. This book conveys the basic principles that should guide economic regulation in this and other areas, as well as the most salient aspects of these policies. While the principles are often illustrated with examples from U.S. experience, the principles apply quite generally. Traditional textbooks on business and government activity typically focus on the details of regulatory and antitrust policies and explain how the policies operate in practice. Economics of Regulation and Antitrust has taken a different approach in the past and continues to do so in the present edition. Rather than start with the institutions of regulatory and antitrust policies, we begin by assessing the relevant underlying economic issues. We assess the particular market failures that can justify a role for government intervention. We explain how economic theory can illuminate the key elements of market activity, the appropriate role for government action, and the best form of government intervention. We also examine what formal empirical analyses of economic behavior and the effects of government intervention indicate about the direction that

this intervention should take. To provide the most up-to-date assessment of these important issues, we base our analysis on new developments in economic theory and empirical analysis that have been specifically devised to further our understanding of the optimal design of regulatory and antitrust policies. Although this book emphasizes economic principles, it also provides substantial institutional detail. This text includes extensive case studies of major areas of regulatory and antitrust policies, including detailed reviews of government policy regarding specific mergers and regulations in the communications, transportation, energy, and financial sectors. We discuss essential aspects of these regulations and their performance, but our intent is not simply to provide a list of case names, government rules, and other institutional details. Instead our goal is to provide both relevant observations about current antitrust and regulatory policy today and pertinent insights regarding the economic tools that can be employed to analyze the implications of regulatory and antitrust policies decades from now. Future policies may differ substantially from those that prevail today. It is the economic framework we employ to analyze these issues that will be of lasting value. Much has happened in regulation and antitrust since the fourth edition of this book was published thirteen years ago. The addition of David Sappington as a coauthor, who replaces the late John Vernon, is indicative of the fresh perspective that is incorporated in the fifth edition. We remain indebted to John Vernon, who played a leadership role in launching this book and in contributing to the previous editions. While this new edition incorporates major revisions and additional topics, we have sought to maintain the same expositional approach throughout the book that has led previous editions to be so well received. The regulatory overview material in chapters 1 and 2 now include treatment of regulatory oversight efforts through the beginning of the Trump administration as well as analyses of regulatory oversight and regulatory agency statistics through the end of the Obama administration. Part I of this book has undergone extensive revision. The updates and additions include coverage of global competition law and enforcement with a particular focus on the activities of the European Commission. Chapter 4 now includes investigations into price fixing in the markets for lysine and text messaging, and the use of hub-and-spoke collusion in the London toy retailing market. There is also a new section on U.S. legal procedures that includes discussion of the key Twombly decision by the Supreme Court. Empirical analyses of limit pricing in airlines and strategic capacity expansion in gambling casinos have been added to the material on dynamic competition (chapter 5). Chapter 6 on horizontal mergers provides extensive treatment of upward pricing pressure, which is a recent advance for evaluating the unilateral price effects of horizontal mergers. We have expanded coverage of coordinated effects with attention to mergers in the U.S. beer industry and the market for French bottled water. In chapter 7 the Supreme Court’s landmark Leegin decision concerning resale price maintenance is reviewed and “contracts that reference rivals,” which represent a partial form of exclusive dealing. Chapter 7 also covers vertical mergers for which two mergers have been added: Comcast–NBC Universal and General Electric–Honeywell. 
In the section on intellectual property rights in chapter 8, reverse payments and pay-to-delay in the pharmaceutical industry have been added. Finally, the most significant addition to part I is chapter 9: “Antitrust in the New Economy.” After discussing the fundamentals of New Economy industries and the primary antitrust concerns that they pose, there is an extensive economic analysis of markets with network effects along with a discussion of the three Microsoft cases. After covering markets with Big Data, an in-depth analysis of two-sided platforms is provided with a look into the antitrust investigations of Google by the U.S. Federal Trade Commission and the European Commission. Chapter 9 concludes with an economic analysis of markets with rapid and drastic innovation, which pose a serious challenge to creating an effective antitrust policy.

Part II, dealing with economic regulation, has also been substantially restructured and rewritten. Chapter 11 combines and updates the discussions of public enterprise and franchise bidding. Chapter 12 provides a self-contained treatment of the optimal design of regulated pricing structures. The chapter includes discussions of Ramsey pricing, time-of-use pricing, nonlinear pricing, and pricing in two-sided markets. Chapter 13 is devoted exclusively to incentive regulation, noting its implementation in practice and its potential advantages and disadvantages relative to traditional rate of return regulation. Chapter 13 also explains the potential merit of affording the regulated enterprise a choice among regulatory policies. Chapter 14 expands the earlier analysis of economic regulation in transitioning markets with a new discussion of issues related to net neutrality. Chapter 15 extends the earlier treatment of regulation in potentially competitive markets in part by assessing the impact of ride sharing on the taxicab industry. Chapter 16 provides updated treatments of regulation in the surface freight and airline industries. Chapter 17 provides a refocused analysis of regulation in the energy sector, concentrating more on recent developments in the electricity sector and less on historical activity in the natural gas and oil sectors. Part II concludes with a new chapter on regulation in the financial sector. Chapter 18 includes a discussion of the causes of the financial crisis of 2007–2008 and the policies that have been implemented to limit the likelihood of similar crises in the future—policies that include legislation like the Dodd-Frank Act of 2010.

Part III, on social regulation, begins in chapter 19 with an overview of the economic frameworks used to analyze these regulations, including an expanded treatment of homeland security regulations. Chapter 20 examines the value of a statistical life, which is the principal driver of benefits of health, safety, and environmental regulations. The present edition includes a detailed summary of the values of a statistical life that different federal agencies have used over the past three decades. The most pronounced changes can be found in the discussion of environmental regulation in chapter 21; extensive changes are appropriate because new environmental regulation continues to be the most costly regulatory effort in the developed world. Chapter 21 now includes additional methodological treatment of the use of prices versus quantities to address environmental problems. The most important policy-related addition is the expanded treatment of climate change policies, including an exploration of the social cost of carbon, the Stern Review of climate change risks, and critiques of the Stern Review. The treatment of product safety regulation in chapter 22 now includes an analysis of the role of safety and efficacy in the Food and Drug Administration’s approval of new drugs. The chapter’s discussion of offsetting behavioral responses to regulations, such as seat belts, also draws on more recent empirical evidence regarding the likely magnitude of the effect. Chapter 23 now includes a discussion of the OSHA enforcement strategy through the Obama administration as well as a new comprehensive treatment of the empirical studies of the impact of occupational safety regulation on job risks. Chapter 24 on behavioral economics is an entirely new addition to the book.
Regulatory agencies in the United States and the United Kingdom are increasingly relying on insights from behavioral economics to identify market failures and, in some cases, to assist in the design of regulatory strategies, such as behavioral nudges. This chapter provides an introduction to the principal concerns that have played a role in regulatory policies. Examples of policy analyses in which such issues have been influential include economic analyses of the energy efficiency gap and of the dominant role of behavioral factors in the justification of fuel economy standards. Chapter 24 also explores criteria for how behavioral insights should be incorporated in regulatory analyses.

To fully appreciate the analysis in this book, successful completion of an introductory price theory course is recommended. With this background, students will find most of the theoretical and empirical material in the book to be readily accessible. In some cases, the discussion advances to a level at which some background in intermediate microeconomic theory is desirable. However, these more difficult sections can be skipped. The presentation of the more advanced material is self-contained, does not involve the use of calculus, and is incorporated in chapters in such a way that it can easily be omitted by an instructor who prefers a different course emphasis.

Earlier editions of this book have been used to teach undergraduate, business, law, and public policy students. The entire book is seldom covered in any one course. To ensure that the book is compatible with a wide variety of interests and instructional needs, it has been structured so it can be readily employed in many different contexts.

Organization of the Book

Economics of Regulation and Antitrust consists of two introductory chapters, followed by three parts. The initial chapters set the stage for the remainder of the book. They introduce some of the overriding issues, such as ascertaining the objectives of government regulators and considering the appropriate division of labor between the states and the federal government. The following three parts of the book present the core of the analytical material. Part I focuses on antitrust policy, part II deals with economic regulation, and part III focuses on social regulation.

Each of these parts is structured in a similar manner. The first chapter of each part provides an overview of the key economic issues and the pertinent methodology that will be employed. We discuss the principal market failures in this context, and how economic analysis is used to address them. The first chapter in each part is best viewed as essential reading. The instructor can then select which of the subsequent case studies to discuss in detail. Chapters that require the student to have read another chapter in that part, other than the introductory chapter, are noted in the following paragraphs. Otherwise, chapters in a part can be assigned in whatever order the instructor wishes. The instructor can also choose not to cover every chapter.

Part I, which focuses on antitrust policy, includes a healthy dose of the analytical tools of modern industrial organization. Chapter 3 is an introductory overview of industrial organization, antitrust policy, and welfare analysis. It offers essential background for chapters 4–9. Chapter 4, on oligopoly and collusive pricing, is novel in introducing oligopoly through a game-theoretic approach and then relating the theoretical models to antitrust cases. The discussion of market structure, entry, and dynamic competition (chapter 5) is mostly analytical; it can be skipped by instructors under time pressure in courses with a primary focus on antitrust cases. The remaining four chapters—horizontal mergers (chapter 6), vertical mergers and restraints (chapter 7), monopolization and price discrimination (chapter 8), and antitrust for New Economy industries (chapter 9)—are stand-alone chapters that can be assigned or not, depending on the instructor’s preference.

Part II addresses the role of economic regulation. As evidenced by the dozen or so case studies in this part, economic regulation has played an integral role in the U.S. economy. Although there has been substantial deregulation of airlines, trucking, and long-distance telephone services over time, the debate over appropriate regulatory policy and potential reregulation remains active in many industries, including the communications and energy sectors. Chapter 10 provides an overview of economic regulation, explaining its historical development and summarizing key elements of current regulatory practice. The chapter also reviews the efforts of social scientists to explain the incidence and the nature of government regulation in the marketplace.
Chapter 11 briefly explores two alternatives to regulation in the marketplace—public enterprise and franchise bidding. The chapter notes that although these alternatives can play a useful role in certain settings, they are seldom effective substitutes for regulation in the marketplace. Chapter 12 reviews the optimal design of price regulation. This chapter is useful for a solid understanding of all subsequent chapters in part II.

Chapter 13 explores the design and implementation of incentive regulation as an alternative to traditional rate of return regulation. Chapter 14 reviews the development of regulation in the communications sector as an example of how regulatory policy changes as industry technology changes. Chapter 15 discusses methodologies for measuring the impact of regulatory policy. The remaining chapters in part II review regulatory policy in three distinct sectors. Chapter 16 considers the transportation sector, focusing on the surface freight and airline industries. Chapter 17 analyzes energy regulation, focusing on the electricity sector. Chapter 18 reviews regulation in the financial sector, particularly in the aftermath of the financial crisis of 2007–2008.

Part III focuses on the new forms of risk and environmental regulation that have emerged primarily since the 1980s. Chapter 19 introduces the principal methodological issues, including market failures (such as externalities and inadequate risk information), the primary economic test of benefit-cost analysis that applies in this area, and the rather daunting task that economists face in assigning dollar values to outcomes (such as a five-degree temperature change caused by global warming). The task of assigning market prices to outcomes that, by their very nature, are not traded in efficient markets is the focus of chapter 20. The primary case study concentrates on how economists attempt to assign a dollar value to risks to human life, which illustrates how economists have attempted to assess the pertinent tradeoffs that should be used when evaluating government policies. The next three chapters deal with various types of social regulation policies, including environmental protection regulation (chapter 21), product safety regulation (chapter 22), and occupational safety regulation (chapter 23). Chapter 22 presents the greatest variety of social regulation issues that have been of long-term interest to researchers in industrial organization and in law and economics, while chapter 21’s treatment of environmental regulation addresses the most costly new regulatory policies. A major strength of all these chapters is that they confront the current policy issues now under debate, including such topics as global warming, the role of product liability law, and the social consequences of smoking. The entirely new chapter 24 on insights from behavioral economics provides an introduction to behavioral analyses of risk beliefs, preferences, and economic behavior, drawing on prospect theory and related models. It also presents a review of examples that go well beyond student experiments in which behavioral factors have played an important role. Policy examples, such as informational efforts and the role of behavioral concerns in guiding energy regulation, play a prominent role in this discussion.

Suggested Course Outlines

An intensive one-year course could cover this entire book. However, in most cases, instructors will be using the book in a context in which it is not feasible to cover all the material. Here, we suggest combinations of chapters in this book that might be employed in courses with different emphases. Table A identifies seven different course approaches and the pertinent chapters that can be assigned for each one. The first type of course is the balanced one-quarter course.
Such a course would include the introductory material in chapters 1 and 2 as general background; chapters 3, 4, and either 6, 7, 8, or 9 from part I; chapters 10 and 12 from part II; and chapters 19–21 and 24 from part III.

Table A  Suggested Course Outlines

The second course approach is a conventional antitrust course. It would place the greatest reliance on part I of the book (chapters 3–9), together with chapter 17. Instructors who wish to provide a broader perspective on some of the other topics in regulation might augment these chapters with the indicated chapters for the one-quarter course. A course focusing on economic regulation would include primarily the introductory section and part II of the book, or chapters 1–2, part of 3, 10–18, 22, and 24. Similarly, a course focusing on social regulation would include the introductory section and part III of the book, or chapters 1–2, part of 3, and 19–24. In situations in which we have taught such narrowly defined courses, we have often found it useful to include the material from the balanced one-quarter course as well, to give the student a broader perspective on the most salient economic issues in other areas of government intervention.

Given the advanced treatment of industrial organization in part I, this book could also be used in a policy-oriented course on industrial organization. Chapters 3–5 provide the theoretical foundation in industrial organization, and chapter 9 covers recent theoretical developments (along with emerging antitrust issues). An instructor could select from the remaining chapters to cover a variety of policy issues. A suggestion is to use chapter 8 (its coverage of monopolization practices follows up on the theory of dynamic competition in chapter 5), chapters 10, 13, and 16–18 (to examine how different types of economic regulatory structures can affect competition), and chapters 22 and 24 (to assess efforts such as product quality regulation).

The institutional course outline in table A pertains to courses, particularly those in business schools, that wish to have a more institutional focus. For these courses, the objective is to focus on the empirical aspects of government regulation and antitrust policies, as well as the character of these policies. Moreover, these courses would require no advanced undergraduate economic methods. The recommended chapters are 1–4, 6–8, 9–12, and 19–24.

The final course outline is a professional school survey, such as the one-semester course in law schools, where there is a mix of students’ economics backgrounds. Many chapters are included in their entirety: 1–3, 10, 11, and 19–24. That course also includes all but the most technical material of chapters 4, 6–8, and 12. Much of the material in the remaining chapters is also included in the course: one case study such as the taxicab industry from chapter 15, one example such as airlines from chapter 16, and the basics of regulation in the financial sector from chapter 18. Many of the subsections of chapters are self-contained entities, so instructors need not sacrifice substantive topics if the backgrounds or interests of students render it infeasible to cover an entire chapter.

Acknowledgments

We thank Emily Taber and Laura Keeler at MIT Press for their enthusiastic support and advice. Each of the authors has benefited enormously from editing or research assistance at his institution. At Vanderbilt Law School, we thank Sarah Dalton for all aspects related to manuscript production and Nicholas M. Marquiss for research assistance. At the Wharton School of the University of Pennsylvania, we thank Jackson Burke and Ben Rosa for research assistance. At the University of Florida, we thank Theresa Dinh, Zachary Jones, Michael Law, and especially Christine Thomas for research assistance.

Many individuals have provided helpful comments and advice as we prepared the fifth edition of this book. These individuals include:

Subhajyoti Bandyopadhyay, University of Florida
Sanford Berg, University of Florida
William Bomberger, University of Florida
David P. Brown, University of Alberta
David T. Brown, University of Florida
Ted Kury, University of Florida
Dennis Weisman, Kansas State University

1 Introduction

The government acts in many ways. The most familiar role of the government is the subject of public finance courses. The government raises money in taxes and then spends this money through various expenditure efforts. In addition, the government also regulates the behavior of firms and individuals. The legal system of the United States is perhaps the most comprehensive example of the mechanism by which this regulation takes place. This book is concerned with government regulation of the behavior of both firms and individuals in the context of issues classified as regulation and antitrust.

Regulation of firms involves much more than attempting to deal with monopoly power in the traditional textbook sense. The setting of prices for public utilities, the control of pollution emitted in the firm’s production process, and the allocation of radio broadcast bands are all among the contexts in which government regulation plays a prominent role in influencing firm behavior. The behavior of individuals has also come under increasing regulatory scrutiny. In some cases, decisions are regulated directly, such as smoking restrictions and the requirement to wear seat belts. In addition, individuals are affected by regulations that influence either market prices or the mix of products that are available. Product safety standards, for example, serve to eliminate the high-risk end of the product-quality spectrum. The menu of products available to consumers and the jobs available to workers are subject to substantial regulatory influence. In some instances, regulatory efforts expand rather than restrict consumers’ range of options, as in the case of the U.S. Department of Justice’s opposition to the merger of two major cable companies, Comcast and Time Warner, in 2015.

To assess the pervasiveness of these efforts, consider a day in the life of the typical American worker. That worker awakes in the morning to the sound of the clock radio, where the radio stations and the wavelengths they broadcast on are regulated by the Federal Communications Commission. Sitting down to breakfast, the worker is greeted by the label on the cereal box, whose content is strictly regulated by the Federal Trade Commission and the Food and Drug Administration to avoid misleading consumers about the health benefits of breakfast cereals. The orange juice from concentrate can also no longer be labeled “fresh,” courtesy of a 1991 Federal Trade Commission action. The milk poured on the cereal is also regulated in a variety of ways, with perhaps the most important being through U.S. Department of Agriculture price supports (milk marketing orders). More recently, there has been substantial concern with the health risk characteristics of milk in terms of the presence of hormones (bovine somatotropin), which has been the object of substantial regulatory debate. If one chooses to add organic fruit to the cereal, it is reassuring to know that the Environmental Protection Agency (EPA) stringently regulates the pesticides that can be used on domestic produce and that the U.S. Department of Agriculture has established guidelines for what qualifies as organic. Unfortunately, imported produce that may have been drenched in pesticides is not inspected with great frequency.

Before leaving for work, our typical American checks e-mail messages using an Internet browser that is free of traffic discrimination and paid prioritization, thanks to the Federal Communications Commission’s strong net neutrality rule in 2015. While doing so, the worker may take prescription medicine manufactured by GlaxoSmithKline, but it is less likely to be for an off-label usage of the drug, since the company paid $3 billion in fines levied by the Food and Drug Administration in 2012 for off-label promotions. Heading to work, our regulated individual climbs into a Japanese car that was successful in not violating any import quotas. The worker will be safer en route to work than in earlier years, thanks to extensive safety regulations by the National Highway Traffic Safety Administration. The fuel used by the car is also less environmentally damaging than would have been the case in the absence of U.S. Department of Transportation fuel economy standards and in the absence of EPA gasoline lead standards. The car will be more expensive as well, due to these efforts.

Once on the job, the worker is protected against many of the hazards of work by occupational safety and health regulations. If injured, the worker will be insured through workers’ compensation benefits that the worker has in effect paid for through lower wages. A host of U.S. Department of Labor regulations, as well as Equal Employment Opportunity Commission stipulations, ensure that the worker will not be unduly discriminated against or sexually harassed during the course of employment. Our worker’s phone calls are billed at telephone rates that formerly were dictated by rigid regulations but now are influenced by market forces. Visiting business associates travel on planes whose availability and fares have been greatly influenced by regulatory changes. The safe arrival of these associates is due in part to the continued vigilance of the Federal Aviation Administration, the Department of Homeland Security, the Transportation Security Administration, and the safety incentives created by tort liability lawsuits following airplane crashes.

Even when our individual escapes from work for an evening of relaxation and recreation, government regulations remain present. Relying on a ride sharing service such as Uber or Lyft for transportation to a restaurant for dinner is feasible in the United States, where the sharing economy is permitted, but not in some countries that have banned such services. While eating dinner at the restaurant, in most states one needn’t be concerned about being exposed to environmental tobacco smoke, given the prevalence of restrictions on smoking behavior. Recreational activities are also safer than they might otherwise be because the U.S. Consumer Product Safety Commission monitors and regulates the safety of a wide range of sports equipment, from all-terrain vehicles to baseball helmets. While shopping over the weekend, the worker is asked by a political activist to sign a petition to force the local power company to reduce electricity rates. Lower electricity prices will surely save the worker money in the short run, but the worker wonders whether lower prices will deter this regulated monopoly from performing better in the future.

Although some deregulation has taken place in the past decade, the scope of government regulation remains quite broad. The role of regulation in American society remains pervasive.
Various forms of government regulation touch almost every aspect of our activities and consumption patterns. The widespread impact of regulation is not unexpected, inasmuch as this represents a very potent mechanism by which the government can influence market outcomes. The appendix to this chapter contains an extensive list of government agencies involved with regulation and antitrust, and the abbreviations used for these agencies.

The Rationale for Regulation and Antitrust Policies

If we existed in a world that functioned in accordance with the perfect competition paradigm, there would be little need for antitrust policies and other regulatory efforts. All markets would consist of a large number of sellers of a product, and consumers would be fully informed of the product’s implications. Moreover, no externalities would be present in this idealized economy, as all effects would be internalized by the buyers and sellers of a particular product.

Unfortunately, economic reality seldom adheres very closely to the textbook model of perfect competition. Many industries are dominated by a small number of large firms. In some instances, principally the public utilities, there may even be a monopoly. Consumers who use hazardous products and workers who accept risky employment may not fully understand the consequences of their actions. Much of the behavioral economics literature is devoted to documenting these failures of individual rationality and also proposing the use of such mechanisms as behavioral nudges to foster better decisions. Widespread externalities also affect the air we breathe, the water we drink, and the viability of the planet for future generations.

The government has two types of mechanisms at its disposal to address these departures from the perfectly competitive model. The first mechanism is to establish incentives through a pricing mechanism. We can impose a tax on various kinds of activities in order to decrease their attractiveness. There is some attempt to have taxes that are product specific, as in the case of alcohol taxes and cigarette taxes, but the notion has largely been that we should be taxing products perceived as luxuries. The tax on cars that fail to meet fuel economy standards, known as the gas-guzzler tax, perhaps best represents the notion of utilizing the price mechanism to influence economic behavior. Gasoline taxes, which remain below their socially optimal level, serve a similar function.

An alternative to taxes is to try to control behavior directly. We make this effort in the field of antitrust, when the government takes explicit action to block mergers that might threaten the competitive character of a market. In the area of utility regulation, a complex web of regulations prevents public utilities from charging excessive rates for their electricity, which is a commodity for which the electric companies have a captive market. Much health, safety, and environmental regulation similarly specifies the technological requirements that must be met or the pollution standards that cannot be exceeded.

Consequently, this book is concerned primarily with various forms of government action that limit behavior related to the kinds of market failures discussed earlier. Not all market failures stem from actions by firms. In some cases, individuals also may be contributing to the market failures. If we dispose of our hazardous waste products in a reckless manner, then government regulation will be needed to influence our activities. Although the preponderance of regulatory policies is directed at business, the scope of regulation is sufficiently comprehensive to include all economic actors.

Antitrust Regulation

The first of the three parts of the book deals with antitrust policy. Beginning with the post–Civil War era, there has been substantial concern about antitrust issues. This attention was stimulated by a belief that consumers were vulnerable to the market power of monopolies.
Because of the potential economic losses that result from monopolies, some states enacted antitrust laws at the end of the nineteenth century. The U.S. Congress also was particularly active in this area in the early part of the twentieth century, and many of the most important pieces of legislation governing the current antitrust policy date back to that time. The major federal statute continues to be the 1890 Sherman Act.

The Changing Character of Antitrust Issues

The scope of antitrust issues is quite broad. It encompasses the traditional concerns with a monopoly, but these issues are less prominent now than they once were. Several decades ago, major topics of debate concerned whether IBM, AT&T, General Motors, and other major firms had become too powerful and too dominant in their markets. Debates such as these would seem quaint today—perhaps useful as an exercise in an economic history course. Today these once-dominant companies are now humbled giants, weakened by the effects of foreign competition. In many respects, we have a global market rather than a U.S. market for many products, so some of the earlier concerns about monopolies have been muted. Indeed, in the 1980s we even witnessed a merger that would have been unthinkable three decades earlier. The merger of General Electric with RCA created a powerful electronics corporation of unprecedented size. The rationale for the merger was that a large scale was necessary to support the innovation needed to meet the threat of foreign competition. The competitive threat was certainly real. Even after the merger, competition was so great that General Electric has exited the market for almost all consumer products. Whereas several decades ago these companies produced the great majority of all electronics items used in the United States, by the 1990s it was difficult to find a TV for which most of the components were not made in China, Japan, South Korea, or Mexico.

In much the same vein, there no longer seems to be a prominent concern regarding the market power of Microsoft. The network externalities that give rise to Microsoft’s influence are quite different from the nature of the market power of General Motors, which formerly made more reliable and more stylish automobiles. Such companies as Google, Apple, and Facebook have also emerged as dominant economic forces, in some cases by also exploiting the influence of network externalities. Regulators throughout the world are now taking a look at the market impacts of Amazon, a company that did not even exist a quarter of a century ago.

The current structure of antitrust policies is diverse in character and impact. The overall intent of these policies has not changed markedly over the past century. Their intent is to limit the role of market power that might result from substantial concentration in a particular industry. What has changed is that the concerns have shifted from the rise of single monopolies to mergers, leveraged buyouts, and other financial transactions that combine and restructure corporations in a manner that might fundamentally influence market behavior.

Reasoning behind Antitrust Regulations

The major concern with monopoly and similar kinds of concentration is not that being big is necessarily undesirable. However, because of the control over the price exerted by a monopoly, there are economic efficiency losses to society. Product quality and diversity may also be affected. Society could potentially be better off if limitations were imposed on the operation of a monopoly or a similar kind of concentrated industry. Economic research has greatly changed how we think about monopolies.
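To make the efficiency loss from monopoly pricing concrete, the following minimal sketch compares monopoly and competitive outcomes for a market with linear demand and constant marginal cost. All numbers are hypothetical illustrations rather than figures from the text; the welfare tools behind this calculation are developed in chapter 3.

```python
# Stylized monopoly-versus-competition comparison with inverse demand P(Q) = a - b*Q
# and constant marginal cost c. All parameter values are hypothetical.
a, b, c = 100.0, 1.0, 20.0

q_comp = (a - c) / b                # competitive output: price driven to marginal cost
q_mon = (a - c) / (2 * b)           # monopoly output: marginal revenue equals marginal cost
p_mon = a - b * q_mon               # monopoly price

total_surplus_comp = 0.5 * (a - c) * q_comp                 # surplus under competition
deadweight_loss = 0.5 * (p_mon - c) * (q_comp - q_mon)      # surplus lost under monopoly

print(f"competitive output: {q_comp:.0f}, monopoly output: {q_mon:.0f}")
print(f"monopoly price: {p_mon:.0f} versus marginal cost: {c:.0f}")
print(f"deadweight loss: {deadweight_loss:.0f} out of {total_surplus_comp:.0f} potential surplus")
```

With these illustrative numbers, the monopolist cuts output in half, raises price well above marginal cost, and destroys a quarter of the potential gains from trade; the remaining surplus is not lost to society but is partly transferred from consumers to the firm.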
One major consideration, for example, is not simply how big a firm currently is and what its current market influence is, but rather the extent to which entry by a competitor is possible. If firms fear the prospect of such entry, which has been characterized through the theory of contestable markets, then the behavior of a monopolist will be influenced in a manner that will promote more responsible behavior.

One of the reasons concentrated industries emerge is that some firms may have exclusive rights to some invention or may have been responsible for a technological change that has transformed the industry. Coca-Cola and Pepsi-Cola are much more successful soft drink products than their generic counterparts because of their perceived superior taste. If their formulas were public and could be generally replicated, then their market influence would wane considerably. Once a firm has achieved a monopolistic position, perhaps in part due to past innovation, we want it to continue to be dynamic in terms of its innovative efforts. A substantial controversy has long been waged by economists as to whether monopoly promotes or deters innovation. Will a monopolist, in effect, rest on its laurels and not have any incentive to innovate because of the lack of market pressure, or will monopolists be spurred on by the prospect of capturing all of the gains from innovation that a monopoly can obtain, whereas a firm in a perfectly competitive market would lose some benefits of innovation as its innovation is copied by the competitors? We will explore the relative merits of these arguments and the dynamics of monopolies but will not draw any general conclusions indicating the desirability of monopolies. The relative merits of monopolistic power tend to vary across market contexts.

Economic Regulation

In many situations where natural monopolies have emerged, for reasons of economic efficiency, it is desirable to have a monopolistic market structure. Nevertheless, these economic giants must be tamed so that they will not charge excessive prices. We do not wish to incur all the efficiency and equity problems that arise as a result of a monopoly. Prominent examples include public utilities. It does not make sense to have a large number of small firms providing households with electricity, providing public transportation systems, or laying phone lines and cable TV lines. However, we also do not wish to give single firms free rein in these markets because the interests of a monopoly will not best advance the interests of society as a whole. What’s good for General Motors is not necessarily good for America. Other kinds of regulation affect energy prices and minimum wage levels. In some instances, the focus of economic regulation is to control product price. This may be done indirectly through profit regulation, for example, by limiting public utilities to a particular rate of return. In other cases, complex rules govern prices, as in the case of U.S. energy regulations and long-distance telephone rate regulation.

Development of Economic Regulation

The genesis of these various kinds of economic regulation can be traced back to the late 1800s, as in the case of antitrust. Before the turn of the century, the U.S. Congress had created the Interstate Commerce Commission to regulate railroad rates, and the early twentieth century saw a surge in the number of regulatory agencies in the transportation, communication, and securities fields. It was during that period, for example, that the U.S. Congress established the Federal Communications Commission and the Securities and Exchange Commission. In the case of antitrust policy, the main thrust of these efforts has been to prevent the development of the kinds of market concentration that threaten the competitive role of markets.
In contrast, economic regulation generally recognizes that market concentration not only is inevitable but in many cases is also a superior structure for the particular market. The intent is then to place limits on the performance of the firms in this market so as to limit the losses that might be inflicted.

Factors in Setting Rate Regulations

Establishing a rate structure that will provide efficient incentives for all parties is not a trivial undertaking. Consider the case of an electric power company. The objective is not to minimize the rate to consumers, inasmuch as very low rates may affect the desirability of staying in business for the electric company. In addition, it may affect the quality of the product being provided in terms of whether power is provided at off-peak times or whether power outages are remedied quickly. A series of complex issues affects the role of the dynamics of the investment process in technological improvements. We want the electric power company to innovate, so that it will be able to provide cheaper power in the future. However, if we capture all the gains from innovation and give them to the consumers through lower prices, then the firm has no incentive to undertake the innovation. We cannot rely on market competition to force the firm to take such action, for there is little competition in this market structure. Thus we must strike a delicate balance, providing sufficient incentives for firms to undertake cost-reducing actions while at the same time ensuring that the prices for consumers are not excessive.

Key concerns that have arisen with respect to economic regulation pertain to the differing roles of marginal costs and fixed costs. When the electric company provides service to your house or apartment, specific identifiable costs can be attributed to the product that is delivered to you—the marginal costs. However, the electric company also incurs substantial fixed costs in terms of its plant and equipment that also must be covered. How should the electric company allocate these fixed costs? Should it simply divide them equally among the total number of customers? Should it allocate the costs proportionally to the total bills that the customers have? Should it distinguish among different groups depending on how sensitive they are to price? If businesses are less price-sensitive than are consumers, should the major share of these costs be borne by firms or by individual consumers? Over the past several decades, economists have developed a very sophisticated series of frameworks for addressing these issues. The overall objective of these analyses is to determine how we can best structure the price and incentive schemes for these firms so that we protect the interests of electricity customers while at the same time providing incentives and a reasonable return to the firms involved.

In the case of both antitrust and economic regulation, it is seldom possible to replicate an efficient market perfectly. There is generally some departure from the perfect competition situation that cannot be glossed over or rectified, even through the most imaginative and complex pricing scheme. However, by applying economic tools to these issues, we can obtain a much more sensible market situation than would emerge if there were no regulation whatsoever. It is also noteworthy that economic analysis often plays a critical role in such policy discussions. Economic analyses based on the models discussed in this book frequently provide the basis for ratemaking decisions for public utilities. A prominent regulatory economist, Alfred E. Kahn, was responsible for the deregulation of the airlines, in large part because of his belief that competition would benefit consumers and create a more viable market structure than the previous system, in which airline market entry and fares were dictated by a government bureaucracy.
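One influential answer to these cost-allocation questions, developed in chapter 12 as Ramsey pricing, is to set percentage markups over marginal cost in inverse proportion to customers' price sensitivity, so that the fixed costs are recovered with the least distortion of consumption. The sketch below is purely illustrative: the two customer groups, their demand elasticities, and the cost figures are hypothetical assumptions rather than figures from the text.

```python
# Illustrative Ramsey pricing: each group's percentage markup over marginal cost is
# proportional to 1/elasticity, scaled so revenue net of marginal cost covers the fixed cost.
# Demand curves, elasticities, and costs below are hypothetical.

groups = {                                   # constant-elasticity demands: Q = A * p**(-e)
    "business (less price-sensitive)": {"A": 1000.0, "e": 1.25},
    "residential (more price-sensitive)": {"A": 1000.0, "e": 2.5},
}
marginal_cost, fixed_cost = 1.0, 300.0

def surplus_over_fixed_cost(k):
    """Revenue net of marginal cost, minus the fixed cost, when (p - c)/p = k/e in each market."""
    net = -fixed_cost
    for g in groups.values():
        price = marginal_cost / (1.0 - k / g["e"])
        quantity = g["A"] * price ** (-g["e"])
        net += (price - marginal_cost) * quantity
    return net

lo, hi = 0.0, 0.99                           # bisect for the Ramsey factor k (0 < k < 1)
for _ in range(60):
    mid = (lo + hi) / 2.0
    lo, hi = (mid, hi) if surplus_over_fixed_cost(mid) < 0.0 else (lo, mid)

k = (lo + hi) / 2.0
for name, g in groups.items():
    price = marginal_cost / (1.0 - k / g["e"])
    print(f"{name}: elasticity {g['e']}, price {price:.2f}, markup {(price - marginal_cost) / price:.0%}")
```

With these hypothetical numbers, the less price-sensitive group ends up bearing roughly twice the percentage markup of the more price-sensitive group, which is one answer to the question of whether fixed costs should be allocated according to price sensitivity; whether that allocation is viewed as equitable is a separate question.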
In contrast, economic analysis often does not play such a central role in the operation of a perfectly competitive market. The paradigmatic firm in a competitive market is a small enterprise operating in a sea of other small enterprises. Firms in this market do not routinely draw demand curves, marginal revenue curves, and marginal cost curves. Yet few economists are disturbed by this failure to apply economic tools explicitly, as economists since the time of Milton Friedman have argued that they implicitly apply the laws of economics, much as the billiard player applies the laws of geometry, even though she may not have had any formal training in the subject. In the case of economic regulation, the application of economic reasoning is quite explicit. Economists play a prominent role in these regulatory agencies. Much of the policy debate turns on economic analyses and consideration of the merits of the kinds of economic issues that we will address in this book.

Health, Safety, and Environmental Regulation

The newest form of regulation is the focus of part III of the book. In the 1970s the U.S. Congress created a host of agencies concerned with regulating health, safety, and environmental quality. These new regulatory agencies included the Consumer Product Safety Commission, the Occupational Safety and Health Administration, the Environmental Protection Agency, the Nuclear Regulatory Commission, and the National Highway Traffic Safety Administration. Although these forms of regulation are often referred to as being social regulation policies, the exact dividing line between economic regulations and social regulations is unclear. As a result, we will use the more specific designation of health, safety, and environmental regulation to encompass these forms of (social) regulation.

The impetus for health, safety, and environmental regulations is twofold. First, substantial externalities often result from economic behavior. The operation of businesses often generates air pollution, water pollution, and toxic waste. Individual consumption decisions are also the source of externalities; for example, the fuel we burn in our cars gives rise to air pollution. Second, informational issues play a salient role. Because of the peculiar nature of information as an economic commodity, it is more efficient for the government to be the producer of much information and to disseminate the information broadly. Individual firms, for example, will not have the same kind of incentives to do scientific research unless they can reap the benefits of the information. As a result, it is largely through the efforts of government agencies that society has funded research into the implications of various kinds of hazards, so that we can assess their consequences and determine the degree to which they should be regulated.

Many government policies in the safety and environmental area deal with aspects of market behavior that by their very nature do not involve voluntary bargains. We all suffer the effects of air pollution from local power plants, but we did not agree to consume this air pollution. No transaction ever took place, and we are not explicitly compensated for these losses. In the absence of such a market transaction, we do not have explicit estimates of the price. No specific price has been set for the loss in visibility, or for that matter the various kinds of health effects and materials damages that will result from air pollution. Thus the first task that must be undertaken is to assess the worth of these various kinds of policies, inasmuch as the benefit values do not necessarily emerge from market behavior. A case study that is explored in part III is how we attach a value to risk of death, which is perhaps the most difficult and most sensitive of these fundamental tradeoffs that we face.

The three dimensions of health, safety, and environmental regulation arise with respect to risks in our environment, risks in the workplace, and risks from the products we consume. Most of our regulatory influence over these risks is through direct government regulation. Several federal agencies promulgate detailed requirements on workplace technologies as well as overall performance requirements.

Role of the Courts

An increasingly prominent player in this regulatory area has been the courts. In the case of antitrust regulations, the courts have been enforcing laws passed by Congress.
But in the case of these social regulations, the obligations that courts have been assessing pertain to the common-law requirements that have developed through decades of judicial decisions and precedents regarding how various kinds of accidents and other externalities are handled.

In many instances, the incentives generated by the courts dwarf those created by regulatory agencies. The court awards for asbestos-related claims have been so substantial that the asbestos industry in the United States has been all but eliminated by the financial burdens. Liability costs have led the pharmaceutical industry largely to abandon research on contraceptive devices, and many vaccines have also been withdrawn from the market because of high liability burdens. Visitors at motels will notice that hardly any motels have diving boards—a consequence of the added liability insurance costs associated with this form of recreational equipment. The 1998 settlement of the state attorneys general’s cigarette lawsuits for over $200 billion launched a new phenomenon of regulation through litigation. There has been a steadily increasing reliance on the courts to foster changes in products, including lead paint, guns, cigarettes, breast implants, and fast food. The lines between regulation and litigation have become blurred, making it increasingly important to understand the broader set of social institutions that create incentives that serve to regulate behavior. To understand the role of the government in the context of this type of regulation, one must assess not only how the regulatory agencies function but what doctrines govern the behavior of the courts. These matters are also addressed in part III.

Criteria for Assessment

Ideally, the purpose of antitrust and regulation policies is to foster improvements judged in efficiency terms. We should move closer to the perfectly competitive ideal than we would have in the absence of this type of intervention. The objective is to increase the efficiency with which the economy operates, recognizing that we may fall short of the goal of replicating a perfectly competitive market. Nevertheless we can achieve substantial improvements over what would prevail in the absence of such government intervention. Put somewhat differently, our task is to maximize the net benefits of these regulations to society. Such a concern requires that we assess both the benefits and the costs of these regulatory policies and attempt to maximize their difference. If all groups in society are treated symmetrically, then this benefit-cost calculus represents a straightforward maximization of economic efficiency. Alternatively, we might choose to weight the benefits to the disadvantaged differently or make other kinds of distinctions, in which case we can incorporate a broader range of concerns than efficiency alone. For those not persuaded of the primacy of efficiency-based policy objectives, economics can nevertheless play an important role. Understanding how regulations function in our market economy will help illuminate who wins and who loses from regulatory policies, and to what extent. Economic analyses of corporate mergers, for example, can trace through the effects on prices, corporate profits, and consumer welfare in a manner that will promote more informed regulatory policies irrespective of one’s policy viewpoint.
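The net-benefit criterion can be illustrated with a deliberately simple sketch. The benefit and cost functions below are hypothetical stand-ins (tighter regulation yields diminishing incremental benefits and rising incremental costs); the point is only that the preferred stringency maximizes the gap between benefits and costs rather than the benefits alone, the benefit-cost logic developed in chapter 2 and applied to health, safety, and environmental regulation in part III.

```python
# Net-benefit criterion for a hypothetical regulation evaluated at different stringency levels s.
# Benefits rise with diminishing returns; costs rise at an increasing rate. Units are illustrative.

def benefits(s):
    return 120.0 * s - 20.0 * s ** 2    # total benefits of stringency s

def costs(s):
    return 10.0 * s + 15.0 * s ** 2     # total compliance costs of stringency s

levels = [i / 100.0 for i in range(0, 301)]        # candidate stringency levels from 0 to 3
best = max(levels, key=lambda s: benefits(s) - costs(s))

print(f"net-benefit-maximizing stringency: {best:.2f}")
print(f"benefits {benefits(best):.1f}, costs {costs(best):.1f}, net benefits {benefits(best) - costs(best):.1f}")
print(f"stringency that maximizes benefits alone: {max(levels, key=benefits):.2f}")
```

With these made-up functions, pushing stringency to the point of greatest total benefit would dissipate most of the gains in compliance costs; the net-benefit-maximizing level stops where incremental benefits just equal incremental costs.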
Although maximizing economic efficiency or some other laudable social objective may be touted by economists as our goal, in practice it is not what the regulators choose to maximize. Regulators respond to a variety of political constituencies. Indeed, in many instances the same kinds of market failures that led to the regulation also may influence the regulations that are undertaken. As a society, for example, we overreact to low-probability risks that have been called to our attention. We fear the latest highly publicized carcinogen, and we cancel our vacation plans after a terrorist attack. These same kinds of reactions to risk also create pressures for regulatory agencies to take action against these hazards. Moreover, even when government agencies do not suffer from irrationality or from irrational pressures, they will not necessarily maximize social welfare. The actions taken by government agencies will influence the fortunes of firms and particular groups in society in substantial ways. The granting of a cable TV franchise may make one a millionaire, and restrictions on foreign competition will greatly boost the fortune of firms in highly competitive international markets.

There is a strong private interest in regulatory outcomes, and we will explore the economic foundations and mechanisms by which this private interest becomes manifest. The net result of these private interests is that regulatory policies frequently do not perform in the manner that economists would expect in an ideal world. As Nobel laureate George Stigler demonstrated, economic regulation often advances private interests, such as increasing the profits of the industry being regulated. The apparent objective is not always to maximize social welfare but rather to provide transfers among particular groups in society. Moreover, these transfers may be provided in an inefficient way, so that regulatory policies may fall far short of our ideal.

The successive disappointments with regulatory policy have given rise to the terminology “government failure” to represent the governmental counterpart of market failure. In much the same way as markets may fail because some idealized assumptions fail to hold, the government too may fail. Our task is not always to replace a situation of market failure with government action, because governmental intervention may not yield a superior outcome. We should always assess whether the particular kinds of intervention that have been chosen will actually enhance market performance and improve our welfare to as great an extent as possible. As we examine the various forms of regulation, we will consider the merits of these regulations as well as the tests that we should use in assessing their adequacy.

Questions and Problems

1. Why should the government intervene in situations of market failure? Should the government intervene if a market is fully efficient in the sense of being perfectly competitive? What additional rationales are present if there is an inadequacy in the market?
2. Discuss some of the kinds of instances in which the government has an advantage in terms of informational capabilities as well as superior expertise to make decisions that consumers would not have.
3. Economists frequently use the yardstick of economic efficiency when judging the merits of alternative policies. What value judgments are implicit in the economic efficiency doctrine?

Recommended Reading

Two classics in regulatory economics are Alfred E. Kahn, The Economics of Regulation: Principles and Institutions (Cambridge, MA: MIT Press, 1988); and George J. Stigler, The Citizen and the State: Essays on Regulation (Chicago: University of Chicago Press, 1975). For additional background on the legal issues, see Phillip Areeda, Louis Kaplow, and Aaron Edlin, Antitrust Analysis: Problems, Text, and Cases, 7th ed. (New York: Wolters Kluwer Law & Business, 2013); Stephen G. Breyer, Richard B. Stewart, Cass R. Sunstein, Adrian Vermeule, and Michael Herz, Administrative Law and Regulatory Policy: Problems, Text, and Cases, 7th ed. (New York: Wolters Kluwer Law & Business, 2011); Lisa Schultz Bressman, Edward L. Rubin, and Kevin M. Stack, The Regulatory State, 2nd ed. (New York: Wolters Kluwer Law & Business, 2013); and Lisa Heinzerling and Mark V. Tushnet, The Regulatory and Administrative State: Materials, Cases, Comments (New York: Oxford University Press, 2006).

Useful governmental links regarding regulatory activities and research include the Office of Management and Budget’s Office of Information and Regulatory Affairs website (https://www.whitehouse.gov/omb/information-regulatory-affairs/).

The OIRA’s annual regulatory reports to Congress are posted at https://www.whitehouse.gov/omb/information-regulatory-affairs/reports/. Information regarding proposed rules, final rules, and supporting materials can be found at https://www.regulations.gov/. Universities with active regulatory studies centers frequently post useful reports and analyses of current regulatory issues. These groups include the George Washington University Regulatory Studies Center (https://regulatorystudies.columbian.gwu.edu), the Weidenbaum Center on the Economy, Government, and Public Policy at Washington University in St. Louis (https://wc.wustl.edu/), the University of Pennsylvania’s Penn Program on Regulation (http://www.pennreg.org), the Institute of Public Utilities at Michigan State University (https://ipu.msu.edu), the Public Utility Research Center at the University of Florida (https://warrington.ufl.edu/centers/purc), and the Mercatus Center at George Mason University (http://mercatus.org/research/regulation).

Appendix

Abbreviations for Key Regulatory Agencies

BLS    Bureau of Labor Statistics
CAB    Civil Aeronautics Board
CEA    Council of Economic Advisors
CFTC   Commodity Futures Trading Commission
CPSC   Consumer Product Safety Commission
DHS    Department of Homeland Security
DOD    Department of Defense
DOJ    Department of Justice
DOT    Department of Transportation
EEOC   Equal Employment Opportunity Commission
EPA    Environmental Protection Agency
FAA    Federal Aviation Administration
FAO    Food and Agricultural Organization
FCC    Federal Communications Commission
FDA    Food and Drug Administration
FDIC   Federal Deposit Insurance Corporation
FEC    Federal Election Commission
FERC   Federal Energy Regulatory Commission
FHA    Federal Housing Administration
FMC    Federal Maritime Commission
FSLIC  Federal Savings and Loan Insurance Corporation
FTC    Federal Trade Commission
ICC    Interstate Commerce Commission
ITC    International Trade Commission
MSHA   Mine Safety and Health Administration
NHTSA  National Highway Traffic Safety Administration
NIH    National Institutes of Health
NIOSH  National Institute of Occupational Safety and Health
NLRB   National Labor Relations Board
NRC    Nuclear Regulatory Commission
OIRA   Office of Information and Regulatory Affairs
OMB    Office of Management and Budget
OSHA   Occupational Safety and Health Administration
SEC    Securities and Exchange Commission
TSA    Transportation Security Administration
USDA   United States Department of Agriculture

2 The Making of a Regulation

A stylized account of the evolution of regulation and antitrust policies is as follows. A single national regulatory agency establishes government policy to maximize the national interest, and the legislative mandate of the agency defines its specific responsibilities in fostering these interests. The reality of regulatory policymaking differs quite starkly from this stylized view. The process is imperfect: some observers claim that “government failure” may be of the same order of importance as market failure.1 One important difference is that not all regulation is national in scope. Much regulation occurs at the state and local levels. Recent political concerns with the importance of reflecting the preferences and economic conditions at the local level have spurred an increased interest in regulatory activity other than at the federal level. It is noteworthy that from a historical standpoint, most regulation, such as the rate regulations for railroads, began at the state level. These regulations were subsequently extended to the national level. Even in situations in which a national regulatory body is acting, this body may not be fostering the national interest. Special interest groups and their diverse array of lobbyists also have an influence on regulatory policy. Moreover, the legislative mandates of the regulatory agencies are typically specified much more narrowly than simply urging the agency to promote the national interest. Another difference from the stylized model is that typically the regulatory agency is not the only governmental player. Congress and the judiciary provide one check, and, more important, the regulatory oversight process in the White House has substantial input. Each of these groups has its own agenda. Few observers would claim that any one of these agendas coincides exactly with the national interest. The final possible misconception is that it is a simple matter for the government to issue a regulatory policy or to make a decision regarding antitrust policy. Government agencies must take specific steps before instituting regulations. At each of these stages, several governmental and private players have an input to the process and can influence the outcome. The nature of this process and the way it affects regulatory outcomes is the subject of this chapter. The underlying principles governing antitrust and regulation policies must be consistent with the legislative mandates written by Congress. Actions taken with these legislative stipulations in turn are subject to review by the courts. These two sets of influences are pertinent to all policy actions discussed in this book. Other aspects of the character of these policies differ considerably. The U.S. Department of Justice’s vigilance in pursuing antitrust actions varies with political administrations, in part because of differences in interpretation of the law. Although the U.S. Department of Justice occasionally issues formal regulations to guide industry behavior, such as procedures for implementing civil penalties, for the most part, the main policy mechanism of influence is litigation against firms believed to be violating the antitrust statutes. This threat of litigation also produces many out-of-court settlements of antitrust cases.

Many economic regulation agencies are independent regulatory commissions, such as the Federal Trade Commission and the Federal Communications Commission. In addition to initiating legal action, these agencies place extensive reliance on issuance of regulations to guide business behavior. The steps that must be taken in issuing these regulations follow the procedures discussed later in this chapter, except that there is no review by executive authority over regulatory commissions.

The final group of agencies consists of regulatory agencies in the executive branch. These agencies rely primarily on issuing formal regulations pursuant to their legislative mandates. For example, the Environmental Protection Agency (EPA) has issued fuel economy standards in implementing the Clean Air Act. This regulatory activity is subject to review by the Office of Management and Budget (OMB) and the full rulemaking process detailed later in this chapter. Because the regulatory procedures for executive branch agencies are the most complex, this chapter focuses on them as the most general case.

Regulatory oversight procedures have the greatest pertinence to the policies that are considered in part III of the book. However, the economic lessons involved are quite general. Government policies should not be regarded as fixed objects to be treated reverentially in courses on business and government. Rather, they are generated by a complex set of political and economic forces, not all of which produce desirable outcomes. Part of the task of the subsequent chapters is to ascertain which policies are beneficial and which are not.

State versus Federal Regulation: The Federalism Debate

Although regulation is frequently viewed as being synonymous with federal regulation, not all regulation is at the federal level. Restrictions on cigarette smoking in restaurants are determined at the local level, as are drinking ages. State regulatory commissions set utility rates and often are involved in complex legal battles over appropriate jurisdiction. Almost all insurance regulation occurs at the state level as well. Some states regulate insurance rates quite stringently, whereas in other states, these insurance rates have been deregulated. The terms under which there are payouts under insurance schemes also vary with locale, as some states have adopted no-fault rules in accident contexts. States also differ in terms of the factors that they will permit insurance companies to take into account when setting rates. In some instances, the states prohibit the insurance company from factoring in the driver’s age, sex, or race when setting automobile insurance rates. Finally, states differ in terms of whether they make automobile insurance mandatory and, if it is mandatory, the extent of the subsidy that is provided to high-risk drivers by the lower-risk drivers.

Advantages of Federalism

The existence of state regulations of various kinds is not simply the result of an oversight on the part of federal regulators. There are often sound economic reasons for wanting regulation to take place at the state level. Indeed, particularly in the Reagan and Bush administrations, there was an emphasis on transferring some of the control over the regulatory structure and regulatory enforcement to the states—an emphasis that comes under the general heading of “federalism.” However, only a modest degree of delegation of regulatory functions to the states has occurred.
In recognition of the emphasis on a federalism approach, the OMB issued the following regulatory policy guideline: Federal regulations should not preempt State laws or regulations, except to guarantee rights of national citizenship or to avoid significant burdens on interstate commerce.2

Several sound economic rationales underlie this principle of federalism. First, local conditions may affect both the costs and the benefits associated with the regulation. Preferences vary locally, as do regional economic conditions. Areas where mass transit is well established can impose greater restrictions on automobiles than can states where there are no such transportation alternatives.

The second potential advantage to decentralized regulation is that citizens wishing a different mix of public goods can choose to relocate. Those who like to gamble can, for example, reside in states where gambling is permitted, such as Nevada or New Jersey. The entire theory of local public goods is built around similar notions, whereby individuals relocate in an effort to establish the best match between the local public policies and their preferences. The diversity of options made possible through the use of state regulation permits such choices to be made, whereas if all regulatory policies and public decisions were nationally uniform, there would be no such discretion.

A third advantage of local regulation is that it can reflect the heterogeneity of costs and benefits in a particular locale. Ideally, we would like to set national standards that fully reflect variations in benefits and costs across areas. We want to recognize, for example, the need to regulate pollution sources more stringently when there are large populations exposed to the risk. Federal regulations seldom reflect this diversity. In contrast, state regulations are structured to meet the needs of their own state rather than those of other states.

A related advantage of the heterogeneity possible with state regulation is the potential for innovation. Many states have embarked on innovative regulatory policies. California has been a leader in this regard, as it has instituted labeling requirements for hazardous chemicals; regulations for ride sharing services, such as Uber and Lyft; and efforts to drastically roll back automobile insurance rates. Being innovative does not necessarily imply that these innovations are beneficial, but other states can benefit from these experiments, since they can see which regulatory experiments work and which ones do not. Experimentation at the local level will generally be less costly than at the national level, should the regulatory experiments prove to be a mistake. Moreover, if the experiment proves to be successful, then other states can and typically will follow suit.

Advantages of National Regulations

Although the benefits of local regulation are considerable, one should also take into account the potential advantages of national regulatory approaches. First, the national regulatory agencies often have an informational advantage over the local agencies. The U.S. Food and Drug Administration (FDA), for example, administers a regulatory structure for pharmaceuticals that entails substantial product testing. Duplicating this effort at the local level would be extremely costly and inefficient. Moreover, most local regulatory agencies have not developed the same degree of expertise found at the national level in this and many other scientific areas.

A second rationale for national regulations is that uniform national regulations are generally more efficient for nationally marketed consumer products. If firms had to comply with fifty different sets of safety and environmental pollution standards for automobiles, production costs would soar. Labeling efforts as well as other policies that affect products involved in interstate commerce likewise will impose less cost on firms if they are undertaken on a uniform national basis. The efficiency rationale for federal regulation is often more general, as in the case of antitrust policies. If the product market is national in scope, then one would want to address impediments to competition in the market through federal antitrust policies rather than relying on each of the fifty states to pursue individual antitrust actions.

A third rationale for federal regulation is that many problems occur locally but have national ramifications. Air pollution from power plants in the Midwest is largely responsible for the problems with acid rain in the eastern United States and Canada. Indeed, many of the environmental problems we are now confronting are global in scope, particularly those associated with climate change. Policies to address global warming will affect all energy sources. There is a need not only for national regulation but also for recognition of the international dimensions of the regulatory policy problem.

A final rationale for national regulations is that we view certain policy outcomes as being sufficiently important that all citizens should be guaranteed them. A prominent example is civil rights regulations. We do not, for example, permit some states to discriminate based on race and sex, even if they would want to if not constrained by federal affirmative action requirements.

Product Labeling Example

An interesting case study that illustrates the competing merits of national versus state regulation is the 1986 California initiative known as Proposition 65.3 That ballot measure required the labeling of all products that are carcinogenic or reproductive toxicants. In the case of carcinogens, the safe harbor warning read: “WARNING: This product contains a chemical known to the state of California to cause cancer.” The cancer risk threshold for such a warning requirement was a lifetime cancer risk of 1/100,000. The regulation exempted naturally occurring carcinogens, and alcoholic beverages would be addressed by point-of-purchase warnings rather than on product labels. These more lenient provisions were in response to the pressures exerted by the California agriculture industry and wine industry rather than any underlying risk-based rationale for treating natural carcinogens and alcoholic beverages differently.

Producers and grocery manufacturers were initially fearful of the prospect of a myriad of state regulations. Products labeled as carcinogenic in California might end up in stores in Oklahoma, possibly causing consumer confusion and alarm. As other states also adopted warnings, the prospect of not matching up the product and its state-specific warning with the correct market seemed substantial. About 45 percent of national retail sales of food products are produced and distributed nationally or regionally, so the differences in state labeling requirements affect about half of all food products sold. To prevent products labeled in one state from being shipped elsewhere, additional costs for transportation, plant warehousing, field warehousing, and inventory control would total $0.05 for a product costing $0.50. The prospect of these additional costs imposed by a myriad of state regulations led the food manufacturing and grocery industry to lobby the Reagan administration for a single national regulation. Companies that initially opposed the individual state regulations sought a national uniform standard to reduce their compliance costs. No national regulation was adopted, and the anticipated crisis for firms never materialized. Companies reformulated most of their products subject to the warnings so as to avoid the stigmatizing effect of the labels. Also, the feared proliferation of conflicting state warnings never occurred.

A parallel situation arose in 2016 with respect to labeling of foods that include genetically modified organisms (GMOs). Passage of GMO laws in three states—Maine, Vermont, and Connecticut—and an additional twenty-seven states considering similar legislation created the threat of a highly inefficient patchwork of regulations. Moreover, unlike the Proposition 65 standards, it is often expensive to eliminate GMOs from foods. After receiving pressure from both industry groups and consumer advocates, in 2016 Congress passed legislation, and President Obama signed into law national requirements mandating a GMO labeling system.

The product-risk labeling experiences illustrate how the compliance costs associated with a multiplicity of state regulations can lead firms to support a national variant of regulations that they oppose on an individual state basis. Similar concerns regarding state regulations have led the U.S. automobile companies to oppose state-specific fuel economy standards and to favor uniform federal standards.

Overlap of State and Federal Regulations

Because national regulations tend to have a preemptive effect, even if there is no specific legal provision providing for preemption, the prevention of substantial encroachment on the legitimate role of the states requires some restraint on the part of federal regulators. In recent years, several attempts have been made to recognize the legitimate state differences that may exist. Many examples of policies providing for an increased role of the states pertain to the administration of federal regulation. Beginning in 1987, the Department of Health and Human Services (HHS) gave the states more leeway in their purchases of computers and computer-related equipment for the Aid to Families with Dependent Children program. Previously, the states had to undertake substantial paperwork to get approval for their computer needs. Similarly, the Department of Transportation (DOT) has eased the paperwork and reporting procedures associated with subcontract work undertaken by the states, as in their highway construction projects.

On a more substantive level, the EPA has delegated substantial authority to the states for the National Pollutant Discharge Elimination System. This program establishes the water pollution permits that will serve as the regulatory standard for a firm’s water pollution discharges. Many states have assumed authority for the enforcement of these environmental regulations, and the EPA has begun granting the states greater freedom in setting the permitted pollution amount for the firms. The Occupational Safety and Health Administration (OSHA) has undertaken similar efforts, and many states are responsible for the enforcement of job safety regulations that are set at the national level but are monitored and enforced using personnel from a state enforcement program.

Although the states continue to play a subsidiary role in the development and administration of antitrust and regulatory policies, there has been increased recognition of the important role that the states have to play. This increased emphasis on the role of the states stems from several factors. Part of the enthusiasm for state regulation arises from the natural evolution of the development of federal regulation. If we assume that the federal government will first adopt the most promising regulatory alternatives and then will proceed to expand regulation by adopting the less beneficial alternatives, eventually some policies will not be desirable nationally but will be beneficial in some local areas. The states will play some role in terms of filling in the gaps left by federal regulation. Another force that has driven the expanding role of state regulation has been the recognition of legitimate differences among states. In many instances, the states have taken the initiative to recognize these differences by taking bold regulatory action, particularly with respect to insurance rate regulation. Finally, much of the impetus for state regulation stems from a disappointment with the performance of federal regulation. Indeed, it is not entirely coincidental that the resurgence of interest in federalism principles occurred during the Reagan administration, which was committed to deregulation.
There has consequently been an increased emphasis on the economic rationales for giving the states a larger role in the regulatory process and in ascertaining that federal intervention is truly needed. The main institutional player in promoting this recognition of federalism principles has been the OMB in the context of the regulatory oversight process, which we will consider in later sections.

The Character of the Rulemaking Process

Although federal regulatory agencies do have substantial discretion, they do not have complete leeway to set the regulations that they want to enforce. One constraint is provided by legislation. Regulations promulgated by these agencies must be consistent with their legislative mandate, or they run the risk of being overturned by the courts. In addition, regulatory agencies must go through a specified set of administrative procedures as part of issuing a regulation. These procedures do not provide for the same degree of accountability as occurs when Congress votes on particular pieces of legislation. However, substantial checks in this process have evolved over time to provide increased control of the actions of regulatory agencies.

Chronology of New Regulations

Figure 2.1 illustrates the current structure of the rulemaking process. The two major players in this process are the regulatory agency and the OMB. The first stage in the development of a regulation occurs when the agency decides to regulate a particular area of economic activity. Once a regulatory topic is on the agency’s agenda, it must be listed as part of its regulatory program if it is a significant action that is likely to have a substantial cost impact. The OMB has the authority to review this regulatory program, where the intent of this review is to identify potential overlaps among agencies, to become aware of particularly controversial regulatory policies that are being developed, and to screen out regulations that appear to be particularly undesirable. For the most part, these reviews have very little effect on the regulations that the agency pursues, but they do serve an informational role in terms of alerting the OMB to potential interagency conflicts.

Figure 2.1 The Regulatory Management Process Source: National Academy of Public Administration, Presidential Management of Rulemaking in Regulatory Agencies (Washington, DC: National Academy of Public Administration, 1987), p. 12. Reprinted by permission of the National Academy of Public Administration.

Figure 2.1 (continued)

The next stage in the development of a regulation is to prepare a regulatory impact analysis (RIA). The requirements for such RIAs have become more detailed over time. At present, they require the agency to calculate benefits and costs and to determine whether the benefits of the regulation are in excess of the costs. The agency is also required to consider potentially more desirable policy alternatives. After completing the RIA, which is generally a very extensive study of the benefits and costs of regulatory policies, the agency must send the analysis to the OMB for its review, which must take place sixty days before the agency issues a Notice of Proposed Rulemaking (NPRM) in the Federal Register. During this period of up to sixty days, the OMB reviews the proposed regulation and the analysis supporting it. In the great majority of cases, the OMB simply approves the regulation in its current form. In some instances, the OMB negotiates with the agency to obtain improvements in the regulation, and in rare cases, the OMB rejects the regulation as being undesirable. At that point the agency has the choice of revising the regulation or withdrawing it.

This OMB review is generally a secret process. Later in this chapter, we present overall statistics regarding the character of the regulatory decisions in terms of the numbers of regulations approved and disapproved. However, what is lacking is a detailed public description of the character of the debate between the OMB and the regulatory agency. The secretive nature of this process is intended to enable the regulatory agency to alter its position without having to admit publicly that it made an error in terms of the regulation it proposed. It can consequently back down in a face-saving manner. Keeping the debate out of the public forum prevents the parties from becoming locked into positions for the purpose of maintaining a public image. The disadvantage of the secrecy is that it has bred some suspicion and distrust of the objectives of the OMB’s oversight process, and it excludes Congress and the public from the regulatory policy debate. Moreover, because of this secrecy, some critics of the OMB may have overstated the actual impact that the review process has had in altering or blocking proposed regulations. Under the Clinton, George W. Bush, and Obama administrations, the OMB made major efforts to open up more aspects of this review to public scrutiny.

If the regulation is withdrawn, then the agency can pursue an additional step. It can attempt to circumvent the OMB review by making an appeal to the president or to the vice president if the vice president has been delegated authority for this class of regulatory issues.

After receiving OMB approval, the agency can publish the NPRM in the Federal Register. This publication is the official outlet for providing the text of all proposed and actual regulatory policies, as well as other official government actions. As a consequence, publication in the Federal Register serves as a mechanism for disseminating to the public the nature of the regulatory proposal and the rationale for it. Included in the material presented in the Federal Register is typically a detailed justification for the regulation, which often includes an assessment of the benefits and costs of the regulatory policy. Once the regulatory proposal has been published in the Federal Register, it is open to public debate. There is then a thirty- to ninety-day period for public notice and comment. Although occasionally the agency receives comments from disinterested parties, for the most part these comments are provided by professional lobbying groups for business, consumer, environmental, and other affected interests.

After receiving and processing these public comments, the regulatory agency must then put the regulation into its final form. In doing so, it finalizes its RIA, and it submits both the regulation and the accompanying analysis to the OMB thirty days before publishing the final regulation in the Federal Register. The OMB then has roughly one month to review the regulation and decide whether to approve it.
In many cases, this process is constrained even further by judicial deadlines or by deadlines specified in legislation that require the agency to issue a regulation by a particular date. Regulatory agencies have sometimes used these deadlines strategically, submitting the regulatory proposal and the accompanying analysis shortly before the deadline, so that the OMB will have little time to review the regulation before some action must be taken. Rejected regulations are returned to the agency for revision, and some of the most unattractive regulations may be eliminated altogether. Nearly all regulations are, however, approved and published as final rules in the Federal Register. Congressional review is a very infrequent process, and the typical regulation goes into effect after thirty days. The regulation is still, of course, subject to judicial review in subsequent years.

Despite the multiplicity of boxes and arrows in figure 2.1, there are very few binding external controls on the development of regulations. The OMB has an initial chance at examining whether the regulation should be on an agency’s regulatory agenda, but at that stage, so little is known that this approval is almost always automatic. The only two reviews of consequence are those of proposed rules and final rules. The OMB’s approval is required for these stages, but this approval process is primarily influential at the margin. The OMB review effort alters regulations in minor ways, such as by introducing alternative methods of compliance that will be less costly but equally effective. Moreover, as we will see in chapter 20, the OMB is also successful in screening out some of the most inefficient regulations, such as those with costs per expected life saved well in excess of $100 million.

Although many of the other steps, particularly those involving public participation, are not binding in any way, the agency still must maintain its legitimacy. In the absence of public support, the agency runs the risk of losing its congressional funding and the support of the president, who appoints regulatory officials and, even in the case of commissioners to organizations such as the Securities and Exchange Commission, is responsible for periodic reappointments. Thus the public comment process often has a substantive impact as well.

Nature of the Regulatory Oversight Process

The steps involved in issuing a regulation did not take the form outlined in figure 2.1 until the 1980s. In the early 1970s, for example, there was no executive branch oversight. After the emergence of the health, safety, and environmental regulatory agencies in the 1970s, it became apparent that some oversight mechanism was needed to ensure that these regulations were in society’s best interests. For the most part, these agencies have been on automatic pilot, constrained by little other than their legislative mandate and potential judicial review as to whether they were adhering to their mandates. Congress can, of course, intervene and pass legislation requiring that the agency take a particular kind of action, as it did with respect to the lawn mower standard for the Consumer Product Safety Commission. However, routine regulatory actions seldom receive congressional scrutiny. Most important, congressional approval is not needed for a regulatory agency to take action, provided that the action can survive judicial review.

Proponents of the various types of “capture theories” of regulation would clearly see the need for such a balancing review.4 If a regulatory agency has, in effect, been captured by some special interest group, then it will serve the interests of that group as opposed to the national interest. Some observers have speculated, for example, that labor unions exert a pivotal influence on the operation of OSHA and that the transportation industry wields considerable influence over the DOT.

The Nixon and Ford Administrations

The first of the White House review efforts was an informal “quality of life” review process instituted by President Nixon. The focus of this effort was to obtain some sense of the costs and overall economic implications of major new regulations. This review process was formalized under the Ford administration through Executive Order 11821.
Under this order, regulatory agencies were required to prepare inflationary impact statements for all major rules. These statements required that agencies assess the cost and price effects that their new regulations would have. Moreover, President Ford established a new agency in the White House, the Council on Wage and Price Stability, to administer this effort.5 Although no formal economic tests were imposed, the requirement that agencies calculate the overall costs of their new regulations was a first step toward requiring that they achieve some balancing in terms of the competing effects that their regulations had. Before the institution of this inflationary impact statement requirement, regulatory agencies routinely undertook actions for which there was no quantitative assessment of the costs that would be imposed on society at large. Clearly, the costs imposed by regulation are a critical factor in determining its overall desirability. Knowledge of these cost effects ideally should promote sounder regulatory decisions.

The review process itself was not binding in any way. The Council on Wage and Price Stability examined the inflationary impact analyses prepared by the regulatory agencies to ensure that the requirements of the executive order had been met. However, even in the case of an ill-conceived regulation, no binding requirements could be imposed, provided that the agency had fulfilled its obligations to assess the costs of the regulation, however large these costs may have been. The mechanism for influence on the regulatory process was twofold. First, the Council on Wage and Price Stability filed its comments on the regulatory proposal in the public record as part of the rulemaking process. Second, these comments in turn provided the basis for lobbying with the regulatory agency by various members of the Executive Office of the President. Chief among these participants were members of the President’s Council of Economic Advisors and the President’s Domestic Policy Staff.

The Carter Administration

Under President Carter, this process continued, with two major additions. First, President Carter issued his Executive Order 12044, which added a cost-effectiveness test to the inflationary impact requirement. The RIAs that were prepared by regulatory agencies now also had to demonstrate that the “least burdensome of the acceptable alternatives have been chosen.” In practical terms, such a test rules out clearly dominated policy alternatives. If the government can achieve the same objective at less cost, it should do so. Reliance on this principle has often led economists, for example, to advocate performance-oriented alternatives to the kinds of command and control regulations that regulators have long favored.

In practice, however, the cost-effectiveness test affects only the most ill-conceived regulatory policies. For the most part, this test does not enable one to rank policies in terms of their relative desirability. Suppose, for example, that we had one policy option that could save ten lives at a cost of $1 million per life, and we had a second policy option that could save twenty lives at a cost of $2 million per life. Also assume that these policy options are mutually exclusive: if we adopt one policy, we therefore cannot pursue the other. The first policy is more cost-effective in that it has a lower cost per life saved. However, this policy may not necessarily be superior. It may well be in society’s best interest to save an additional ten lives even though the cost per life saved is higher, because the total net benefits to society of the latter option may be greater. Comparison of total benefits and costs of regulatory impacts was a common focus of Carter’s regulatory oversight program, but no formal requirements had to be met.
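To make the contrast between cost-effectiveness and net benefits concrete, the following is a minimal Python sketch of the two mutually exclusive life-saving policies discussed above. The $5 million benefit per life saved is an assumed value introduced purely for illustration; it is not a figure from the text.

```python
# Cost-effectiveness versus net benefits for two mutually exclusive policies.
# The benefit per life saved is an assumed value used only for illustration.

BENEFIT_PER_LIFE = 5.0  # assumed benefit per life saved, in millions of dollars

policies = {
    "Policy 1": {"lives_saved": 10, "cost_per_life": 1.0},  # $1 million per life
    "Policy 2": {"lives_saved": 20, "cost_per_life": 2.0},  # $2 million per life
}

for name, p in policies.items():
    cost = p["lives_saved"] * p["cost_per_life"]
    benefit = p["lives_saved"] * BENEFIT_PER_LIFE
    print(f"{name}: cost per life saved ${p['cost_per_life']:.1f}M, "
          f"total cost ${cost:.0f}M, net benefits ${benefit - cost:.0f}M")
```

With this assumed benefit value, the first policy remains the more cost-effective option ($1 million versus $2 million per life saved), yet the second policy yields the larger net benefits ($60 million versus $40 million), which is precisely why a cost-effectiveness test alone cannot rank the two options.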
The other major change under President Carter was the establishment of the Regulatory Analysis Review Group. The primary staff support for this effort came from the Council on Wage and Price Stability and the President’s Council of Economic Advisors. However, the impact that reviews by this group had was enhanced because it also included representatives from the President’s Domestic Policy Staff, the OMB, and various cabinet agencies. The establishment of this group was a recognition that the executive oversight process had to be strengthened in some way, and the mechanism that was used for this strengthening was to bring to bear the political pressure of a consensus body on the particular regulatory agency. Moreover, the collegial nature of this group served an educational function as well: a sustained effort was made to educate regulatory officials regarding the proper economic approach to be taken in the context of regulatory analyses. For example, EPA officials present during a discussion of a proposed regulation by the National Highway Traffic Safety Administration could participate in a debate over the merits of the regulation and the appropriate means for assessing these merits, where the same kinds of generic issues were pertinent to their own agency as well. The reports by this group were not binding, but because they reflected the consensus view of the major branches of the Executive Office of the President as well as the affected regulatory agencies, they had enhanced political import. The regulatory review process structure recommended by Justice Stephen Breyer in many respects resembles this collegial review system, in which officials from a broad set of regulatory agencies are engaged in the review process.6

Even with these additional steps, there was no binding test other than a cost-effectiveness requirement that had to be met. Moreover, the effectiveness of the informal political leverage in promoting sound regulatory policies was somewhat mixed. One famous case involved the OSHA cotton dust standard. OSHA proposed a standard for the regulation of cotton dust exposures for textile mill workers. The difficulty with this regulation, in the view of the regulatory oversight officials, was that the cost of the health benefits achieved would be inordinately high—on the order of several hundred thousand dollars per temporary disability prevented. The head of the Council of Economic Advisors, Charles Schultze, went to President Carter with an assessment of the undue burdens caused by the regulation. These concerns had been voiced by the textile industry as well. President Carter first sided with the Council of Economic Advisors in this debate. However, after an appeal by the Secretary of Labor, Ray Marshall, which was augmented by an expression of the affected labor unions’ strong interests, Carter reversed his decision and issued the regulation. What this incident made clear is that even when the leading economic officials present a relatively cogent case concerning the lack of merit of a particular regulation, political factors and economic consequences other than simply calculations of benefits and costs can drive a policy decision.

As a postscript, it is noteworthy that the Reagan administration undertook a review of this cotton dust standard shortly after taking office. Although Reagan administration economists were willing to pursue the possibility of overturning the regulation, at this juncture the same industry leaders who had originally opposed the regulation now embraced it, having already complied with the regulation, and they hoped to force other, less technologically advanced firms in the industry to incur these compliance costs as well. The shifting stance by the textile industry reflects the fact that the overall economic costs imposed by the regulation, not the net benefit to society, are often the driving force behind the lobbying efforts involved in the rulemaking process.

The Reagan Administration

Several pivotal changes in the regulatory oversight mechanism took place during the Reagan administration.
First, President Reagan moved the oversight function from the Council on Wage and Price Stability to an office in the OMB, the Office of Information and Regulatory Affairs (OIRA). Because the OMB is responsible for setting the budgets of all regulatory agencies and has substantial authority over them, this change increases the institutional clout of the oversight mechanism.

The second major shift was to increase the stringency of the tests being imposed. Instead of simply imposing a cost-effectiveness requirement, Reagan moved to a full-blown benefit-cost test in his Executive Order 12291:

Sec. 2. General Requirements. In promulgating new regulations, reviewing existing regulations, and developing legislative proposals concerning regulation, all agencies, to the extent permitted by law, shall adhere to the following requirements:
a. Administrative decisions shall be based on adequate information concerning the need for and consequences of proposed government action;
b. Regulatory action shall not be undertaken unless the potential benefits to society for the regulation outweigh the potential costs to society;
c. Regulatory objectives shall be chosen to maximize the benefits to society;
d. Among alternative approaches to any given regulatory objective, the alternative involving the least net costs to society shall be chosen; and
e. Agencies shall set regulatory priorities with the aim of maximizing the aggregate net benefits to society, taking into account the condition of the particular industries affected by regulations, the condition of the national economy, and other regulatory actions contemplated for the future.

If, however, the benefit-cost test conflicts with the agency’s legislative mandate—as it does for almost all risk and environmental regulations—the test is not binding. In that respect, the new regulatory oversight requirements for these agencies were not more stringent than the previous cost-effectiveness test of the Carter administration.

The third major change in the executive branch oversight process was the development of a formal regulatory planning process whereby the regulatory agencies would have to clear a regulatory agenda with the OMB. This procedure, which was accomplished through Executive Order 12498, was an extension of a concept begun under the Carter administration known as the Regulatory Calendar, which required the agency to list its forthcoming regulatory initiatives. This exercise has served to alert administration officials and the public at large as to the future of regulatory policy. But practically speaking, it has not had as much impact on policy outcomes as has the formal review process, coupled with a benefit-cost test.

The Bush Administration

Under President George H. W. Bush, the regulatory oversight process remained virtually unchanged. The thrust of the effort was almost identical in character to the oversight procedures that were in place during the second term of the Reagan administration. For example, the same two key executive orders issued by Reagan remained in place under President Bush.

The Clinton Administration

President William Clinton continued the regulatory oversight process in a manner that was not starkly changed from the two previous administrations. In his Executive Order 12866, President Clinton established principles for regulatory oversight similar to the emphasis on benefits, costs, and benefit-cost analysis of previous administrations. However, the tone of the Clinton executive order was quite different: it was less adversarial with respect to the relationship with regulatory agencies. Moreover, this executive order correctly emphasized that many consequences of policies are difficult to quantify and that these qualitative concerns should be taken into account as well. Although subsequent administrations amended parts of this executive order, it remains the cornerstone of the regulatory oversight process. The Clinton administration also raised the threshold for reviewing proposed regulations, restricting the focus to the truly major government regulations.

The George W. Bush Administration

The administration of President George W. Bush kept Clinton’s Executive Order 12866 intact until 2002, when Executive Order 13258 introduced some minor structural changes pertaining to the role of the vice president, which President Obama subsequently revoked. The two principal advances in the rulemaking process during the recent Bush administration were a fuller articulation of the economic principles to guide regulatory analyses and the introduction of “prompt letters” as a mechanism for urging agencies to initiate regulatory initiatives. Although the main function of regulatory oversight will continue to be restraining excessive regulations, ideally the OMB will also be able to make assessments of how resources can be allocated more effectively and whether valuable regulatory opportunities are being missed. The OMB prompt letters, which are available to the public, created pressures that led HHS and the FDA to introduce labeling for trans-fatty acids, strengthened the corporate governance of Fannie Mae and Freddie Mac, and prompted consideration of a proposed EPA rule to reduce pollution from non-road diesel engines.

The Obama Administration

Notwithstanding another change in political parties of the president, the fundamental aspects of the regulatory oversight criteria remained unchanged, as the Clinton administration’s Executive Order 12866 continued to serve as the principal guidance document. By the time of the Obama administration, the calculation of benefits and costs of regulatory proposals had become a routine practice. Even for agencies whose legislation imposed policy criteria other than a benefit-cost test, a widespread effort was made to demonstrate that the calculated economic benefits exceeded the regulatory costs.

The Obama administration’s most important contribution to the structure of the regulatory oversight process was the regulatory look-back effort under Executive Order 13563 in 2011 and a companion Executive Order 13610 in 2012. Instead of focusing solely on proposed new regulations, these executive orders encouraged agencies to review existing regulations and to identify regulations that could be modified or eliminated because they are out of date or unduly burdensome given the benefits that they provide. Although the institutional inertia of regulatory agencies tends to limit the efficacy of efforts to revisit existing regulations, this effort did yield some notable successes. The easing of the ban on use of all electronic devices on airplanes is one such change that provided broad benefits to airline passengers. Some seemingly silly regulatory requirements also disappeared, as in the case of the law from the 1970s that designated milk as an “oil,” so that regulatory requirements pertaining to the spillage of milk were comparable to those for oil spills. Perhaps the look-back success story with the greatest savings was the elimination of some unnecessary regulatory and health care reporting requirements that had been imposed by HHS, yielding $5 billion in savings. Continued vigilance to ensure the soundness of existing regulations remains desirable, but the emphasis of OIRA’s regulatory efforts continues to be on proposed new regulations, where OIRA’s efforts can have the greatest impact.

The Obama administration also launched ambitious new initiatives focusing on climate change but nevertheless ensured that the estimated benefits exceeded the costs. The administration also advocated greater reliance on “nudge” policies rather than command and control approaches.
While the character of most regulatory interventions remained the same as in previous administrations, the regulatory oversight effort highlighted the increased pertinence of behavioral economics approaches.

The Trump Administration

On January 30, 2017, President Trump issued Executive Order 13771, titled “Reducing Regulations and Controlling Regulatory Costs.” Instead of a benefit-cost focus, the executive order places primary emphasis on cost reduction and a reduction in the overall level of regulation. In particular, the order specifies that “for every one new regulation issued, at least two prior regulations be identified for elimination.” Moreover, at least in the first fiscal year, the total cost of new and repealed regulations for each agency must be no greater than zero unless otherwise required by law. The regulatory budget concept embodied in this zero regulatory cost increase requirement has been discussed in the regulation literature for decades. However, it has found little support among economists because such budgets ignore the overall policy merits and serve principally as a form of fiscal discipline.

The Trump administration’s two-for-one regulatory approach parallels similar efforts in other countries. The United Kingdom has a one-in-three-out approach based on regulatory compliance costs. Canada has a one-for-one regulatory program based on the administrative burdens of the regulation, so that the main concern is with the paperwork requirements of the regulation, that is, the associated “red tape” of regulations rather than the overall social costs. The U.S. cost focus is broader, as it includes the overall opportunity costs to society.

The retrospective aspect of the executive order has some commonality with the look-back efforts in the Obama administration and previous administrations. However, the Trump administration’s criteria for repealing regulations are not tied to any overall economic efficiency test but are driven by costs alone. From a political standpoint, the two-for-one requirement may incentivize regulatory agencies to identify potential regulatory reforms. But the impact of the executive order may be muted to the extent that the two-for-one requirement is not binding, given that statutory requirements and emergencies are exempted.

The prospective nature of regulatory benefits and costs will drive both the efficiency aspects of whether a regulation should be eliminated as well as the political support for such a change. Previous regulations for which the costs exceeded the benefits may no longer be inefficient if the costs have already been incurred but there are substantial prospective benefits. In many instances, firms have already incurred the expenditures to comply with the regulation, so that the benefits from and costs of relaxing or eliminating the regulation may be quite different than they were at the time the regulation was adopted. The retrospective nature of these costs will also dampen political support for altering the regulation. Firms already in compliance may view the compliance costs as a barrier to potential competitors.

Regulatory Reform Legislation

In addition to the influence of executive branch oversight, Congress has also sought to bring the cost of regulation under control. There has been increasing recognition that a greater effort must be made to restrict regulatory initiatives to those that are truly worthwhile. Coupled with this belief is an acknowledgment that executive branch oversight alone cannot ensure sound regulatory outcomes. One source of the difficulty can be traced to the restrictive legislative mandates of regulatory agencies. In the case of health, safety, and environmental regulations, the legislation drafted by Congress did not require that agencies achieve any balance between benefits and costs. Indeed, in some cases, the legislation even precluded agencies from undertaking such balancing or from considering costs at all.
Such an uncompromising approach can be traced in part to ignorance on the part of legislators, who did not understand the potential scope of these regulatory efforts or the fact that absolute safety is unattainable. Society could easily exhaust its entire resources on potential safety-enhancing efforts before achieving a zero-risk level. Typical of such uncompromising mandates is the requirement in the Occupational Safety and Health Act that the agency “assure so far as possible every man and woman in the nation safe and healthful working conditions.” In the 1981 U.S. Supreme Court decision with respect to the OSHA cotton dust standard, the court interpreted this obligation narrowly.7 While the agency had to impose standards for which compliance was “feasible,” the court interpreted feasibility as “capable of being done” rather than in terms of benefit-cost balancing. Regulators have used this decision in conjunction with their own restrictive legislative mandates to claim that they are constrained by their legislation to ignore benefit-cost concerns.

Agencies consequently seek to bolster their position by claiming that they are constrained by legislation, but these constraints are not necessarily always binding. In a subsequent U.S. Supreme Court decision, the Court ruled that agencies did have the flexibility to interpret their legislative mandate in a reasonable manner.8 In this particular case, the court gave the EPA the flexibility to adopt the “bubble” policy, whereby it let firms select the most cost-effective means of reaching an air pollution target rather than requiring that firms meet a specific pollution objective for each emissions source. To date, regulatory agencies have made few attempts to avail themselves of this flexibility, and the OMB has been unsuccessful in urging them to do so.

As a result, there have been continuing efforts to pass regulatory reform legislation that, in effect, would make the regulatory guidelines issued by the president override the influence of the legislative mandates. The closest such efforts have come to success was in 1995, when both the House and the Senate passed regulatory reform legislation. No consensus legislation emerged, and regulatory reform bills continue to be pending before Congress. These efforts have failed thus far perhaps because the proposed bills have been overly ambitious. In addition to benefit-cost requirements, proposed legislation would have also revamped the risk analysis process by requiring that agencies use mean risk assessments rather than upper-bound values. Many proposed bills also included requirements that went beyond revamping the criteria for regulations, including provisions for peer review, judicial review of regulatory analyses, and retrospective assessments of regulatory performance.

The principal components of any such legislation are requirements that agencies assess the benefits and costs of their regulations and demonstrate that the benefits exceed the costs. Other less ambitious possibilities also could be effective, such as permitting agencies to balance benefits and costs but not requiring them to do so. Under this approach, it would be the responsibility of the OMB regulatory oversight group to exert leverage without the presence of existing legislative constraints. These issues are likely to continue to be on the congressional legislative agenda until some kind of regulatory reform bill resolves the conflict between the national interest in balanced regulatory policies and the agencies’ adherence to restrictive legislative mandates.

Benefit-Cost Analysis

From an economic efficiency standpoint, the rationale for a benefit-cost approach seems quite compelling. At a very minimum, it seems reasonable that society should not pursue policies that do not advance our interests. If the benefits of a policy are not in excess of the costs, then clearly it should not be pursued, because such efforts do more harm than good. Ideally we want to maximize the net gain that policies produce.
This net gain is the discrepancy between benefits and costs, so our objective should be to maximize the benefit-minus-cost difference. The underlying economic impetus for the benefit-cost approach is the Hicksian potential compensation principle. The gainers from such policies can potentially compensate the losers, making all parties better off. However, unless potential compensation is actually paid, there is no assurance that everyone’s welfare will be improved. As a practical matter, it is generally impossible to make everyone better off from each individual regulatory policy, but making sound decisions across the entire spectrum of regulatory policies will make almost all of us better off.

The requirement that benefits exceed costs for sound regulatory policies has also given rise to a simple shorthand. The ratio of benefits to costs, or the benefit-cost ratio, must exceed 1.0 for a policy to be potentially attractive. This requirement serves only as a minimal test of policy efficacy, as our overall objective should be to maximize the spread between benefits and costs.

To see how one would design a regulatory policy to reap the greatest net benefits, let us consider environmental policy choice as a concrete example. The underlying principles are identical in other policy arenas as well. As indicated in figure 2.2, the cost of providing environmental quality rises as the level of environmental quality improves. Moreover, the cost rises at an increasing rate because improvements in environmental quality become increasingly costly to achieve. As the most promising policy alternatives are exploited, one must dip into less effective means of enhancing environmental quality, and resorting to these contributes to the rise in costs.

Figure 2.2 Benefit-Cost Analysis of Environmental Quality Control

The other curve in the diagram is the total benefits arising from improved environmental quality. The initial gains are the greatest, as they may affect our life and well-being in a fundamental manner. The additional health and welfare effects of environmental quality improvements eventually diminish. Our task of finding the best level of environmental quality to promote through regulation reduces to achieving the largest spread between the total benefit and total cost curves. This maximum is achieved at the environmental quality level q*. At that point, the gap between the cost and benefit curves is the greatest, with the gap giving the maximum value of benefits less costs that is achievable through environmental regulation.

The slopes of the total cost and total benefit curves are equal at environmental quality q*. The slope of the total cost curve is known as the marginal cost, as it represents the incremental increase in cost that arises from a unit increase in environmental quality. Similarly, the slope of the total benefit curve is known as the marginal benefit, as it represents the increment in benefits that would be produced by a one-unit increase in environmental quality.

An alternative way to assess the optimal policy is to examine the marginal cost and marginal benefit curves, which are illustrated in figure 2.3. Marginal costs are rising because of the decreasing productivity of additional environment-enhancing efforts as we pursue additional improvements in environmental quality. Similarly, marginal benefits are declining because the greatest incremental benefits from such improvements are experienced when environmental quality is very bad. The optimal policy level is at environmental quality level q*, at which we equate marginal benefits and marginal costs. Thus the requirement for optimal quality choice can be characterized by the following familiar equation:

Marginal benefits = Marginal costs.  (2.1)

Figure 2.3 Marginal Analysis of Environmental Policies
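To make the marginal analysis in figures 2.2 and 2.3 concrete, the following is a minimal Python sketch of equation 2.1. The quadratic benefit and cost functions and their parameter values are assumptions chosen only to mimic the shapes of the curves; they are not figures from the text.

```python
# A numerical sketch of equation 2.1 using assumed benefit and cost functions
# for environmental quality q. Functional forms and parameters are hypothetical.

def total_benefits(q):
    # Total benefits rise at a decreasing rate: B(q) = 20q - q^2 (assumed)
    return 20 * q - q ** 2

def total_costs(q):
    # Total costs rise at an increasing rate: C(q) = 2q + 0.5q^2 (assumed)
    return 2 * q + 0.5 * q ** 2

def slope(f, q, h=1e-6):
    # Numerical slope of a total curve, i.e., the marginal benefit or marginal cost.
    return (f(q + h) - f(q - h)) / (2 * h)

# Grid search for the quality level that maximizes net benefits (benefits - costs).
grid = [q / 100 for q in range(0, 2001)]
q_star = max(grid, key=lambda q: total_benefits(q) - total_costs(q))

print(f"q* that maximizes net benefits: {q_star:.2f}")
print(f"marginal benefit at q*: {slope(total_benefits, q_star):.2f}")
print(f"marginal cost at q*:    {slope(total_costs, q_star):.2f}")
```

With these assumed curves, net benefits peak at q* = 6, which is also the point at which marginal benefits and marginal costs are equal (both 8), illustrating that the two ways of characterizing the optimum coincide.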

This result, that maximizing net benefits is achieved by equating marginal benefits and marginal costs, will be a recurring theme throughout the book. Subsequent chapters examine decisions by firms that can be recast in this framework. Consider equation 2.1 in the context of a firm choosing how much output to produce. A profit-maximizing firm will produce up to the point where the marginal cost of production is equal to the marginal benefit, which equals the additional revenue produced by selling one more unit. In the case of a competitive firm, which is small relative to the entire market, the marginal benefit of selling an extra unit is the product price, so a competitive firm setting marginal benefits equal to marginal costs will produce at the point where price equals the marginal cost of production. A monopolistic firm will be somewhat different, in that this firm is so large relative to the market that more sales by the monopoly will affect the market price. The monopolist will set the marginal cost equal to the additional revenue brought in by selling one more unit, which will differ from the price of the last unit sold, since more sales affect the price paid for all units of the good.

Discounting Deferred Effects

If all the effects of regulatory policies were immediate, one could simply sum up these influences, treating effects today the same as one would treat an impact many years from now. Because many regulatory impacts are deferred, however, it is important to take the temporal distribution of benefits and costs into account, even if one ignores the role of inflation. If one could earn a riskless real rate of interest r on one’s own money, then a dollar today will be worth (1 + r)^10 dollars ten years from now. Thus, resources have an opportunity cost, and one must take this opportunity cost into account when assessing the value of benefit and cost streams over time. This issue is not unique to the social regulation area, but it plays a particularly important role with respect to these regulations because of the long time lags that tend to be involved, particularly when evaluating regulations focusing on cancer and the future of the planet.

Although a substantial literature exists on how one should approach the discount rate issue and estimate the appropriate rate of discount, these approaches can be simplified into two schools of thought. One approach relies on the opportunity cost of capital. In this instance, market-based measures provide the guide as to the appropriate discount rate. A simple but not too unreasonable approximation to this measure is the real rate of return on federal bonds. The alternative is the social rate of time preference approach, under which society’s preference for allocating social resources across time may be quite different from the time rate expressed in private markets. How the social rate differs from the private rate and the extent of the difference from private rates of return remain subjects of considerable debate.

From a practical standpoint, such controversies are not of major consequence in actual regulatory decisions. The OMB (under OMB Circular A-4) now requires that all policy benefits and costs for regulatory policies be assessed using rates of interest of 3 percent and 7 percent, thus providing information about the sensitivity of the present value calculations to the interest rate used. Before 1993, the OMB had required a 10 percent rate, which is an extremely high real (that is, inflation-adjusted) rate of return.
Present Value

The procedure by which one converts a stream of benefits and costs into a present value is simply to divide any deferred impact in year i by (1 + r)^i. Viewed somewhat differently, if one could earn a rate of interest r on $1 invested today, the value of this dollar i years from now would be (1 + r)^i. Thus the present value calculation simply puts the future payoff into terms that are comparable to payoffs today. More specifically, if a policy has benefits b_i and costs c_i in year i, then the formula is given by

present value = Σ_i (b_i − c_i)/(1 + r)^i,

where the sum runs over all years i of the policy.
To see the implications of the present value calculation, consider the simplified discounting example in table 2.1. Three different sets of results are provided. First, the benefits and costs with no discounting comprise the first part of the table. As can be seen, the benefits exceed the costs by 0.15, and the policy is worth pursuing. If one adopts a discount rate of 5 percent, then the deferred benefits one year from now have a lower present value. Nevertheless, the policy still remains justified on benefit-cost grounds, although the strength of the justification has been weakened. The final example shows the discount rate raised to 10 percent. This higher rate lowers the value of next year’s benefits even further. In this instance, costs exceed benefits, and the policy is no longer justified. As a rough rule of thumb, since costs are generally imposed early in the life of a regulation, and benefits often accrue later, raising the discount rate tends to reduce the overall attractiveness of policies. The exact relationship hinges on the number of sign reversals in the benefit-less-cost stream over time. For one sign reversal—net costs in the early periods followed by net benefits—raising the discount rate reduces the attractiveness of a policy. The role of discounting is particularly instrumental in affecting the attractiveness of policies with long-term impacts, such as environmental regulations that address long-run ecological consequences or cancer regulations for which the benefits will not be realized for two or three decades. Not surprisingly, policies with long-term effects have been major battlegrounds over discounting. The EPA RIA for asbestos regulation generated opposition from the OMB and members of Congress for adopting a zero discount rate to boost the present value of the benefits. Such discount rate controversies have not abated, as they remain at the heart of climate change policy debates, for which the discount rate for long-term impacts is a critical driver of the desirability of climate change initiatives.

Table 2.1 Discounting Example

                        Year 0    Year 1    Total
No discounting
  Benefits               1.00      2.15      3.15
  Costs                  3.00      0.00      3.00
  Benefits − costs      −2.00      2.15      0.15
Discounting at 5%
  Benefits               1.00      2.05      3.05
  Costs                  3.00      0.00      3.00
  Benefits − costs      −2.00      2.05      0.05
Discounting at 10%
  Benefits               1.00      1.95      2.95
  Costs                  3.00      0.00      3.00
  Benefits − costs      −2.00      1.95     −0.05
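The entries in table 2.1 can be reproduced with a few lines of code. The sketch below is a generic implementation of the present value formula given above, applied to the benefit and cost streams taken from the table.

```python
def present_value(stream, r):
    """Discount a stream of payoffs, where stream[i] is the payoff received in year i."""
    return sum(x / (1 + r) ** i for i, x in enumerate(stream))

benefits = [1.00, 2.15]   # year 0 and year 1 benefits from table 2.1
costs    = [3.00, 0.00]   # year 0 and year 1 costs from table 2.1

for r in (0.00, 0.05, 0.10):
    b = present_value(benefits, r)
    c = present_value(costs, r)
    print(f"r = {r:.0%}: benefits = {b:.2f}, costs = {c:.2f}, benefits - costs = {b - c:.2f}")
# r = 0%:  benefits = 3.15, costs = 3.00, benefits - costs =  0.15
# r = 5%:  benefits = 3.05, costs = 3.00, benefits - costs =  0.05
# r = 10%: benefits = 2.95, costs = 3.00, benefits - costs = -0.05
```

Raising the rate from 5 percent to 10 percent flips the sign of net benefits, reproducing the reversal shown in the table.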

Although the practice of reducing the value of deferred benefits may seem unduly harsh, it will be muted at least to some extent by increases in the unit benefit value over time. As society continues to become richer, the value we place on environmental quality and risk reduction will also rise. As a result, there will be some increase in the value of benefits over time because of society’s increased affluence, which generally raises the value that people attach to their health or to environmental quality. In general, one will still discount in a manner that reduces the present value of future impacts. In a situation in which one did not discount at all—a position that has been frequently advocated by the EPA and by some members of Congress—any action with permanent adverse effects could never be undertaken. A $1 annual loss that was permanent would swamp in value any finite benefit amount that was received only once. No policies that would affect a unique natural resource or that would lead to the extinction of a species could ever be pursued. The cost of such efforts would be infinite. Trivial losses that extended forever could never be imposed, irrespective of how great the current benefits are. When confronted with the full implications of not discounting at all, few would likely advocate this practice. We certainly do not follow it in our daily lives. Otherwise, we would save all of our resources, earn interest, and spend the money only in our last years of life.

In many instances, it is necessary to calculate the present value of an infinite stream of payoffs. What, for example, is the value of a taxicab license that generates $v every year? Suppose that the payment is received at the end of each period. It is straightforward to show that the present value of this infinite stream is given by $v/r.9 For example, with an interest rate of 10 percent, the present value of $5,000 per year would be $5,000/(0.10) = $50,000. This formula for calculating the present value of an infinite stream of payoffs is often useful for getting a reasonable estimate of the present value of other long streams of payoffs, even if they are not infinite.

The Criteria Applied in the Oversight Process

Certainly the most dominant criterion now used in the oversight process is whether the benefits of the regulation exceed the costs. Although the OMB has frequently been unable to enforce the benefit-cost requirements because of conflicts with the agency’s legislative mandate, several notable success stories illustrate how effective regulation can be if approached in a sound economic manner.

Regulatory Success Stories

One of these success stories is visible every time we ride in an automobile. A prominent regulatory innovation has been the requirement that all cars have center high-mounted stop lamps. When the driver puts on the brakes, the brake lights go on as always, but so does a red light in the bottom center of the rear window. This regulation was the subject of an extensive analysis in which the DOT demonstrated that the benefits of the regulation exceeded the costs. Equally important is that the DOT also conducted a series of tests with various fleets of automobiles to determine which of several stop lamp designs would be the most effective in reducing rear-end collisions. Thus an explicit attempt was made to evaluate regulatory policy alternatives and to select the most attractive from among them.

A well-established environmental regulation success story involving the OMB is the phase-down of lead in gasoline. Through a series of regulations, the EPA requirements have all but eliminated the use of lead in gasoline. This regulation was accompanied by a comprehensive regulatory analysis that clearly established that the benefits of the regulation exceeded the costs.10 This regulation, one of the first for which the EPA clearly established the economic attractiveness of the policy in terms of its benefit-cost ratio, has had enormous demonstrable impacts. Lead emissions have declined dramatically, consistent with projected regulatory benefits.
Promotion of Cost-Effective Regulation

One general way in which the government promotes cost-effective regulation is through the encouragement of performance-oriented regulation. Our objective is to promote outcomes that are in the interests of the individuals affected by regulations rather than simply to mandate technological improvements irrespective of their impact. This concern with ends rather than means leads to the promotion of performance-oriented regulations whenever possible. Rather than mandate nationally uniform standards, it is frequently desirable to give firms some discretion in their means of compliance. The FDA’s tamper-resistant packaging requirements impose effectiveness requirements on the packaging but do not dictate particular types of packaging that must be used. Similarly, the child-resistant cap requirements of the Consumer Product Safety Commission specify safety thresholds that the caps must meet in terms of preventing children from opening the bottles, but they leave firms free to adopt the particular cap designs they believe are most appropriate for the product.

The adoption of performance-oriented alternatives has generally lagged behind economists’ enthusiasm for these policies. Two principal reasons account for this discrepancy. First, the enforcement of some performance-oriented alternatives can be more expensive. If firms were simply given general guidelines to make their workplaces safer but were not given any explicit instructions for doing so, then government inspectors would have a more difficult task determining whether a firm had met the minimal safety requirements.11 The second major barrier to performance-oriented regulation has been political. In the case of air pollution requirements, representatives from soft-coal-producing states lobbied for legislation that required firms to develop technological solutions to air pollution (for instance, the use of scrubbers) as opposed to switching to a less polluting type of coal. This emphasis was dictated by regional economic self-interests, not by national efficiency concerns.

Distortion of Benefit and Cost Estimates

Another principle that has been promoted through the oversight process is the use of unbiased estimates of benefits and costs. The need for unbiased estimates may appear to be both obvious and uncontroversial, but in fact it represents an ongoing problem with respect to risk regulations. The scientific analyses underlying risk regulations typically include a variety of assumptions adopted for the purpose of “conservatism” but that in effect distort the assessment of the merits of the regulation. For example, projections of the cancer-causing implications of some chemical may be made by relying on the most sensitive animal species, as opposed to the animal species most relevant to extrapolation to humans. In addition, scientific analysts frequently focus on the upper end of the 95 percent confidence interval, thus placing great emphasis on how high the risk potentially could be as opposed to their best estimate of how high the risk actually is. Focusing on the upper limit of the potential risk distorts the policy mix in a number of ways. Most important is that it shifts our attention to those hazards about which the least is known, as opposed to those hazards that pose the greatest threat and will endanger the greatest number of lives.
Because we often know the least about very-low-probability events, for which there is little experience to guide us, the effect has often been to tilt policies toward inconsequential low-probability hazards that we dimly understand, while the major sources of accidents and illness that are precisely understood receive less attention. In some cases, additional conservatism factors are incorporated arbitrarily in the risk analysis process. For example, risk analysts assessing the reproductive toxicity of different chemicals may simply multiply the estimated risk levels by a factor of 1,000 for the purposes of “conservatism,” even though there is no sound basis for applying such a multiplier. The problem that these conservatism adjustments pose from the standpoint of government policy is that when we address different regulations and compare their efficacy, we do not know the extent to which the benefits have been distorted. Various conservatism factors are used by different agencies in different contexts. These adjustments are seldom detailed in the regulatory analysis and are often compounded in the successive stages of analysis, with conservatism multipliers added in each round of the calculations. Such distortions prevent regulatory policymakers from having the accurate information they need to choose among policies. The overall judgment as to how conservative society wishes to be in bearing risk or in incurring other outcomes is a social policy decision that should be made at the policymaking level of the regulatory agencies and the executive branch. Arbitrary conservatism factors incorporated in the risk analysis amount to little more than stealth policymaking masquerading as a scientific exercise.

Regulatory Role of Price and Quality

A general principle that has guided the development of regulation, and in particular the deregulation effort, is that “regulation of prices and production in competitive markets should be avoided.”12 The price system has a legitimate role to play, as is evidenced in the discussion of markets in all elementary economics textbooks. Recognition of the role of the price mechanism has provided the impetus for the deregulation of the rate and entry regulations that were formerly present in such industries as airlines, trucking, and communications. Some regulations, such as minimum wage requirements, explicitly interfere with these prices. The purported benefit of these regulations is that they will raise workers’ incomes to a fairer level needed for subsistence, although most labor economists believe that the long-run effect of minimum wage regulations is to displace workers from jobs, especially when the minimum wage level is high relative to local labor market conditions. It appears in this regard that teenagers, particularly minority teenagers, have been hit hardest by the adverse employment effects of higher minimum wage levels.

Just as we do not want to standardize product prices, we also do not wish to standardize quality except when there are legitimate reasons for doing so, as in the case of the provision of minimal safety levels for cars. Electronic stability control and accident avoidance systems are beneficial safety features, but they are also expensive. We would like to give consumers the option to purchase such equipment; the more expensive cars typically offer these features. However, we currently do not require that all cars have them, because those features would make up a substantial part of the product price at the low end of the market. Instead of mandating all available safety devices for all cars, we have required that certain minimal safety features be universal, and we permit other safety features to be optional. Consumers who place substantial value on safety can purchase the cars offering these additional features, and we can continually revise the nationally mandated safety standards to reflect the safety floor that is most sensible to impose on a universal basis.
Impact of the Oversight Process

The objective of regulatory oversight is to foster better regulations, not necessarily less regulation. However, one consequence of improving regulation is that we will eliminate those regulations that are unattractive from the standpoint of advancing the national interest. Moreover, much of the impetus for regulatory oversight has been a concern with the excessive costs imposed by unattractive regulations, so that considerable attention has been devoted to these costs.

Trends in Major Regulations

The OMB designates regulations that impose annual costs of $100 million or more as major rules. Notwithstanding the substantial cost level required for this designation, such regulations are not uncommon. Figure 2.4 presents a chart of the number of new final major rules that were promulgated from 1981 to 2016. The regulations promulgated during Republican administrations are indicated by the black bars, while those issued in Democratic administrations are indicated by the gray bars. There is, as one might expect, greater regulatory activity in Democratic administrations, but the differences in the number of major rules issued are not especially stark. However, the total level of regulatory costs associated with these initiatives does vary considerably by administration. The first three years of the George W. Bush administration accounted for a total of only $4.4 billion in new regulations, while the Clinton administration issued $13.1 billion in new regulations in the year 2000 alone.

Figure 2.4 Number of Final Economically Significant Rules Published by “Presidential Year” Source: Susan E. Dudley, “Can Fiscal Budget Concepts Improve Regulation?” NYU Journal of Legislation and Public Policy 19, no. 2 (2016): 259–280, figure 3.

What is particularly striking in figure 2.4 is the series of spikes in regulatory activity: years in which at least seventy major new regulations were issued. The timing of four of these spikes is not random, as regulatory activity surged in the final years of the administrations of President George H. W. Bush, President Clinton, President George W. Bush, and President Obama. The only presidential term that did not end with a surge in regulatory activity was President Reagan's second term, whose final full year was 1988. However, since there would be policy continuity with the succession of Vice President Bush to the presidency, the Reagan administration felt no urgency to finalize its regulatory agenda.

The rising costs of regulatory policy and the surge in regulatory activity at the end of presidential administrations do not necessarily imply that the regulatory initiatives are unsound. Agencies may wish to issue worthwhile regulations during the administration in which the proposals were developed so that the administration can be credited with the initiative. It may also be the case, however, that regulations that are particularly controversial or do not have clear-cut merit may be given lower priority until the end of an administration.

The Costs and Benefits of Major Regulations

The stakes involved are enormous. In 1990 President George H. W. Bush noted the staggering levels of costs involved:

Federal regulations impose estimated direct costs on the economy as high as $175 billion—more than $1,700 for every taxpayer in the United States. These costs are in effect indirect “taxes” on the American public—taxes that should only be levied when the benefits clearly exceed the costs.13

Roughly half of these costs are attributable to EPA regulations, as earlier estimates of the costs imposed by EPA policies indicated that these regulatory costs alone were in the range of $70–$80 billion per year.14 While the cost levels are indeed substantial, advocates of regulation observe that the benefits are considerable as well. During the Obama administration, OIRA prepared a tally of the benefits and costs by year for all major rules for which there were available benefit and cost estimates. Table 2.2 presents these values for the 2005–2014 decade. The values shown are the midpoints of the benefit and cost ranges for each year. The regulatory costs for this subset of major rules are $92.9 billion, which is a considerable amount. The peak year for regulatory costs was 2012, in which new regulations had a price tag of $22.5 billion.

Table 2.2 Benefits and Costs of Major Rules, 2005–2014

Fiscal Year    Number of Rules    Benefits    Costs
2005                 12             134.9       6.5
2006                  6               4.9       1.7
2007                 12             139.4      13.2
2008                 12              31.5      11.2
2009                 15              24.6       8.7
2010                 17              68.5      12.3
2011                 12              81.1       9.9
2012                 14             110.0      22.5
2013                  7              60.9       2.9
2014                 13              17.7       4.0
Total               120             673.5      92.9

Source: U.S. Office of Management and Budget, 2015 Report to Congress on the Benefits and Costs of Federal Regulations and Unfunded Mandates on State, Local, and Tribal Entities (Washington, DC: U.S. Government Printing Office, 2016). Note: Benefits and costs are in billions of 2015 dollars. Figures reflect midpoint of OMB ranges.
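As a quick check on the totals reported in table 2.2, the following sketch simply sums the yearly figures from the table and computes the implied overall benefit-cost ratio.

```python
# Yearly figures transcribed from table 2.2 (billions of 2015 dollars, midpoints of OMB ranges).
rules    = [12, 6, 12, 12, 15, 17, 12, 14, 7, 13]                      # fiscal years 2005-2014
benefits = [134.9, 4.9, 139.4, 31.5, 24.6, 68.5, 81.1, 110.0, 60.9, 17.7]
costs    = [6.5, 1.7, 13.2, 11.2, 8.7, 12.3, 9.9, 22.5, 2.9, 4.0]

print(f"total rules: {sum(rules)}")                                     # 120
print(f"total benefits: {sum(benefits):.1f}")                           # 673.5
print(f"total costs: {sum(costs):.1f}")                                 # 92.9
print(f"overall benefit-cost ratio: {sum(benefits) / sum(costs):.1f}")  # about 7.2
```

The decade's major rules in this tally generated roughly $7 of estimated benefits for every $1 of estimated costs.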

The benefits that are derived from these regulations dwarf the level of costs, as the total benefit amount is $673.5 billion. Estimated regulatory benefits exceed costs in every year, sometimes by more than an order of magnitude. Some of these benefit estimates are controversial. But if the benefits of these policies are in fact this great, these regulations provide positive net benefits to society and are not wasteful impositions.

Regulatory agencies differ quite markedly in their regulatory activities. Table 2.3 presents the distribution of the number of major rules, their benefits, and their costs for different regulatory agencies for the 2005–2014 decade. The most prominent agency is the EPA, which is responsible for the greatest regulatory costs and benefits of any agency. Not far behind is the DOT, which issued almost as many major rules but imposed costs only one-third of those of EPA regulations. There were also three rules jointly issued by the EPA and the DOT for which the costs and benefits exceed the totals for any other agency. These jointly issued rules all pertained to motor vehicle emissions: light-duty greenhouse gas emission standards and corporate average fuel economy (CAFE) standards, fuel efficiency standards for commercial medium- and heavy-duty on-highway vehicles and work trucks, and a joint rulemaking to establish 2017 and later model year light-duty vehicle greenhouse gas emissions and CAFE standards.

Table 2.3 Total Benefits and Costs of Major Rules, by Agency, 2005–2014

Agency                                                                    Number of Rules    Benefits    Costs
Department of Agriculture                                                        4               1.3       1.3
Department of Energy                                                            13              24.6       8.3
Department of Health and Human Services                                         16              29.1       3.3
Department of Homeland Security                                                  2               0.3       0.2
Department of Housing and Urban Development                                      1               3.0       1.2
Department of Justice                                                            4               3.7       1.2
Department of Labor                                                              8              18.8       4.8
Department of Transportation                                                    28              28.0      13.5
Environmental Protection Agency                                                 32             514.0      45.0
Joint Department of Transportation and Environmental Protection Agency          3              50.4      14.0

Source: U.S. Office of Management and Budget, 2015 Report to Congress on the Benefits and Costs of Federal Regulations and Unfunded Mandates on State, Local, and Tribal Entities (Washington, DC: U.S. Government Printing Office, 2016). Note: Benefits and costs in billions of 2015 dollars. Figures reflect the midpoint of OMB ranges.

In addition to these regulatory efforts that impose costs, regulatory policies also include cost-saving deregulation efforts, which were often spearheaded by economists in prominent government positions. The Council of Economic Advisors estimates that airline deregulation led to $15 billion worth of gains to airline travelers and airline companies.15 Similarly, estimates suggest that savings resulting from trucking deregulation have been in excess of $30 billion annually.16 The annual benefits from railroad deregulation have also been substantial—on the order of $15 billion annually.17 The total savings from these deregulation efforts in the transportation field are on the order of $60 billion per year, a substantial payoff indeed for a return to greater reliance on market forces. Notwithstanding the success of these deregulation efforts, deregulation is not always desirable, as regulations often have a legitimate role to play, particularly when market failures occur.

Alternative Measures of the Scale of Regulation

An instructive measure of trends in regulatory burdens is provided by the number of pages published in the Federal Register. One would expect there to be a correlation between the number of pages devoted to government rules and regulations and the costs these regulations impose. This need not be the case if, for example, agencies become adept at editing their regulatory documents to make them shorter but no less burdensome. Moreover, some Federal Register entries modify regulations and decrease costs rather than increase them. However, it is generally believed that a positive, albeit highly imperfect, correlation exists between the amount of federal regulation published in the Federal Register and the regulatory costs imposed. These statistics represent measures of the flow of new regulatory activity rather than the total stock of regulations.

Figure 2.5 indicates the trends in this measure over the past eighty years. In 1936 the number of pages in the Federal Register was relatively modest—2,620. The pace of regulation increased steadily but slowly until 1970, when the number of pages was 20,036. It is apparent from figure 2.5 that a rapid escalation in regulation began in that decade. The 1970s marked the establishment of the new wave of health, safety, and environmental regulations, which greatly expanded the role of the government and its regulatory activities. By 1980 the number of pages in the Federal Register had reached 87,012. The first half of the 1980s marked a decrease in the dissemination of new regulations, which was consistent with the Reagan administration’s efforts to deregulate and roll back regulations. However, by the second term of the Reagan administration, there was renewed regulatory activity, which is also reflected in the subsequent increase in the number of pages of regulations published in the Federal Register. The number of pages in 1990 was 53,620, which was below the previous peak, but by 2000, the number of pages reached 83,294.

Figure 2.5 Federal Register Pages Published, 1936–2014 Source: U.S. Office of the Federal Register, Federal Register Pages Published, 1936–2014, https://www.federalregister.gov/uploads/2015/05/OFR-STATISTICS-CHARTS-ALL1-1-1-2014.xls.

The more recent upward trend in the total number of pages published in the Federal Register is more reflective of the increased volume of regulatory initiatives under the Clinton administration and the George W. Bush administration. Whereas about 50,000 pages were published during many of the years in the 1980s, since 1993 the total Federal Register page count has been in a high but narrower range from 67,518 to 82,480. How much meaning one should attach to such statistics is unclear. For example, some years of peak regulatory activity, such as 1980, include statistics that are quite misleading as a measure of regulatory burden. That period featured a flurry of regulatory initiatives at the end of the Carter administration, which was subsequently followed by a rescinding of regulations and a major deregulation effort on the part of the incoming Reagan administration. With this principal exception, however, the overall implication of figure 2.5, that regulation has become an increasingly important part of the economy, is certainly valid.

Other measures of regulatory activity have similar implications. The Code of Federal Regulations summarizes the stock of existing regulations, whereas the Federal Register page count provides a measure of the flow of annual regulations. Figure 2.6 provides the tally of the number of pages in the Code of Federal Regulations from 1950 to 2014. The total number of pages of regulation in the Code of Federal Regulations was under 10,000 in 1950 and had risen to 54,834 by 1970. The emergence of health, safety, and environmental regulations and the accompanying jump in Federal Register pages was also reflected in a doubling of the stock of regulations in the Code of Federal Regulations, which reached 102,195 by 1980. By 2014, the number of pages in the Code of Federal Regulations was 175,268, as the total stock of regulations has continued its upward trajectory.

Figure 2.6 Trends in Code of Federal Regulation Pages, 1950–2014 Source: U.S. Office of the Federal Register, Code of Federal Regulations, Total Volumes and Pages 1950–2014, https://www.federalregister.gov/uploads/2015/05/OFR-STATISTICS-CHARTS-ALL1-1-1-2014.xls.

Another useful measure of regulatory activity is total spending by regulatory agencies, which gives some sense of the changing scale of regulatory efforts but does not measure the total societal costs associated with regulatory compliance. The data in table 2.4 show a similar upward trend in regulatory agency spending by fiscal year. In inflation-adjusted 2009 dollars, total spending by regulatory agencies rose from $3 billion in 1960 to $61 billion in 2017. The composition of the spending has also shifted. Whereas social regulations accounted for 66 percent of all agency spending in 1960, by 2017 this share had grown to 82 percent. In addition to the jump in spending on social regulation between 1970 and 1980 (the decade that marked the establishment of such agencies as the EPA and OSHA), there has also been a tremendous increase in spending on homeland security in response to the September 11, 2001, attack on the World Trade Center. Department of Homeland Security spending now constitutes 46 percent of all regulatory budget costs. The appendix to this chapter includes trends in agency staffing and detailed breakdowns of spending patterns that also document the increased prominence of social regulation.

Table 2.4 Spending Summary for Federal Agencies (Fiscal Years, Millions of 2009 Dollars in Outlays)

The Character of Regulatory Oversight Actions

It is also instructive to consider the mix of actions undertaken through the regulatory oversight process to assess the nature of the oversight activity that has led to many of these changes. Table 2.5 summarizes the oversight actions undertaken since 1990. During the early years of the oversight process, the OMB approved over 70 percent of regulations without change. At present, the overall approval rate without any changes is just 8 percent.

Table 2.5 Types of Action Taken by the OMB Regulatory Oversight Process on Agency Rules, 1985–2015 (Percent of Proposed Rules)

One should be cautious in attributing any change in the character of the regulatory oversight process to the trends exhibited by the statistics in table 2.5. A higher percentage of regulations are changed as a result of the current review process in large part because of the increased selectivity of the regulations that are earmarked for review. The number of executive order reviews has decreased over time, and the reviews are consequently much more targeted than before. One would therefore expect a higher percentage of the reviewed regulations to be revised in response to the review efforts. Either with or without changes, the OMB now approves 90 percent of all regulations, but in almost all instances, changes are made in the regulatory proposal before it is issued by the agency. Agencies also appear to be able to anticipate the objections that will be raised to their regulations, as the percentage of regulations that are withdrawn by regulatory agencies has dropped to under 2 percent.

Some of these changes to regulatory proposals that have occurred as part of the OIRA review process have been quite consequential. For example, at the OMB’s insistence, OSHA offered firms a variety of alternative means of compliance to reduce the explosion hazards arising from dust levels in grain mills. This expanded flexibility did not impede the safety effects of the regulation, but it did lower the regulatory costs. The high percentage of regulations that are consistent with OMB principles, either after such changes are made or without change, also indicates that the dominant emphasis of the OMB process is to promote negotiated solutions that enhance regulatory policy rather than simply to be obstructionist. The OMB oversight process has limited political resources, so it cannot afford to do battle in every regulatory arena, even though few would claim that 98 percent of all regulations that are reviewed will in fact maximize the net benefits to society. The percentage of instances in which the OMB blocks regulations is quite small. In 2010, for example, 5.7 percent of the regulations reviewed were withdrawn by the regulatory agency, and none were returned for reconsideration. Many of the withdrawn regulations were among the most burdensome.

Perhaps the most interesting trend exhibited in table 2.5 pertains to the first two rows of the table. The percentage of regulations found consistent with OMB guidelines without any change dropped by 64 percentage points from 1990 to 2015, and the percentage found consistent with change rose by a comparable amount over that period. The dominant emphasis of OMB actions has been either to approve regulations or to promote moderate modifications of them, and over time there has been an increased attempt to alter regulations in an incremental fashion rather than simply to approve them without any change. Such incremental modifications in regulation are where we would expect the regulatory oversight process to have its greatest influence, because major conflicts, such as those over the entire thrust of a regulatory policy, would be escalated to higher political levels. If all regulatory policy decisions were escalated in this manner, the president would have little opportunity to devote time to other national problems. In any given year, agencies issue dozens of major regulations and an even greater number of minor regulations. Given the substantial volume of regulatory activity, the only feasible way to address these issues is to resolve them within the interagency negotiations between the regulatory agency and the OMB, saving appeals to a higher level for the small percentage of regulatory issues that involve controversial matters of national policy. In the Reagan administration, one such policy meriting presidential involvement was the decision with respect to acid rain policies, and in the George H. W. Bush administration, global warming policies received the greatest presidential scrutiny. In the Clinton administration, there was substantial high-level involvement in the rewriting of the Superfund law, which governs the treatment of hazardous wastes. Initiatives related to energy efficiency and climate change were prominent in the Obama administration. More routine regulations, such as standards for the combustion of municipal waste, are handled without a national debate.

Judicial Review of Regulatory Impact Analyses

The courts play no formal role in terms of a requirement that agency RIAs undergo judicial approval.18 But the courts are empowered under the Administrative Procedure Act to set aside agency efforts that are “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.”19 RIAs may play a pivotal role in this assessment, as they may serve to indicate whether the agency behaved in a reasonable manner by using an appropriate methodology and considering relevant and reliable data in making the policy decision. To date there have been at least thirty-eight judicial reviews of agencies’ benefit-cost analyses. The reviews sometimes overturn regulations, but in other instances, the reviews may suggest a lesser role for benefit-cost analyses. If the agency has a narrow statutory mandate, the court may rule that the agency was not permitted to base its policies on a benefit-cost assessment. Alternatively, the court could conclude that the agency had a statutory obligation to utilize benefit-cost analysis to justify its regulation. There also may be claims with respect to the adequacy of the benefit-cost analysis, such as whether it considered all pertinent factors, used appropriate data, and adopted a sound methodology. The agency’s RIA also could play an indirect role by highlighting deficiencies in the reasoning that led to the regulation even if the analysis itself was not flawed. An agency usually has leeway in basing policies on benefit-cost analyses, given the substantial degree of discretion that agencies have in setting policies when the statutory language does not indicate that the policy is inconsistent with the agency’s statutory mandate.20 Indeed, in the 2014 Homer City Generation case, the EPA was given permission to consider costs and to utilize cost-effectiveness analysis in setting emissions standards, although the EPA is not required to do so. The challenges to RIAs have been of several types.
First, the scope of the analysis may be challenged: were all pertinent benefits and costs identified and evaluated? For example, the court concluded that the EPA’s asbestos ban analysis did not consider the consequences of dangerous substitutes for asbestos. Second, the underlying assumptions or the methodology used in the analysis may be challenged, such as the EPA’s assumptions regarding the maximum chemical contaminant levels at which there are no health risks. Third, the agency’s analysis may not be sufficiently transparent, in that it may not have provided enough information to evaluate the methodology and assumptions used in the analysis, such as the Department of Energy’s arbitrary selection of the discount rate used in evaluating appliance energy efficiency standards.

Unless there is regulatory oversight legislation to formalize judicial review, such judicial reviews of agencies’ regulatory analyses will not be as routine as the OIRA reviews. But even under the current system, judicial review can serve as an additional check on the more extreme cases in which agencies overstep their statutory authority. Particularly when the regulatory stakes are high and no sound justification exists for the agency’s policy, judicial challenges could potentially lead to overturning regulations, or the agency could be prompted to further justify the regulatory policy.

What Do Regulators Maximize?

In theory, regulatory agencies serve to maximize the national interest subject to their legislative mandates. Similarly, the OMB is presumably motivated to maximize the net benefits (benefits minus costs) to society. Such a characterization of regulatory objectives is, unfortunately, excessively naive. Diverse factors influence policy decisions, many of which have very little to do with these formal statements of purpose. What is clear at this stage is that there are certainly influences at work other than those that are formally specified. However, economists have yet to reach a consensus regarding the specific formulation that best captures the political mechanisms at work. A brief review of some of these theories can, however, highlight the range and the types of approaches that have been taken.

Capture Theory

Under the capture theory of regulation, such as that espoused by George Stigler, the regulatory agency is captured by the economic interests that it serves.21 Stigler has been most successful in testing this model with respect to the economic regulation agencies, such as the Interstate Commerce Commission. Examples of how government regulation can foster industry interests abound. Regulation of airline fares can, for example, provide a floor on airline rates that enables firms to make greater profits than if there were price competition. Similarly, minimum quality standards for products can promote the interests of the more established and advanced firms in the industry, which will use these mandated quality standards to squeeze the producers with less advanced technological capabilities. Most models based on capture theory recognize the competing demands on regulatory agencies. Private interests as well as public interests may affect the political survival of the regulatory officials as well as the agency’s budget. Although the most direct descendant of Stigler’s work is that of Peltzman,22 other authors have developed similar models reflecting the diversity of political influences at work. Roger Noll has developed an external signaling theory of regulation whereby regulatory agencies attempt to minimize the conflicting criticism that appears through signals from the economic and social environment in which the regulatory agency operates.23 Noll proposes that agencies construct an administrative apparatus for the development and enforcement of their regulations to promote the ability of groups that approve of their actions and to limit the ability of political forces that disapprove of their actions.
Other Theories of Influence Patterns

Other researchers have also formulated models reflecting diverse patterns of influence but have concluded that particular sets of influences dominate. For example, Wilson and Stewart suggest that regulatory agencies have substantial discretion with respect to the regulatory actions they take, so that it is the regulatory agency that plays the dominant role.24 Other authors have advocated a quite different view, in which Congress, not the regulatory agency, has the dominant role.25 The leverage of Congress stems from the fact that Congress drafts regulatory statutes, and the congressional committees are responsible for setting the budgets of the regulatory agencies and for confirming the leading administrators in these agencies.

Comprehensive Models of Regulatory Objectives

In all likelihood, the actual outcomes are influenced by a multiplicity of factors that cannot be characterized by any simple, single model. The regulatory agency does not have sole control, nor does the OMB. Moreover, Congress and the judiciary play a restraining role, and lobbyists for and against the regulation can affect the political payoffs to the regulatory agency as well. The actual strength of these influences undoubtedly varies depending on the particular context.

An interesting case study of the extent to which there are multiple influences at work is provided by a detailed analysis of the rulemaking process for the EPA regulations that implemented the industrial effluent standards used to control water pollution. The study of the evolution of these standards by Magat, Krupnick, and Harrington highlights the types of outcomes that will ultimately be explained through an analysis of the competing interests affecting regulatory outcomes.26 The heterogeneity of the regulation across industries and across firms of different sizes clearly suggests that no simple or naive regulatory objective guides behavior. Through detailed statistical analysis of a series of decisions made by the EPA as part of this rulemaking process, Magat et al. identified a variety of factors that were influential in the setting of these water pollution standards.

One such influence was efficiency concerns. The EPA did adjust the stringency of regulations in different industries to reflect the differences in compliance costs across firms. This is the kind of heterogeneity one would want to promote, in that standards should not be as stringent for industries that must bear greater burdens to reduce pollution. In those contexts, the costs of compliance will be greater, so that to maximize the net benefits of the standard, one would want to reflect these cost differences in the standard level. A second influence was the quality of the economic analysis supporting the standard. Standards supported by high-quality economic analyses were more likely to lead to more stringent effluent guidelines than those lacking substantive support. This result also suggests that there is a sense of economic rationality to the process, in that the strength of the analysis does affect the policy outcome. It should be noted, however, that the particular price and cost effects of the regulation did not appear to be as influential as the overall quality of the economic analysis. Other players have an impact as well. The economic resources of the trade association for the particular industry affect the stringency of the standards in the expected manner. In particular, industries with large budgets for their trade associations are able to obtain weaker standards, after taking into account other factors that should determine the stringency of the regulation.
Trade association budgets appear to be much more influential than the volume of industry comments provided, in that these resources presumably reflect the political clout of the lobbying group to a greater degree than does the number of pages of comments submitted.

Conclusion

In later chapters we develop a series of models of the regulatory process. All such models should be viewed as simplifications of the actual objectives guiding the regulatory agencies. Economists have made substantial progress in recent decades in developing approaches to indicate how regulators make decisions, which are often quite different from what one would predict based on their legislative mandates or their stated agency objectives. Various political factors are also at work and will affect policy outcomes. Despite the multiplicity of these influences, one should not understate the pivotal role that legislative mandates have. These mandates, which are written by Congress, in many circumstances define the terms of the regulatory debate and impose stringent limits on the scope of discretion of the regulatory officials. It is through these mandates that Congress has a long-run influence on regulatory policy, even though most short-run regulatory decisions appear to be governed by actions of the regulatory agency, the influence of the regulatory oversight process, and recognition of the political factors at stake in the regulatory policy decision.

Questions and Problems

1. A frequent proposal has been to replace the oversight process with a system known as a “regulatory budget.” Each agency would be assigned a total cost that it could impose on the American economy, and its task would be to select the regulations that best foster the national interest subject to this cost. Can you identify any problems with the regulatory budget approach? How feasible do you believe it would be to calculate the costs of all the regulations of a particular agency? What, for example, are the costs associated with affirmative action? Are they positive or negative? What are the pros and cons of Trump’s two-for-one regulatory approach?

2. Inadequacies in government action are frequently called “government failures.” In some cases, government failures reinforce market failures. In particular, the government may promote inefficient outcomes in a way that exacerbates the shortcomings of the market rather than alleviates them. Can you think of any examples where such mutually reinforcing failures might occur and the reasons they might occur?

3. One justification often given for using a variety of conservatism factors in risk analyses is that society is risk-averse, so that we should be conservative. Can you identify any flaws in this reasoning?

4. Regulatory agencies are not permitted to publicly release the details of their regulatory proposals until after the appropriate review by the OMB, as outlined in figure 2.1. How do you believe the process would change if the agency first issued the proposal publicly and then began its discussions with the OMB? Do you believe this change would improve the regulatory decision-making process? What new factors would be brought to bear?

5. What problems arise when using such measures as Federal Register page counts to assess the costs imposed by regulation? In the chapter as well as in the appendix, the measures of regulatory trends include Federal Register page counts, page counts from the Code of Federal Regulations, agency budget trends, and agency staffing trends. Which of these sets of information do you believe is most informative with respect to the regulatory costs imposed on society? What other measures do you believe would be useful in assessing the changing regulatory burden?

6. In your view, what is the appropriate rate of discount for regulatory policies? Suppose that the measure is the real rate of return to capital. How would you measure this? If a group of economists were given the task, do you believe they would all arrive at the same answer? Why might there be differences in the discount rate estimate?

Appendix

Trends in Regulatory Agency Budgets and Staff

An instructive measure of the changing role of government regulation is provided by the magnitude of government expenditures in this area. Although the principal costs of regulations are those borne by business and the public at large, the levels of the budgets of the regulatory agencies do provide some index of the degree of regulatory activity. The Weidenbaum Center at Washington University, which was directed by Murray Weidenbaum (chairman of President Reagan’s Council of Economic Advisors), and the George Washington Regulatory Studies Center, which is directed by Susan Dudley (former OIRA administrator under President George W. Bush), regularly compile a series of tables summarizing these budgetary and staffing trends. These regulatory studies centers also provide detailed comments on regulatory proposals. Tables A.1 and A.2 summarize the key data. Table A.1 reviews the staffing trends, and table A.2 provides a very detailed breakdown of agency budgetary trends. These patterns are generally consistent with those displayed by the Federal Register page counts. Regulation accelerated dramatically in the 1970s, when the health, safety, and environmental regulation agencies grew substantially. The deregulation in the transportation fields in the 1980s, coupled with the moderation in the health, safety, and environmental regulation area, led to some reduction in the regulatory effort in the early 1980s. However, there is some evidence of a resurgence in regulation, particularly after the September 11, 2001, attacks.

Table A.1 Staffing Summary for Federal Regulatory Agencies, Selected Years*

Table A.2 Spending on Federal Regulatory Activity, by Agency, Selected Fiscal Years (Millions of 2015 Dollars)

Notes

1. Charles Wolf, Markets or Governments: Choosing between Imperfect Alternatives, 2nd ed. (Cambridge, MA: MIT Press, 1993); and Clifford Winston, Government Failure versus Market Failure (Washington, DC: AEI-Brookings Institution, 2006).
2. U.S. Office of Management and Budget, Regulatory Program of the United States Government: April 1, 1988–March 31, 1989 (Washington, DC: U.S. Government Printing Office, 1988), p. 20.
3. The empirical basis for this section is documented in W. Kip Viscusi, Product-Risk Labeling (Washington, DC: American Enterprise Institute Press, 1993).
4. Most economic models along the lines of a capture theory are based at least in part on the work of George J. Stigler, The Citizen and the State: Essays on Regulation (Chicago: University of Chicago Press, 1975).
5. For an overview of this effort, see Thomas Hopkins and Laura Stanley, “The Council on Wage and Price Stability: A Retrospective,” Journal of Benefit-Cost Analysis 6 (September 2015): 400–431.
6. Stephen Breyer, Breaking the Vicious Circle: Toward Effective Risk Regulation (Cambridge, MA: Harvard University Press, 1993).
7. See American Textile Manufacturers Institute v. Donovan, 452 U.S. 490 (1981).
8. See Chevron USA, Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984).
9. Letting s be the present value of this infinite stream, we have
s = v/(1 + r) + v/(1 + r)^2 + v/(1 + r)^3 + ⋯.
Multiply s by 1/(1 + r):
s/(1 + r) = v/(1 + r)^2 + v/(1 + r)^3 + ⋯.
Subtracting the second equation from the first, left-hand side from left-hand side and right-hand side from right-hand side, yields
s − s/(1 + r) = v/(1 + r), that is, s[r/(1 + r)] = v/(1 + r).
Solving this equation for s, we have
s = v/r.
10. U.S. Office of Management and Budget, Regulatory Program (1988), pp. 16–17.
11. The government could utilize an outcomes-based performance measure, such as total worker deaths and injuries. However, such a measure would be more effective for large firms than for smaller firms, which have a sufficiently small sample of workers that precise inferences cannot be drawn regarding the firms’ safety performance.
12. U.S. Office of Management and Budget, Regulatory Program (1988), p. 18.
13. Statement by George H. W. Bush in U.S. Office of Management and Budget, Regulatory Program of the United States Government, April 1, 1990–March 31, 1991 (Washington, DC: U.S. Government Printing Office, 1990), p. vii.
14. U.S. Office of Management and Budget, Regulatory Program (1988).
15. Council of Economic Advisors, Economic Report of the President (Washington, DC: U.S. Government Printing Office, 1988), p. 206.
16. Diane S. Owen, Deregulation in the Trucking Industry (Washington, DC: Federal Trade Commission, 1988).
17. Christopher C. Barnekov and Andrew N. Kleit, “The Efficiency Effects of Railroad Deregulation in the United States,” International Journal of Transport Economics 17 (February 1990): 21–36.
18. For a systematic exploration of all principal judicial reviews of benefit-cost analysis, see Caroline Cecot and W. Kip Viscusi, “Judicial Review of Agency Benefit-Cost Analysis,” George Mason Law Review 22 (Spring 2015): 575–617.
19. 5 U.S.C. § 706(2)(A) (2012).
20. The Chevron doctrine established the deferential approach to agency behavior, and this was recently affirmed in the Homer City Generation case. See Chevron v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984), and EPA v. EME Homer City Generation, 134 S. Ct. 1584 (2014).
21. George J. Stigler, “The Theory of Economic Regulation,” Bell Journal of Economics and Management Science 2 (Spring 1971): 3–21.
22. Sam Peltzman, “Toward a More General Theory of Regulation,” Journal of Law and Economics 19 (August 1976): 211–240.
23. Roger G. Noll, Reforming Regulation: Studies in the Regulation of Economic Activity (Washington, DC: Brookings Institution Press, 1971).
24. See James Q. Wilson, “The Politics of Regulation,” in James W. McKie, ed., Social Responsibility and the Business Predicament (Washington, DC: Brookings Institution Press, 1974); and Richard B. Stewart, “The Reformation of American Administrative Law,” Harvard Law Review 88 (June 1975): 1669–1813.
25. See Barry R. Weingast and Mark J. Moran, “Bureaucratic Discretion or Congressional Control? Regulatory Policymaking by the Federal Trade Commission,” Journal of Political Economy 91 (October 1983): 765–800; Mathew D. McCubbins, Roger G. Noll, and Barry R. Weingast, “Administrative Procedures as Instruments of Political Control,” Journal of Law, Economics, and Organization 3 (Autumn 1987): 243–277; and Mathew D. McCubbins, Roger G. Noll, and Barry R. Weingast, “Structure and Process, Politics and Policy: Administrative Arrangements and the Political Control of Agencies,” Virginia Law Review 75 (March 1989): 431–482.
26. Wesley A. Magat, Alan J. Krupnick, and Winston Harrington, Rules in the Making: A Statistical Analysis of Regulatory Agency Behavior (Washington, DC: Resources for the Future, 1986), pp. xi–xii.

I ANTITRUST

3 Introduction to Antitrust

The heart of our national economic policy long has been faith in the value of competition. —U.S. Supreme Court, Standard Oil v. FTC, 1951

There is no such thing as “unconstrained competition” in the modern economy. A firm is not allowed to blow up a rival’s factory or renege on agreements with customers. The government is relied on to enforce property rights and contractual arrangements. The issue, then, is not whether the government has a role in the economy but rather the extent of its role. As a general rule, as long as property rights and contracts are respected, most economists think firms and consumers should be left unconstrained. But there are always exceptions to any rule. Parts II and III explore some of those that provide a rationale for governments to regulate price, profit, product standards, worker safety, pollution, entry, exit, and the like. Here in part I, we will focus on the unregulated sector of the economy, where we rely on competition to be the primary mechanism to produce good economic results. What we will find, however, is that even here some constraints need to be placed on what firms do. Those constraints comprise antitrust (or competition) law, enforcement, and policy. Their purpose is to ensure that competition does not evaporate because firms decide to merge into a monopoly, or because a firm that legitimately came to dominate a market tries to illegitimately perpetuate or extend that dominance, or because firms decide to coordinate their prices rather than set them independently and compete for customers’ business (as in the lysine cartel of the mid-1990s— the cartel’s slogan was “The competitor is our friend, and the customer is our enemy”). The purpose of antitrust is simply to make markets work better by preserving competition. To understand the implications of existing antitrust policy and to design better policies, we need an understanding of the circumstances under which anticompetitive behavior might emerge. What conditions are ripe for it? What should we look for? How can we correct it? Those questions are addressed in the field of economics known as industrial organization (also called industrial economics). By developing theoretical models of an industry and empirically analyzing actual industries, industrial organization economists seek to answer such questions as: What determines the extent of competition? What is the effect of the number of firms on prices, investment, product variety, and other variables relevant to market performance? What industry conditions are conducive to cartel formation? What determines the number of firms in an industry? How can incumbent firms deter entry and promote exit? Under what conditions should we observe an industry being dominated by a firm? If we observe such dominance, is it bad for society? And, more broadly, when does an industry’s performance fall well short of a social welfare optimum? As an introduction to part I, this chapter has three objectives. First, we review why we generally like competition and dislike monopoly. In doing so, some key methods for evaluating economic policies are presented. Second, useful concepts from the field of industrial organization are discussed which will help us evaluate the appropriateness and impact of antitrust law and policy in specific industries. Finally,

background information on antitrust laws and their enforcement is provided to gain a general sense of the legal environment in which companies operate in the United States and other countries.

Competition and Welfare

We begin by considering the theoretical world of perfect competition. Every microeconomics text devotes much attention to the perfectly competitive model. The key assumptions are these:

1. Consumers are perfectly informed about all goods, all of which are private goods.
2. Producers have production functions that rule out increasing returns to scale and technological change.
3. Consumers maximize utility given budget constraints as defined by their income and product prices; producers maximize profits given their production functions and input prices.
4. All agents are price takers, and externalities among agents are ruled out.
5. A competitive equilibrium, which is a collection of prices such that all markets clear, is then determined.

An important welfare theorem that follows from the preceding assumptions is that a competitive equilibrium is Pareto optimal. This means that the equilibrium allocation cannot be replaced by another allocation of goods that would increase the welfare of some consumers without harming others. An important property of the equilibrium is that price equals marginal cost in all markets. Note that the ideal competitive world that we have described would have no need for government intervention in the marketplace. Government's role would be limited to defining property rights, enforcing contracts, and, if society so desires, engaging in income redistribution (using such instruments as taxation and transfer payments). Many of the listed assumptions will be relaxed and discussed in detail throughout this book. Of course, the key assumption to be discussed in this part of the book is the price-taking assumption. That is, antitrust economics is concerned with the causes and consequences of firms' abilities to (profitably) set price above marginal cost. That is the world in which we live and which creates an expanded role for government.

Welfare Tools

The competitive model described above was said to satisfy the condition of Pareto optimality. This is also referred to as Pareto efficiency or simply economic efficiency. One tool for evaluating the effect of a policy change (say, breaking up a monopoly) is the Pareto criterion. If everyone is made better off by the change (or no one is made worse off, and at least one person is made better off), then the Pareto criterion would say that the change is "good." It is hard to argue with this criterion for evaluating public policies. The problem is that rarely does a policy not harm at least some groups of people. Thus, if one strictly went by the Pareto criterion, then few policies would be adopted, and that would mean most economies would be locked into the status quo, which is arbitrary and could be unfair. A less stringent and more useful criterion is the compensation principle, which is equivalent to choosing policies that yield the highest total economic surplus. The basic idea is that if the "winners" from any policy change can, in principle, compensate the "losers" so that everyone is better off, then it is a "good" change. Note that actual compensation of the losers is not required. If it was required, then it would satisfy the Pareto criterion. We will return to critically discuss this criterion, but let us first apply it in the context of

comparing competition and monopoly. To illustrate, consider figure 3.1. The figure shows the market demand and supply curves for smartphones. Recall first a few facts about these two curves. The competitive industry’s supply curve is found by the horizontal aggregation of the supply curves of individual firms. The individual firms’ supply curves are their marginal cost curves; hence we can think of the supply curve in figure 3.1 as the industry’s marginal cost curve.

Figure 3.1 Demand and Supply Curves in the Determination of Economic Surplus

Another useful point is that the area under the marginal cost curve represents the sum of the incremental costs for all units of output and, as a result, equals the total cost. Hence the total cost of producing Q* smartphones is the area of trapezoid 0Q*DC (this amount is exclusive of any fixed costs). Under certain assumptions, the demand curve can be viewed as a schedule of the marginal willingness-to-pay of customers.1 For example, at the competitive equilibrium (price P*, output Q*), the marginal willingness-to-pay P* exactly equals marginal cost at the output Q*. Because the area under this schedule of marginal willingness-to-pay is total willingness-to-pay, consumers are willing to pay area 0Q*DA for output Q*. The difference between total willingness-to-pay and total cost is the area ACD and is referred to as the total surplus generated in the smartphone market. Finally, it is common to divide total surplus into consumer surplus of AP*D and producer surplus of P*CD.
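These areas are easy to compute for specific curves. The short sketch below assumes a linear demand curve P = 100 − Q and a linear marginal cost (supply) curve MC = 20 + Q; the parameter values are illustrative rather than those behind figure 3.1.

```python
# Illustrative surplus calculation for a competitive market with assumed
# linear demand P = 100 - Q and industry marginal cost MC = 20 + Q.

a, b = 100.0, 1.0      # inverse demand: P = a - b*Q
c, d = 20.0, 1.0       # marginal cost:  MC = c + d*Q

# Competitive equilibrium: demand price equals marginal cost.
Q_star = (a - c) / (b + d)            # 40 units
P_star = a - b * Q_star               # price of 60

# Total willingness-to-pay = area under demand (area 0Q*DA);
# total (variable) cost = area under marginal cost (area 0Q*DC).
wtp = a * Q_star - 0.5 * b * Q_star**2
cost = c * Q_star + 0.5 * d * Q_star**2

total_surplus = wtp - cost                           # area ACD
consumer_surplus = 0.5 * (a - P_star) * Q_star       # area AP*D
producer_surplus = P_star * Q_star - cost            # area P*CD

print(Q_star, P_star)                                      # 40.0 60.0
print(total_surplus, consumer_surplus, producer_surplus)   # 1600.0 800.0 800.0
```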

Consumer surplus is defined as the total willingness-to-pay 0Q*DA less what the consumers pay. Because consumers pay the rectangle defined by price P* and output Q* (that is, area 0Q*DP*), the area AP*D in figure 3.1 is the consumer surplus. Producer surplus, defined in an analogous manner, is equal to the variable profit of the firms in the industry; that is, it is profit without any fixed costs subtracted from it. Because firms receive revenues of price P* times output Q* (that is, area 0Q*DP*) and they incur costs equal to the area under the marginal cost curve, 0Q*DC, they earn a producer surplus of the difference, P*CD. We next show that maximizing total surplus is equivalent to selecting the output level at which price equals marginal cost. In figure 3.1, assume that output Q′ is being produced and it is sold at price Q′F. Clearly, at the output Q′, the marginal willingness-to-pay Q′F exceeds the marginal cost Q′H. Hence, a small increase in output of ΔQ would increase surplus by the area of the slender shaded region (approximately FH height by ΔQ width). Output increases would continue to increase surplus up to output Q*. Hence, maximizing surplus implies that output should be increased from Q′ to Q*, adding an increment to total surplus of area FHD. Of course, by an analogous argument, we can show that output increases beyond Q* would reduce surplus, since marginal cost exceeds marginal willingness-to-pay. In short, equating price and marginal cost at output Q* maximizes total surplus. It is useful to provide another interpretation for the area FHD in figure 3.1. Recall that this area represents potential increases in total surplus if for some reason output is held at Q′. For illustrative purposes, assume that a cartel has agreed to restrict output to Q′, charging price Q′F. This results in a so-called deadweight loss of surplus equal to area FHD. This is often referred to as the social cost of monopoly. In other words, without the cartel, competition would cause price to equal marginal cost, yielding the higher total surplus of ACD as compared to the surplus under the cartel case of ACHF. As before, it is sometimes said that there is a deadweight loss in consumer surplus of the triangle FGD and a deadweight loss of producer surplus of the triangle GHD. Now, consider the point made earlier about the compensation principle and the argument that if the winners can compensate the losers, the policy change is a good one. Using a simple monopoly versus competition example, we will show that additional insights can be obtained by considering consumers and producers separately. Monopoly versus Competition Example Figure 3.2 provides a monopoly solution with price Pm and quantity Qm. For simplicity, average cost AC is assumed to be constant and therefore equals marginal cost MC. MR is marginal revenue associated with Demand. At any quantity, the height of Demand is the price at which the good is sold, while the height of MR is the revenue received by the monopolist from selling the last (or marginal) unit. Marginal revenue is less than price (and, therefore, MR lies below Demand) because, for a monopolist to sell that additional unit, it must lower the price on all other units. The additional revenue it receives is then the price from selling that marginal unit minus the forgone revenue from selling all of the inframarginal units at a lower price. Hence, price exceeds marginal revenue.

Figure 3.2 Monopoly versus Competition

In maximizing its profit, the monopolist chooses output Qm, where marginal revenue MR equals marginal cost MC. Recall why that condition must hold at a profit maximum. The marginal profit of the last unit sold equals what it adds to revenue less what it adds to cost, which is just marginal revenue minus marginal cost. If that difference is positive, then the monopolist could earn more profit by producing and selling a few more units as each additional unit brings in more revenue than it adds to cost. Similarly, if marginal revenue is less than marginal cost, then the last unit sold would be lowering profit, in which case the monopolist would do better by reducing supply. Only when marginal revenue equals marginal cost is there no way to raise profit; hence, Qm is the profit-maximizing level. The associated profit, or producer surplus, equals price minus average cost multiplied by quantity, or area PmPcCB. Consumer surplus equals the triangle APmB. Next, consider a policy to break up the monopoly and replace it with a competitive industry. Let us assume no change in costs, so that the competitive industry supply is the horizontal line at the level of MC. (This assumption may not be satisfied in practice, inasmuch as one reason for the existence of a monopoly may be some technological superiority that achieves lower costs of production.) Hence the new equilibrium is price Pc and output Qc. Consumer surplus increases to the triangular area APcD, and producer surplus disappears. In effect, the elimination of monopoly has led to a net gain in total surplus of triangle BCD. This triangle, the deadweight loss caused by the monopoly, is labeled DWL in figure 3.2. To reinforce the points we have made, we can use specific numerical demand and cost functions. In

particular, assume

Q = 100 − P    (Demand);
MC = AC = 40    (Marginal and average cost).
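The outcomes reported in the next paragraph can be verified directly from this specification; the following short sketch is a minimal check using the demand and cost numbers just given.

```python
# Minimal check of the monopoly and competition outcomes for Q = 100 - P
# and MC = AC = 40, the specification given in the text.

a, b = 100.0, 1.0          # inverse demand: P = a - b*Q
mc = 40.0                  # constant marginal (and average) cost

# Monopoly: marginal revenue a - 2bQ equals marginal cost.
Qm = (a - mc) / (2 * b)            # 30
Pm = a - b * Qm                    # 70
cs_m = 0.5 * (a - Pm) * Qm         # consumer surplus APmB   = 450
ps_m = (Pm - mc) * Qm              # producer surplus PmPcCB = 900

# Competition: price equals marginal cost.
Qc = (a - mc) / b                  # 60
Pc = mc                            # 40
cs_c = 0.5 * (a - Pc) * Qc         # consumer surplus APcD = 1800

dwl = cs_c - (cs_m + ps_m)         # deadweight loss BCD = 450
print(Pm, Qm, Pc, Qc)              # 70.0 30.0 40.0 60.0
print(cs_m, ps_m, cs_c, dwl)       # 450.0 900.0 1800.0 450.0
```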

The monopoly price is therefore Pm = $70, Qm = 30, and the competitive equilibrium is Pc = $40, Qc = 60.2

Monopoly: Total surplus = APcCB = $1,350; Consumer surplus = APmB = $450; Producer surplus = PmPcCB = $900.
Competition: Total surplus = APcD = $1,800; Consumer surplus = APcD = $1,800; Producer surplus = 0.

The pro-competition policy leads to an increase in total surplus from $1,350 to $1,800. On this basis, the policy should be carried out. Notice, however, that producer surplus falls from $900 to zero. The owners of the monopoly are therefore harmed. Consumers gain enough to compensate the monopoly owners and still be better off. That is, consumers gain by $1,800 − $450 = $1,350. In principle, consumers could compensate the monopoly owners with $900 to offset their loss and still come out ahead: they would be left with $1,800 − $900 = $900 of surplus, which exceeds the $450 they had under monopoly. Of course, as discussed earlier, under the compensation principle the compensation need not be carried out.

Is the Compensation Principle Compelling?

The role of an economist is not to determine what society should care about but rather what policies are most effective at achieving a society's objectives, whatever they may be. A relatively neutral stance would argue for policies that satisfy the Pareto criterion and thus make everyone better off (or at least some better off and no one worse off). However, as already noted, few policies meet that standard, so from a practical perspective, economists only ask of a policy that it raise total surplus so that there exists a set of transfers that would make everyone better off. While the particular policy need not involve those transfers, the standard response is to say that taxes and subsidies could consummate them. Unfortunately, little evidence suggests that a particular antitrust decision or regulatory policy that harms some parties, but still raises total surplus, affects subsequent redistribution policies. And if the transfers never occur to make everyone better off, then pursuing the maximization of total surplus is effectively giving equal weight to all impacted agents so that, for example, the loss of $1,000 to a consumer is valued the same as the gain of $1,000 to a shareholder. However, most people would be uncomfortable with such a transfer when the consumer has low income and would have spent that $1,000 on day care, while the shareholder is a hedge fund trader who would use that $1,000 to pay for a dinner at a Michelin three-star

restaurant. The allocation of income and wealth is an issue of increasing importance in the United States. It is now well documented that there has been growing income and wealth inequality and that the beneficiaries of economic growth are the wealthiest. This fact is apparent in figure 3.3 where a dramatic change in how growth is shared has taken place. While “all boats were lifted” by a growing economy prior to 1980, since that time the highest income earners have claimed a disproportionate share of income growth. The top 5 percent of income earners have seen their income grow by 60 percent since 1973, while the median income growth is only one-third as high, and incomes in the bottom fifth have not improved at all.

Figure 3.3 Real Family Income as a Percentage of 1973 Level Source: Center on Budget and Policy Priorities (calculations based on U.S. Census Bureau Data). From www.cbpp.org /research/poverty-and-inequality/a-guide-to-statistics-on-historical-trends-in-income-inequality (accessed July 24, 2016).

This rise in inequality has become an issue of concern to the body politic. It is relevant to antitrust and regulation because actions in those domains can have distributional consequences that could either exacerbate or ameliorate income and wealth inequality. As a case in point, with the exception of luxury goods, consumers in a market are likely to have lower income and wealth than the shareholders and executives of the companies supplying that market. Hence, a merger that raises total surplus but harms consumers while benefiting firms may not be desirable to a society that cares about the distribution of

income. Or a policy of deregulation that benefits consumers and results in lower wages for laborers may not be appealing if consumers tend to be wealthier than workers. The reason for raising the issue of the distributional consequences of government intervention, such as antitrust decisions, is not to take a position but rather to note that it should be a consideration in the evaluation of economic policies if income and wealth inequality is a concern to society. Later in this chapter, we will discuss the criterion commonly used in antitrust practice, which, in fact, is often not total surplus.3

Some Complications

Returning to the comparison of monopoly and competition, economies of scale were implicitly assumed to be relatively small. That is, we ignored the problem that arises when the representative firm's long-run average cost curve reaches its minimum at an output level that is not small relative to the market demand. In other words, in our monopoly example, we assumed that the single firm could be replaced with a large number of firms, with no effect on costs. To take an extreme case to the contrary, consider figure 3.4. Economies of scale are such that the long-run average cost curve LRAC reaches its minimum at an output level that is substantial relative to market demand. Situations of this kind are referred to as natural monopolies, to reflect that production can be most cheaply carried out by a single firm. The profit-maximizing monopolist would set price equal to Pm and output to Qm.

Figure 3.4 Economies of Scale and Natural Monopoly
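The cost penalty from dividing output among several firms under strong scale economies is easy to illustrate with an assumed cost function of the form C(Q) = F + cQ; the parameter values below are hypothetical, not taken from figure 3.4.

```python
# Illustrative scale-economies calculation (hypothetical cost parameters).
# With C(Q) = F + c*Q, average cost F/Q + c declines with output, so a single
# firm can serve the market more cheaply than several smaller firms.

F, c = 1000.0, 10.0          # fixed cost and constant marginal cost (assumed)
market_output = 100.0

def total_cost(q):
    return F + c * q

one_firm = total_cost(market_output)               # 2000
four_firms = 4 * total_cost(market_output / 4)     # 5000

print(one_firm, four_firms)                                  # 2000.0 5000.0
print(one_firm / market_output, four_firms / market_output)  # average cost 20 vs. 50
```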

Suppose that it were known that in order to have a sufficient number of firms in the industry for competition to obtain, each firm would only have enough demand to warrant producing an output of q. As figure 3.4 shows, the average cost of output q would be quite high and would result in a price of Pc, which exceeds the monopoly price. Economies of scale can then make monopoly the preferred market organization. Public utilities that distribute electric power or water are notable examples. In extreme cases of the type depicted in figure 3.4, the policy problem becomes one of regulating the natural monopolist. The approach usually followed in public utility regulation is to force the monopolist to price so as to earn only a “fair” rate of return on its investment. An alternative is to create a public enterprise, owned and operated by the government, though that is rarely pursued in the United States. More relevant to antitrust policy is the intermediate case, where economies of scale are moderate but not small relative to market demand. For example, it may be imagined that the size of the electric automobile market is only large enough to support three or four firms, each producing at the minimum point on its longrun average cost curve. This situation would give rise to an industry of three or four firms (that is, an oligopoly). The key factor differentiating oligopoly from perfect competition and monopoly is that the small number of firms creates a high degree of interdependence. Each firm must consider how its rivals will respond to its own decisions. Oligopoly theory does not yield any definitive predictions analogous to the “Price equals Marginal cost” prediction of perfect competition, or the “Price exceeds Marginal cost” prediction of monopoly. Nevertheless, most theories of oligopoly imply that price will exceed marginal cost, but by less than under monopoly. Yet oligopoly is quantitatively very significant in most industrial economies, and it is therefore an important topic for study. It should be stressed, in addition, that the prevalence of oligopoly does not necessarily imply that large-scale economies are the cause. In fact, whether or not economies of scale explain the existence of particular oligopolies is a key public policy concern. We will further discuss oligopolies in chapter 4. A second complication is the existence of product differentiation. Product differentiation refers to the situation in which buyers perceive differences in the products of rival sellers. The differences may be real differences, such as differences in size, styling, horsepower, and reliability for automobiles, or they may be primarily the result of image differences conveyed through advertising. The main requirement is that consumers regard the differentiation sufficiently important that they are willing to pay a somewhat higher price for their preferred brand. E. H. Chamberlin constructed the theory of monopolistic competition in which many competitors produce differentiated products.4 All firms that produce products that are reasonably close substitutes are members of the product group. Given these assumptions and the assumption of free entry, the long-run equilibrium of a monopolistic competitor is given by the tangency of the typical firm’s demand curve with its average cost curve, as displayed in figure 3.5.

Figure 3.5 Equilibrium of a Monopolistic Competitor

The monopolistic competitor earns zero profits in long-run equilibrium. This property is a consequence of the assumption of free entry. The entry of firms shifts in an individual firm's demand curve, because there is less total demand for each firm. Firm entry occurs to the point that profit is zero, when there is no longer an incentive for any further entry (and no incentive to exit either). As shown in figure 3.5, this equilibrium state is characterized by the tangency between the firm demand curve and firm average cost curve. At that point, note that the tangency price not only yields zero profit but is also the profit-maximizing price (because at any other point on its demand curve, price is below average cost and thus profit is negative). It is important to emphasize that product differentiation gives the firm's demand curve its negative slope and distinguishes it from the case of perfect competition, which has a flat firm demand curve at the market price. With a differentiated product, a firm can increase its price without losing all its sales to competitors. The relevant point here is that price exceeds marginal cost—the signal that there is a misallocation of resources. But consider Chamberlin's argument: The fact that equilibrium of the firm when products are heterogeneous normally takes place under conditions of falling average costs of production has generally been regarded as a departure from ideal conditions.… However, if heterogeneity is part of the welfare ideal, there is no prima facie case for doing anything at all. It is true that the same total resources may be made to yield more units of product by being concentrated on fewer firms.… But unless it can be shown that the loss of satisfaction from a more standardized product is less than the gain through producing more units, there is no "waste" at all, even though every firm is producing to the left of its minimum point.5
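The tangency equilibrium of figure 3.5 can be illustrated numerically. The sketch below assumes each firm faces the inverse demand p = a − q with cost F + cq, and lets entry push the demand intercept a down until the best attainable profit is zero; all parameter values are illustrative assumptions.

```python
# Illustrative long-run equilibrium of a monopolistic competitor.
# Assumptions (not from the text): firm inverse demand p = a - q, cost F + c*q.
# Entry lowers the intercept a until maximized profit is driven to zero.

F, c = 100.0, 10.0                      # fixed and marginal cost (assumed)

def max_profit(a):
    q = (a - c) / 2.0                   # profit-maximizing quantity
    return (a - q) * q - F - c * q

# Find the demand intercept at which maximized profit is exactly zero.
a_lo, a_hi = c, 1000.0
for _ in range(100):                    # simple bisection
    a_mid = 0.5 * (a_lo + a_hi)
    if max_profit(a_mid) > 0:
        a_hi = a_mid
    else:
        a_lo = a_mid
a_star = 0.5 * (a_lo + a_hi)

q_star = (a_star - c) / 2.0
p_star = a_star - q_star
avg_cost = F / q_star + c

# Price equals average cost (zero profit) but exceeds marginal cost.
print(round(p_star, 3), round(avg_cost, 3), c)   # 20.0 20.0 10.0
```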

X-Inefficiency

The deadweight loss from monopoly is referred to as an allocative inefficiency and arises because too little output is produced and consumed. Another type of inefficiency from monopoly is referred to as X-inefficiency.6 Thus far, we have assumed that both monopolists and perfect competitors combine their factors of production efficiently, thereby minimizing cost for each level of output. However, it can be argued that the pressures of competition force perfect competitors to be cost minimizers, whereas the freedom from competition makes it possible for the monopolist to be inefficient, or X-inefficient. That is, the monopolist

may operate at a point above its theoretical cost curve.7 Of course, X-inefficiency is inconsistent with the assumption that monopolists maximize profits. However, the separation of ownership from control in large firms with market power could permit managers to substitute their own objectives for the profit objectives of the owners. Or perhaps owners do not care only about profits. The latter is surely exemplified by owners of sports franchises, which often appear to be run for reasons other than profit. In these scenarios, X-inefficiency may arise. As Nobel Laureate Sir John Hicks famously claimed: “The best of all monopoly profits is a quiet life.” Monopoly-Induced Waste A third source of inefficiency created by monopoly is competition among agents to become a monopolist. Consider the example of a government-mandated monopoly in the form of a franchise. If figure 3.2 depicts the relevant demand and cost curves, then the franchise owner will earn profits equal to PmPcCB. Knowing that the firm that receives this franchise will earn rents of PmPcCB, firms will invest resources in lobbying the legislature or the regulatory agency in order to become the recipient of this franchise. This competition to earn monopoly profits uses up real resources in the form of labor by lobbyists and lawyers. These wasted resources represent a cost to society, just as do the traditional deadweight loss and any X-inefficiencies. Competition among firms for rents is appropriately referred to as rent-seeking behavior.8 How large is the welfare loss from rent-seeking behavior? We know that it cannot exceed the amount of monopoly profits (PmPcCB in figure 3.2). No firm would find it optimal to spend in excess of that amount to become a monopolist. In some simple models, it has been shown that if rent-seeking is perfectly competitive (that is, there are many identical firms), then all rents will be competed away.9 In that case, the total welfare loss from monopoly is PmPcDB. More generally, PmPcDB represents an upper bound on the welfare loss from monopoly (excluding any X-inefficiencies), while BCD is a lower bound. Rent-seeking behavior may arise in various ways. As just mentioned, competition for rents could take the form of firms lobbying legislators to get favorable legislation passed, for example, entry regulation or import quotas. When these lobbying activities use up real resources, they represent a welfare loss associated with monopoly. Alternatively, if favorable government actions are achieved by bribing legislators or regulators, then this is not a welfare loss but rather a transfer from the briber to the bribee. However, one could take the rent-seeking argument one step further and argue that agents will compete to become legislators or regulators in order to receive the rents from bribes. If real resources are used at that stage, then they represent a welfare loss. Rent-seeking behavior can also arise in the form of excessive nonprice competition. Suppose firms are able to collude so that price exceeds cost. The lure of this high price-cost margin could generate intense advertising competition as firms compete for market share. Depending on the particular setting, this advertising may have little social value and simply be the by-product of competition for rents. Historically, socially wasteful advertising has been thought to be a feature of the cigarette industry. 
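Using the monopoly example from earlier in the chapter (producer surplus of $900 and a deadweight loss triangle of $450), the bounds just described amount to simple arithmetic, restated in the sketch below.

```python
# Bounds on the welfare loss from monopoly, using the chapter's earlier example:
# producer surplus (monopoly profit) of $900 and a deadweight loss triangle of $450.

monopoly_profit = 900.0        # rectangle PmPcCB in figure 3.2
deadweight_loss = 450.0        # triangle BCD in figure 3.2

lower_bound = deadweight_loss                      # no resources spent on rent seeking
upper_bound = deadweight_loss + monopoly_profit    # all rents competed away (area PmPcDB)

print(lower_bound, upper_bound)    # 450.0 1350.0
```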
As we will see in later chapters, nonprice rivalry among firms in a cartel or in a regulated industry can lead to excessive spending on product quality, product variety, and capacity, as well as on advertising. Finally, unions have been found to be quite effective in extracting some of a firm’s profits in the form of higher wages. This higher wage results in the private marginal cost of labor exceeding its social marginal cost, so that a firm tends to use too little labor in the production process. This inefficient input mix represents yet another source of welfare loss associated with monopoly. Back when unions were more powerful than they are now, one study found that unions extracted in excess of 70 percent of monopoly

rents.10 It is important to distinguish the type of wasteful competition just described from competition for a monopoly through innovation. Perhaps the most common method for dominating a market is coming up with a better product or service. We've witnessed markets dominated by the likes of Apple, Amazon, eBay, Google, and Microsoft; and it is rightly recognized as a good thing, because their dominance was (at least initially) attributable to delivering what consumers wanted in an efficient manner. Though the competition to "own a market" may have been costly in the use of resources (think of all of the firms that failed), the process almost surely raised welfare. Some innovation implications of monopoly will be discussed later in this chapter, while in chapter 9 we will explore the role of antitrust when the primary form of competition is "for a market" rather than "in a market."

Estimates of the Welfare Loss from Monopoly

Having identified various sources of welfare losses due to price exceeding marginal cost, it is natural to wonder about the quantitative size of these losses in the U.S. economy. One method for estimating the traditional deadweight welfare loss (which we have denoted DWL) is as follows. From figure 3.2, we know that DWL equals BCD when the monopoly price is charged. BCD can be approximated by (1/2)(P* − Pc)(Qc − Q*); this approximation is exact if the demand function happens to be linear. Because P* and Q* are the actual price and quantity, respectively, one can collect data on P* and Q* for various firms or industries. However, we usually do not know the competitive price without estimating marginal cost. As it is typically a labor-intensive task to get a reliable estimate of marginal cost for even a single industry, to do so for a significant portion of the U.S. economy is unrealistic. We then need to find some alternative way of estimating DWL that does not require having data on Pc and Qc. In his pioneering study, Arnold Harberger used the following approach.11 To begin, one can perform a few algebraic manipulations and show that

DWL ≈ (1/2)ηd²P*Q*,  (3.1)

where η is the absolute value of the market demand elasticity and d is the price-cost margin. More formally, d = (P* − Pc)/P*, and η = |(ΔQ/Q)/(ΔP/P)|, where ΔQ = Qc − Q* and ΔP = P* − Pc. While data on industry revenue, P*Q*, are available, one needs to come up with estimates of d and η. To derive a ballpark figure for DWL, Harberger used the difference between an industry’s rate of return and the average for the sample to estimate the price-cost margin d, and simply assumed that η = 1. With this back-of-the-envelope technique, Harberger found that DWL was on the order of one-tenth of 1 percent of GNP. Though the assumption of unit elasticity is arbitrary, what is important is that the conclusion one draws from this estimate is robust to the value of η. Even increasing it fivefold will mean that DWL is only one-half of 1 percent of GNP. Harberger concluded that the welfare losses from monopoly are very small indeed. We thought it worthwhile to review Harberger’s work to show how one might go about estimating welfare losses from monopoly. However, there are several reasons to question the relevance and accuracy of his low estimate of DWL. First, it is an estimate based on data from the 1920s. Whether such an estimate is relevant to today’s economy is an open question. Second, we know that there are sources of welfare loss from monopoly other than DWL. Harberger estimated that the size of above-normal profits was around 3–4 percent of GNP. This leaves open the question of how much resources were used in competing for these

rents. Depending on the extent of such competition, we know that the true welfare loss could be as high as 3–4 percent of GNP. The third and perhaps most important reason for questioning the validity of Harberger's estimate is that later researchers have performed more careful analyses and found significantly higher values of DWL. Another study developed by Keith Cowling and Dennis Mueller took a quite different approach to estimating DWL.12 Their approach avoided having to make an arbitrary assumption on the demand elasticity by instead assuming that firms maximize profit. The first step in their analysis is to note that a firm's profit-maximizing price P* satisfies the following relationship:

(P* − MC)/P* = 1/η,  (3.2)

where MC is marginal cost. In words, a firm sets price so that the price-cost margin equals the inverse of the (absolute value of the) firm demand elasticity. Note that in a competitive industry, η is infinite, so that equation 3.2 tells us that P* = MC. Recall that Harberger showed that DWL could be estimated by (1/2)ηd²P*Q*, where d = (P* − MC)/P* (and we have replaced Pc with MC). From equation 3.2, it follows that 1/η = d. Now substitute d for 1/η in the expression that estimates DWL (equation 3.1):

DWL ≈ (1/2)(1/d)d²P*Q* = (1/2)dP*Q*.  (3.3)
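A small numerical sketch shows how equations 3.1 and 3.3 are applied and why the profit-maximization approach yields a much larger estimate; the revenue and margin figures below are hypothetical, not from the studies cited.

```python
# Hypothetical industry figures (not from Harberger or Cowling and Mueller).
revenue = 100.0e9        # P*Q*, annual industry revenue in dollars
d = 0.08                 # price-cost margin, (P* - MC)/P*

# Equation 3.1 with Harberger's assumption that the demand elasticity is 1.
eta_assumed = 1.0
dwl_harberger = 0.5 * eta_assumed * d**2 * revenue      # $0.32 billion

# Equation 3.3: profit maximization implies eta = 1/d, so DWL = (1/2) d P*Q*.
dwl_profit_max = 0.5 * d * revenue                      # $4.0 billion

print(dwl_harberger / 1e9, dwl_profit_max / 1e9)        # 0.32 4.0
```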

Substituting (P* − MC)/P* for d in equation 3.3, it follows that

DWL ≈ (1/2)[(P* − MC)/P*]P*Q* = (1/2)(P* − MC)Q* = Π*/2,  (3.4)

where Π* is firm profits. Because Π* = (P* − AC) Q*, where AC is average cost, the last equality in equation 3.4 uses the assumption that marginal cost is constant, so that MC = AC. Hence, the deadweight welfare loss created by a firm is approximately equal to half of its profits. With this method, data was collected on Π* for 734 U.S. firms for 1963–1966. Remember that Π* represents economic profits, not accounting profits. Hence Cowling and Mueller used 12 percent as the normal return on capital in the economy and subtracted normal profits from accounting profits to estimate Π*. Their estimate of DWL was around 4 percent of GNP, considerably higher than that found by Harberger. If one includes advertising expenditures as wasted resources associated with rent-seeking behavior, their measure jumps to 13 percent of GNP. Of course, inclusion of all advertising expenditures assumes that advertising lacks any social value. This assumption is clearly false, because some advertising reduces search costs for consumers. Thus, one would expect Cowling and Mueller’s best measure of the welfare loss from monopoly to lie somewhere between 4 and 13 percent of GNP. It is interesting that under their most comprehensive measure, General Motors by itself created a welfare loss of one-fourth of 1 percent of GNP! It is clearly important to understand the quantitative size of the welfare loss from price exceeding marginal cost, whether it is due to monopoly, collusion, or regulation. Unfortunately, estimating welfare losses is an inherently precarious task because of data limitations. Thus one must interpret these estimates with considerable caution. A final point is that even if we knew for certain that monopoly welfare losses

were, say, only 1 percent of GNP, this would not be grounds for abolishing antitrust, because the 1 percent figure is for an economy with active antitrust enforcement. Perhaps if antitrust did not exist, the monopoly losses would be much larger. Innovation: Monopoly versus Competition Taking stock, it is clear from the preceding discussion that competition is preferred to monopoly. Monopoly results in some units not being produced and consumed for which there were “gains from trade,” and that lowers total surplus. In addition, if society values consumers more than shareholders, there is an additional source of harm coming from the higher price that consumers pay for the units that are still produced. Monopoly is looking pretty miserable from a social welfare perspective. We have not, however, considered the entire welfare ledger associated with a comparison of monopoly and competition. Thus far, the analysis has been static in the sense that it takes as given the current set of products and technologies. But we know that the most substantive gains in welfare over decades and centuries is in the expansion of available products and the reduction in cost from producing existing products, all of which come from innovation. In other words, competition may outperform monopoly for a given body of knowledge, but how do those market structures compare from a dynamic perspective, when knowledge is allowed to expand as a result of firms’ efforts? This is a far more challenging question to analyze, and an extensive body of research in economics has not yet produced a definitive answer.13 However, it has managed to identify some relevant forces pertaining to how the incentives and ability to innovate vary between monopoly and competition. Our focus here will almost exclusively be on incentives. To gain some insight into this complex economic issue, we will address the following narrow question.14 Assume that an industry can be organized either competitively or as a monopoly and, in both cases, a single inventor is considering investing in research and development (R&D) to achieve a cost-reducing innovation.15 The inventor is not concerned about competition from other inventors, and complete protection from imitation is (initially) assumed. In the competitive case, the inventor has an infinitely-lived patent, and in the monopoly case, the inventor is the monopolist and further entry is not allowed. Minor innovation case Figure 3.6 shows both the competitive industry and the monopoly for the case of a minor invention.16 That is, the original equilibria for both cases are based on a constant cost of production C0 and the demand DD′. Hence, the competitive industry equilibrium before the invention is at price P0 and quantity Q0, where demand and the constant cost supply curve intersect. The original monopoly equilibrium is determined by the intersection of marginal revenue, DJ, and marginal cost (constant at C0), or at the quantity M0, yielding price Pm.

Figure 3.6 Incentives to Innovate in Monopoly and Competition: Minor Innovation Case

First, we focus on the incentive to the inventor in the competitive industry. The invention will lower cost to C1, and the inventor is assumed to produce in the market at that cost. A competitive fringe using the old technology produces at cost C0. The inventor faces a kinked demand curve, P0AD′. If price was set above C0 then the other firms would supply all demand. Thus the maximum price for output levels up to Q0 would be P0 (or just a bit below), and above that output level the market demand curve would be the relevant demand. The marginal revenue curve would be P0AHJ, with a vertical discontinuity at the kink. Hence, the inventor would choose price P0 and output Q0, because marginal revenue intersects marginal cost (C1) at this output. We pause for a moment to distinguish the difference between a minor invention and a major invention. Notice in figure 3.6 that the marginal cost C1 lies within the “gap” AH of the marginal revenue curve. This ensures that the quantity Q0 remains unchanged after the invention. However, if the marginal cost C1 should be so low as to intersect the HJ segment of the marginal revenue curve, then the inventor’s quantity would be larger than Q0, leading to a price decrease as a result of the invention. Large cost reductions inducing price reductions are termed major inventions. For minor inventions, market price is unaffected. (A major invention can also be defined as one that makes the inventor’s monopoly price below the original marginal cost.) Given the equilibrium as just explained, the inventor’s profit is therefore the rectangle equal to the cost

saving per unit (C0 − C1) multiplied by the output level Q0. Or, the incentive in the competitive case is the sum of the two shaded areas I and II in figure 3.6. We now consider the case in which the industry is organized initially as a monopoly. Originally, the monopoly would charge a price of Pm, as noted earlier. At this price the monopoly profit is the triangular area DEP0. It equals the area under marginal revenue (which is total revenue) less the area under marginal cost (which is total cost). The monopolist’s incentive to invest in a cost-reducing invention is simply its increment to profit due to the lower cost process. One can show that this is the trapezoid P0EFG in figure 3.6, or area I. The reason is that profit with the lower cost process increases from DEP0 to DFG, and the difference is P0EFG. The key conclusion is that the incentive to invent in the competitive industry case is the sum of areas I and II, while in the monopoly case it is only area I. Hence, for the minor invention case, the incentive is greater if the industry is organized competitively. Before examining the same question in the case of a major invention, it is useful to consider the “firstbest” social benefit of the cost-reducing invention in figure 3.6. If the lower cost process were made available to firms at the efficient price of zero, the competitive equilibrium would change to a price equal to C1 and a quantity of Q1. The social benefit would then be equal to the sum of areas I, II, and III, the increase in consumer surplus due to the price decrease. The ranking is therefore that the social benefit (I + II + III) exceeds the incentive in competition (I + II), which in turn exceeds the incentive in monopoly (I). Major innovation case Figure 3.7 shows the case of a major cost-reducing invention. As we explained earlier, a major invention leads to a substantive price decrease, unlike the minor invention case (where the inventor prices just below the cost of the competitive fringe). Before the invention, the competitive and monopoly equilibria are as described before. Competition has price P0 and quantity Q0, while monopoly has price Pm and quantity M0. After invention, both the inventor monopolist in the competitive industry and the monopolist choose price Pm′ and quantity M1. The inventor monopolist in the competitive industry therefore obtains a profit incentive equal to the large shaded rectangle, Pm′SVW.
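The two cases can be put side by side numerically. The sketch below assumes the inverse demand P = 100 − Q and a pre-invention unit cost of 60 (illustrative values, not those of figures 3.6 and 3.7) and computes the inventor's reward when the industry is competitive and when the inventor is the incumbent monopolist. In both cases the competitive inventor's reward is larger, because the monopolist must net out the profit it already earns.

```python
# Illustrative comparison of innovation incentives under competition and monopoly.
# Assumed inverse demand P = 100 - Q and pre-invention unit cost C0 = 60; the
# numbers are hypothetical, not taken from figures 3.6 or 3.7.

a, b = 100.0, 1.0
C0 = 60.0

def monopoly_profit(cost):
    q = (a - cost) / (2 * b)              # choose output where MR = MC
    return (a - b * q - cost) * q

def incentives(C1):
    """Return (incentive in a competitive industry, incentive for a monopolist)
    for an invention that lowers unit cost from C0 to C1."""
    unconstrained_price = (a + C1) / 2.0
    if unconstrained_price >= C0:
        # Minor invention: the inventor prices at (just below) C0 and sells Q0.
        competitive = (C0 - C1) * (a - C0) / b
    else:
        # Major invention: the inventor charges its own monopoly price.
        competitive = monopoly_profit(C1)
    monopoly = monopoly_profit(C1) - monopoly_profit(C0)   # increment to profit
    return competitive, monopoly

print(incentives(55.0))   # minor invention: (200.0, 106.25)
print(incentives(10.0))   # major invention: (2025.0, 1625.0)
```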

Figure 3.7 Incentives to Innovate in Monopoly and Competition: Major Innovation Case

To find the profit increase for the monopolist, and therefore its incentive, simply subtract the preinvention profit, which equals the small shaded rectangle PmRTP0, from the large shaded rectangle. The comparison of incentives is therefore clear. It is again the case that the incentive is greater in the competitive industry than in the monopoly—the monopolist has a pre-incentive profit that must be subtracted from the large rectangle, whereas the inventor in the competitive industry does not need to subtract anything. Nobel Laureate Jean Tirole has referred to this weaker incentive under monopoly as the replacement effect: “The monopolist gains less from innovating than does a competitive firm because the monopolist ‘replaces itself’ when it innovates whereas the competitive firm becomes a monopoly.”17 Appropriability and ability Having taken a more dynamic perspective, we find that competition still comes out ahead of monopoly. But two factors related to innovation could work in favor of monopoly. First is the appropriability of the rewards from an invention. The preceding analysis presumed that the inventor was protected from imitation, such as through a patent or entry barriers. If patent protection is not perfect then it is possible that a monopoly may be better placed to reap the rewards, while the entry of imitators under competition may result in the dissipation of those rewards. While a lower price and higher quantity from imitative entry increases welfare given that the innovation was made, an inventor anticipating such imitation will have

much weaker incentives to invest in R&D. Returning to figure 3.6, if an inventor anticipated that the innovation resulting in a reduction of cost to C1 would be imitated and result in a new competitive price of C1, then it would never innovate to begin with, as there would be no pot of gold at the end of the R&D rainbow. For a firm to invest in innovating, it must reap a sufficient share of the gains created; they cannot all go to consumers and other firms. In sum, a monopoly that can more effectively appropriate the gains from its invention may be more innovative than a competitive industry. A second issue pertains to the ability to innovate. Some economists have argued that a monopoly with substantial financial resources could be more capable of making innovations than competitive firms that must draw on the capital market for resources. Examples can be found that are consistent and inconsistent with this claim. A notable one in its favor is AT&T, which, as a regulated monopolist, produced many major innovations through its Bell Labs, including the transistor and the laser. On the other side, "disruptive" technologies often come from nondominant firms outside the industry, such as occurred with the personal computer and online retailing.18 In conclusion, it remains an open question whether monopoly or competition is more effective at innovation. Two legendary economists came down on opposite sides of the issue. Nobel Laureate Kenneth Arrow argued that "product market competition spurs innovation," while Joseph Schumpeter took the view that "the prospect of market power and large scale spurs innovation."19 On that equivocal note, we leave the matter to a future generation of economists to resolve.

Industrial Organization

Now that we have examined the market scenarios of monopoly and perfect competition, it's time to go where the action in real economies lies, which is between those two extremes. Most industries are characterized as having multiple firms, often of drastically varying sizes, with some or all having the ability to raise price above their competitors' prices and still have positive demand. Such a situation is known as imperfect competition, and modeling and understanding those industries is the primary task of industrial organization. The field of industrial organization began with research by economists at Harvard University in the 1930s and 1940s. They developed a general approach to the economic analysis of markets that is based on three key concepts: (1) structure, (2) conduct (or behavior), and (3) performance. They hypothesized a causal relationship between these three concepts: Structure (number of sellers, ease of entry, etc.) determines firm conduct (pricing, advertising, etc.), which then determines market performance (efficiency, innovation). For example, more firms (structure) result in more intense price competition (conduct), which yields higher consumer surplus and lower industry profit (performance) because of lower prices. Known as the structure-conduct-performance paradigm (SCPP), it is depicted in figure 3.8.

Figure 3.8 The Structure-Conduct-Performance Paradigm of Industrial Organization

During the 1950s and 1960s, empirical work based on this framework sought to identify general relationships that would hold for all industries, such as how much lower price would be from having one more firm enter the market. Experience has shown that such a research program was misguided. Industrial organization economists now recognize that each industry is too idiosyncratic for us to think that such a general stable relationship would be applicable to a wide class of industries. It was also found that the causal story told above is too simplistic; more causal relationships are running around than were originally described. In figure 3.8, the dashed arrow between the conduct and structure blocks indicates that conduct can sometimes “feed back” to change structure. The behavior of existing firms in a market can affect future market structure in a variety of ways. For example, through investing in R&D, a firm can lower its cost to a point where it can profitably price its competitors out of the market. Alternatively, firms can influence market structure by affecting the decisions of potential entrants to enter through the strategic manipulation of price or capital. Perhaps the bluntest way in which conduct affects structure is through merger. Although the SCPP is no longer the foundation for theory and empirical work in industrial organization, the categories of structure, conduct, and performance remain useful in organizing knowledge about an industry. These three elements will be examined in detail in later chapters; a short description of them is given here. Defining the Market and Market Power The starting point for the SCPP and for most analyses in the field of industrial organization is the market. While the market seems like a rather natural and obvious concept, it is actually rather tricky when one seeks to lay down a general definition and to define it in a particular instance. For example, the outcome of one famous antitrust case hinged on whether the relevant market was cellophane or whether the correct market was “flexible wrapping materials” (that is, plastic wrap, aluminum foil, brown paper, cellophane, and the like).20 Most economists agree that any market definition should take into account substitution possibilities in both consumption and production. George Stigler has expressed this point as follows: An industry should embrace the maximum geographical area and the maximum variety of productive activities in which there is a strong long-run substitution. If buyers can shift on a large scale from product or area B to A, then the two should be combined. If producers can shift on a large scale from B to A, again they should be combined. Economists usually state this

in an alternative form: All products or enterprises with large long-run cross-elasticities of either supply or demand should be combined into a single industry.21

A further difficulty on the supply side is the distinction between substitution and new entry. That is, where does the market stop and potential entry begin? Consider the airline industry. Should the market be defined as, say, the New York–Los Angeles market or the entire United States? If the market is the New York–Los Angeles route, concentration would be relatively high, given this tight market definition. However, entry would be easy, as airlines serving, say, Miami–San Francisco could easily switch routes. Alternatively, if the market is defined as the entire United States, concentration would be low but might include some airlines not well suited for the New York–Los Angeles route, for instance with aircraft designed for short hops and a low volume of traffic. Of course, some definition must be followed. No harm is done if it is recognized that what is important is the competitive constraint on potential monopoly pricing. That is, one would get the same answer when analyzing pricing on the New York–Los Angeles route by viewing it either as a highly concentrated market with easy entry or as part of the unconcentrated United States market. We return to this issue in earnest in chapter 6, when we discuss market definition in the context of evaluating a merger between two competitors. Having given a sense of what it means for a collection of firms to make up a market, what does it mean for a firm to have market power? Market power is the ability of a firm to profitably charge a price above the competitive level. As the competitive price equals marginal cost, one common method of measuring the extent of market power is the price-cost margin, which is (P − MC)/P. Under perfect competition, firms lack market power, because each faces an infinitely elastic demand curve. If a firm prices at the market price (which, in equilibrium, is marginal cost) then it has as much demand as it cares to supply, while if it prices above that level, it will have no demand, because consumers will choose to buy from its lower-priced rivals offering the same good. Under monopoly without the prospect of entry, the monopolist has significant market power. It can price above its cost and consumers will still buy, because there are no alternative sources of supply. (Though that does presume some consumers’ willingness-to-pay exceeds marginal cost. Otherwise, the market would not be active.) When a monopolist faces a credible threat of entry or there are multiple firms competing and the intensity of competition falls short of perfect competition, firms typically still have market power though to a lesser extent than under pure monopoly. Much of antitrust is concerned with exactly that scenario, with the focus on understanding what acts are anticompetitive in the sense that they allow firms to price higher (that is, exercise more market power) without mitigating benefits for consumers. In principle, one does not need to define the market to assess whether a firm has market power, and we will see how that is done in Chapter 6. Nevertheless, market definition is conventional and at present, courts expect it. Equipped with some sense of what a market is and what market power is—both crucial concepts in industrial organization—let us turn to defining the three components of the SCPP. Structure Theories of oligopoly typically assume sellers of equal size and so specify only the number of sellers. Actual industries, however, contain sellers of unequal size, often vastly unequal size. 
The concept of concentration is intended to arrive at a single number that takes account not only of the number of firms but also of how sales are distributed among those firms. By way of example, table 3.1 lists the number of companies that brewed beer in the United States from

1947 to 2005, as well as the number of plants. If one simply counts the number of firms, the industry looks highly competitive. Even after the exit of a large number of firms (many of which were acquired by or merged with other brewing companies), the number of firms has always exceeded twenty. Also reported, however, is the sum of the market shares of the five largest firms, known as the five-firm concentration ratio. That measure tells quite a different story. Since the 1980s, the five largest firms have controlled more than 80 percent of sales. By that measure, the beer market is highly concentrated.

Table 3.1
U.S. Brewing Companies, 1947–2005

Year        Traditional Breweries   Specialty Breweries   Five-Firm Concentration Ratio (percent)
1947                404                       0                      19.0
1954                262                       0                      24.9
1964–1965           126                       0                      39.0
1974–1975            52                       1                      64.0
1984–1985            34                       8                      83.9
1994–1995            29                     977                      87.3
2004–2005            21                   1,346                      85.2

Source: Kenneth G. Elzinga, "Beer," in James Brock, ed., The Structure of American Industry, 12th ed. (Upper Saddle River, NJ: Pearson/Prentice Hall, 2009), chapter 5.
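A concentration ratio is computed directly from market shares. The sketch below uses hypothetical shares (not the actual shares behind table 3.1) to show how a market with more than twenty firms can still have a five-firm concentration ratio near 85 percent.

```python
# Five-firm concentration ratio from (hypothetical) market shares, in percent.
# A market can have many firms yet still be highly concentrated by this measure.

shares = [30.0, 25.0, 15.0, 10.0, 5.0] + [0.75] * 20   # 25 firms, shares sum to 100

cr5 = sum(sorted(shares, reverse=True)[:5])
print(len(shares), cr5)      # 25 firms, CR5 = 85.0 percent
```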

Perhaps, then, the industry is not as competitive as we originally thought. But just because an industry is concentrated does not mean it is not competitive. If existing firms set price too high, new firms can come in with a lower price and take away much of their demand—or can they? This brings us to two other elements of structure, entry conditions and product differentiation. Entry conditions describe the ease with which a new firm can enter an industry. Ease depends on the cost of entry but also on the extent to which incumbent firms have an advantage, not because their product is better or their cost is lower but because they were there first. If entry is difficult, then a high price set by existing firms may not be driven down by the arrival of new firms. An important related concept is that of an entry barrier, which, for the present discussion, can be thought of as something that makes entry more costly or more difficult. The significance of entry barriers is that they may permit existing firms to charge prices above the competitive level without attracting entry. A clear example is a patent on a product. The patent holder on a drug for which there are no available substitutes can charge a monopoly price for the legal life of the patent (or at least until some other firm develops a better drug). Strong brand loyalties created through intensive advertising have been cited as an entry barrier to new firms. As we will find in chapter 5, the concept of entry barriers is controversial, but it persistently arises in many antitrust cases. For the beer industry, recent decades have witnessed entry by craft breweries that make small quantities of high-quality beer and sell it at high prices. As shown in table 3.1, there has been massive entry by these small producers. It is clear that entry is not difficult in that segment of the market. The brewing industry exemplifies the complexity of describing structure because of both increasing concentration in the lower-priced mass market and significant entry and expansion of product variety in the higher-priced specialty beer market. While entry may be easy in the latter market, entry could still be difficult in the mass market segment, where scale economies and heavy marketing may be critical. Another source of market power is product differentiation. If a firm has a unique product, consumers may be willing to buy it even if the price well exceeds the prices of competitors. To give a flavor of product differentiation, let us return to Homer Simpson’s favorite beverage. Two identified dimensions of beer are

its bitterness and how malty it is. Figure 3.9 depicts how various classes of products fall in this product space. Different consumers weigh these dimensions differently; it is not just price that explains why some buy Sam Adams and others buy Coors. Of course, other product dimensions matter as well, including calories, carbohydrates, the shape of the bottle, and the more ephemeral properties that marketers try to influence.

Figure 3.9 Product Differentiation in the Beer Market Source: Figure 2.5 from Douglas F. Greer, “Beer: Causes of Structural Change,” in Larry L. Deutsch, ed., Industry Studies, 2nd ed. (Armonk, NY: M. E. Sharpe, 1998), pp. 28–64.

An important part of structure is not only the existing degree to which products are differentiated but also the possibilities for further differentiation. It is not difficult for firms to distinguish automobiles, breakfast cereals, clothes, and the like. In contrast, products like sugar, vitamins, and natural gas are intrinsically homogeneous. The technological opportunities to engage in product differentiation are then also relevant. Conduct Conduct refers to the decisions made by firms regarding such variables as price, quantity, advertising, R&D, and capacity. In that product differentiation is partly influenced by firm choice, it appears as an element of conduct as well as of structure. As will become clear in chapter 4, economists think of two general states of conduct: competition and collusion. Collusion refers to forms of coordination among firms; specifically,

firms coordinate by jointly raising price, which serves to deliver higher profit at a cost to consumers and social welfare. Two types of collusion exist. Explicit collusion entails overt communication among firms. With tacit collusion, firms are able to achieve some mutual understanding with more subtle forms of communication. Even when firms are not colluding, industries can differ tremendously in the extent to which price exceeds cost. This difference is due to the elements of structure, especially the number of firms. Structure also partially influences whether firms are colluding or competing. Industries differ not only in the intensity of competition but also in the instruments of competition. Do firms largely compete on price, product design, service, or some other variable? While price is clearly a crucial competitive instrument in the automobile industry, manufacturers also compete aggressively in the design of their cars and complement it with extensive advertising campaigns. Historically, the tobacco industry was described as lacking competition in price. Firms charged the same price for cigarettes, and price changes were done in unison, regardless of whether one brand was much more popular. Though price competition largely appeared absent, competition was intense in advertising and brands; the tobacco companies were regularly introducing new brands and heavily advertising existing ones. The competitive landscape changed, however, on April 2, 1993, which became known as Marlboro Friday. On that day, Philip Morris reintroduced price competition by cutting the price of Marlboro by 20 percent. The intensification of price competition was in response to the growing sales of low-priced generic cigarettes. The character of competition can then vary across time in a market as well as across markets. Performance The performance component contains two elements: efficiency and technical progress. Efficiency concerns the allocation of resources for a given state of technology and is commonly measured by total surplus. For example, the monopolist that sets price above marginal cost causes a loss in surplus compared to competition. Of course, one could list other desirable attributes of economic performance beyond quantity, including product quality, product variety, and level of service. And then there are broader elements of performance, such as wages, safety of the work environment, job stability, and income redistribution. Generally, we will focus on performance as it pertains to the price and quality of goods and services and also product variety. Technical progress (or innovation) is an economics term for what might better be called dynamic efficiency. It is the efficiency with which an industry develops new and better production methods and products. For example, one of the more striking examples of technical progress in recent decades is the performance of microprocessors. Figure 3.10 reports the spectacular growth in the speed of microprocessors. Compared to the frontier technology in 1978, speed increased 100-fold by the mid-1990s (when Intel introduced its Pentium chip), 1,000-fold by 2000, and over 10,000-fold by 2010. This innovation has allowed computers, tablets, and mobile phones to deliver a much higher array and level of service to consumers than had been previously possible.
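The growth figures just cited imply very high sustained annual rates of improvement. A rough calculation, assuming the milestones fall around 1995, 2000, and 2010 measured from the 1978 baseline, is sketched below.

```python
# Rough implied annual growth rates for the processor-speed figures just cited,
# relative to the 1978 baseline. The endpoint years are approximations.

milestones = [(1995, 100.0), (2000, 1000.0), (2010, 10000.0)]   # (year, fold increase)

for year, fold in milestones:
    years = year - 1978
    annual_growth = fold ** (1.0 / years) - 1.0
    print(year, round(100 * annual_growth, 1))   # roughly 31%, 37%, 33% per year
```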

Figure 3.10 Growth in Processor Performance Compared to the VAX 11/780 Benchmark, 1978–2010 Source: John Leroy Hennessy and David A. Patterson, Computer Architecture: A Quantitative Approach, 5th ed. (Burlington, MA: Morgan Kaufman, 2011), figure 1.1, p. 3.
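To put the multiples just cited in per-year terms, the short sketch below computes the implied compound annual growth rates. The milestone years (with the mid-1990s taken as 1995) are assumptions used purely for illustration.

```python
# Implied compound annual growth in processor performance, using the
# multiples cited in the text (speedups relative to the 1978 frontier).
# The milestone years are assumptions made for illustration only.
milestones = {1995: 100, 2000: 1_000, 2010: 10_000}  # year -> speedup vs. 1978

for year, speedup in milestones.items():
    years_elapsed = year - 1978
    annual_growth = speedup ** (1 / years_elapsed) - 1
    print(f"1978-{year}: {speedup:>6,}x  ->  about {annual_growth:.0%} per year")

# Output: roughly 31-37 percent per year, i.e., performance doubling
# about every two to three years over this period.
```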

Government

In addition to the three components just discussed, figure 3.8 shows a government policy block that contains the two major categories of policy examined in this book: antitrust and regulation. The arrows show that antitrust and regulation can be viewed as influencing the structure and conduct of an industry so as to affect an industry's economic performance. An antitrust decision might lead to the dissolution of a monopoly into independent sellers. This restructuring could directly affect concentration. A 1911 antitrust case resulted in the creation of thirty-three companies by splitting up John D. Rockefeller's infamous monopoly (or trust) of the oil-refining industry. Moving to the late 1990s, the initial court ruling in the monopolization case against Microsoft called for the company to be broken up into an operating system company (composed of Windows) and an applications company (which would include, for example, Microsoft Office). However, due to further judicial rulings and a change in the presidential administration, that remedy was not pursued. Such structural remedies are draconian and rare. Far more common is for an antitrust decision to restrict conduct. A prohibition on certain contracts that Microsoft could enter into with computer manufacturers was a conduct remedy in the first case against the company in the early 1990s. The dashed arrow in figure 3.8 indicates a feedback from conduct to government policy. Business firms often maintain public affairs departments or lobbyists whose purpose is to try to change government policy to favor them. The Robinson-Patman Act of 1936 is generally viewed as an economically harmful law enacted by Congress under strong political pressure from hundreds of small businesses. The act was designed to protect these small businesses from the operations of large discount chains that emerged during the 1930s.

Antitrust

[Nobel laureate] Ronald [Coase] said he had gotten tired of antitrust because when the prices went up the judges said it was monopoly, when the prices went down they said it was predatory pricing, and when they stayed the same they said it was tacit collusion.22

While Professor Coase was presumably speaking in satiric hyperbole, there is some truth in what he says. The challenge of antitrust is to distinguish anticompetitive behavior, such as collusion and monopolization, from competition and the exercise of fairly obtained monopoly power (for example, due to better products). After reviewing the general intent of antitrust laws, we will discuss those laws in the United States and, more broadly, in the global community. Antitrust law and the antitrust authority are commonly referred to as "competition law" and the "competition authority," respectively, in other parts of the world. We will use both sets of terms. (Interestingly, Canada, which had the first competition law, originally referred to it as "anti-combine," where "combine" was another term for a trust.)

Purpose and Design of Antitrust Laws

The purpose of antitrust (or competition) law is to maintain a competitive marketplace by prohibiting certain practices that allow a firm or firms to create, enhance, or extend market power. To begin, an important distinction needs to be drawn between protecting competition and protecting competitors. In the United States in the 1950s and 1960s, a goal of antitrust law was interpreted as protecting small businesses, often to the detriment of consumers. This perspective is attributed to the Supreme Court of Earl Warren (1953–1969), which defined "competitive" as when the market has many small firms that can effectively compete with large firms. This meant, for example, that a merger that lowered cost and benefited consumers by lowering price could be prohibited, because it harmed small firms by causing them to be at a cost disadvantage. This view of the role of U.S. antitrust radically changed during the 1970s and 1980s. The Chicago School revolution shifted the focus to protecting consumers, not small businesses. More competition was supposed to mean lower prices, higher output, and more innovation. Out of this new perspective emerged the standard of consumer welfare: If consumers are made worse off, then the practice is to be prohibited. Otherwise, it is to be allowed. This position is expressed by Robert Bork in his landmark book The Antitrust Paradox:

Whether one looks at the texts of the antitrust statutes, the legislative intent behind them, or the requirements of proper judicial behavior, the case is overwhelming for judicial adherence to the single goal of consumer welfare in the interpretation of the antitrust laws. Only that goal is consistent with congressional intent, and equally important, only that goal permits courts to behave responsibly and to achieve the virtues appropriate to law.23

While the consumer welfare standard is currently used in the United States, it is not universal. In Canada, for example, the standard is total welfare or surplus. The Competition Act of Canada permits a merger when the gains in efficiency “will be greater than, and will offset, the effects of any prevention or lessening of competition that will result.” Exemplifying this standard is the merger between Superior Propane and ICG Propane in 2002. As depicted in figure 3.11, the court estimated the reduction in average cost from the merger would produce an annual cost savings of $29.2 million.24 However, price was predicted to rise, which would reduce output and cause a deadweight loss of $3.0 million per year. Since total surplus would be then higher by $26.2 million, on those grounds, the merger was approved by the Competition Tribunal. Note that the price increase would cause a projected transfer from consumers to producers equal to $40.5 million per year. If instead a consumer welfare standard were used, the merger would have been disallowed, as consumers are worse off.
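The arithmetic behind the two standards can be summarized in a few lines. The sketch below simply restates the figures reported above, assuming, as in figure 3.11, that the pre-merger price equals average cost so that the deadweight loss falls entirely on consumers; it is illustrative only.

```python
# Superior Propane / ICG Propane merger: figures cited in the text,
# in millions of dollars per year.
cost_savings = 29.2      # efficiency gain from the lower average cost
deadweight_loss = 3.0    # surplus lost because output falls at the higher price
transfer = 40.5          # surplus shifted from consumers to producers

change_in_total_surplus = cost_savings - deadweight_loss        # +26.2
change_in_consumer_surplus = -(deadweight_loss + transfer)      # -43.5

print(f"Total surplus standard:    {change_in_total_surplus:+.1f} -> merger allowed")
print(f"Consumer welfare standard: {change_in_consumer_surplus:+.1f} -> merger blocked")
```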

Figure 3.11 Welfare Effects of the Merger of Superior Propane and ICG Propane

As a closing note on this case, the Commissioner of Competition sought to augment the efficiency standard with equity considerations. Demand for propane is fairly inelastic, which implies that the merger would cause a large price increase and therefore a large transfer of surplus from consumers to producers. Furthermore, propane is used as a source of energy for rural customers who are poorer than average. (But propane is also used for rural summer homes, which are owned by customers who are richer than average.) The Competition Tribunal chose not to consider distributional issues in its decision, because the law placed efficiency above equity, and equity is difficult to measure. In some countries in the European Union, antitrust decisions have been influenced by the broader public interest concerning such issues as environmental protection, public health, animal well-being, and sustainability.25 In a decision in 1999 involving the European Committee of Domestic Equipment Manufacturers, the European Commission (EC) allowed the manufacturers of washing machines to coordinate in discontinuing production of their least energy-efficient models on the grounds that the energy savings and environmental benefits exceeded the loss of surplus from what would typically be unlawful collusion. While this public interest rationale has only rarely come into play in the EU competition policy arena, it is gaining traction, as exemplified by some recent cases, including one in the Netherlands where North Sea shrimp fisheries sought to justify collusion on the grounds of sustainability. As a final statement as to the extent of the factors that could, in principle, be taken into account when making an antitrust decision, consider South Africa. The Competition Act of 1998 states that its purpose is to promote and maintain competition (1) to promote the efficiency, adaptability, and development of the economy; (2) to provide consumers with competitive prices and product choices; (3) to promote employment and advance the social and economic welfare of South Africans; (4) to expand opportunities for

South African participation in world markets and recognize the role of foreign competition in the Republic; (5) to ensure that small and medium-sized enterprises have an equitable opportunity to participate in the economy; and (6) to promote a greater spread of ownership, in particular to increase the ownership stakes of historically disadvantaged persons.

When enforcing antitrust laws, competition authorities and the courts should not only consider the standard—whether it is consumer surplus, total surplus, or something else—but also the cost of applying the standard. One cost is administrative: It takes resources to properly evaluate some action and determine whether it is in violation of the law. This could involve the cost of discovering and prosecuting a cartel, forecasting the impact of a merger on price, or determining whether the practice of manufacturers placing a floor on retailers' prices is harmful to consumers. A second cost is that associated with making errors in the enforcement of antitrust laws. Using the well-accepted jargon from statistics, type I error is a "false positive"; that is, a competition authority or court wrongly concludes that a firm or firms have violated antitrust law. The primary concern with such mistakes is that they chill competition, because firms may avoid legitimate activities out of concern about being wrongly convicted. A type II error is a "false negative," which results when a firm or firms have committed an illegal anticompetitive act but are able to continue the harm (or are not punished for having inflicted harm). The less likely a practice is to be anticompetitive, the more that type I error becomes a concern. For example, suppose a practice creates harm only 1 percent of the time and there is a 10 percent error in evaluating the practice. Out of 1,000 instances of the practice, only 10 will be anticompetitive and, of those 10 cases, 9 of them will be correctly classified as such (with one case of type II error). However, out of the 990 times it is legitimate, 99 of them (that is, 10 percent) will be wrongfully convicted. Hence, there would be 108 convictions, but only 9 of them were actually harmful! The proper way to avoid so many false positives is to set a very high standard for guilt for practices that are rarely harmful. For example, if the bar is raised so that type I error is reduced to 5 percent and type II error is raised to 20 percent, false positives fall from 99 to about 50 and false negatives rise from 1 to 2, which seems like a desirable tradeoff. In setting the bar for concluding that firms have violated antitrust law, it is useful to keep in mind that choosing antitrust over the heavier hand of regulation is an admission that market forces are generally desirable.

Antitrust as an enterprise is dedicated to the proposition that markets work tolerably well as a general matter, so intervention must be justified. If the judge does not hear a fairly robust theory explaining why certain behavior is anticompetitive, and some relatively unambiguous facts supporting application of that theory, then intervention is not justified.26
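The expected error counts in this example follow directly from the base rate of harm and the two error probabilities. The sketch below reproduces that arithmetic using the same numbers as the text; it is illustrative only.

```python
# Expected enforcement errors for a practice that is rarely anticompetitive.
# Numbers follow the example in the text and are illustrative only.
cases = 1_000
base_rate = 0.01   # share of instances that are truly anticompetitive

def error_counts(type1_rate, type2_rate):
    """Return (false positives, false negatives, total convictions)."""
    harmful = cases * base_rate          # 10 truly harmful instances
    legitimate = cases - harmful         # 990 legitimate instances
    false_negatives = harmful * type2_rate
    true_positives = harmful - false_negatives
    false_positives = legitimate * type1_rate
    return false_positives, false_negatives, false_positives + true_positives

# 10 percent error in both directions: ~99 wrongful convictions vs. 9 correct ones.
print(error_counts(type1_rate=0.10, type2_rate=0.10))   # (99.0, 1.0, 108.0)

# Raising the bar for guilt: type I error falls to 5 percent, type II rises to
# 20 percent. False positives drop to ~50 while false negatives rise to 2.
print(error_counts(type1_rate=0.05, type2_rate=0.20))    # (49.5, 2.0, 57.5)
```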

U.S. Federal Antitrust Law and Enforcement

Antitrust laws

The Sherman Act of 1890 was the political reaction to the widespread growth of large-scale business combinations, or trusts, formed in the 1880s. Weakness in demand due to a severe business depression, along with aggressive pricing practices, resulted in persistent losses in certain industries. To avoid what was viewed as cutthroat competition, trusts were formed in many industries, including petroleum, meatpacking, sugar, lead, coal, tobacco, and gunpowder. Farmers' organizations, labor unions, and small businesses united in urging passage of a law to protect them from the economic power of these trusts. The U.S. Congress obliged with an impressive consensus, as the Sherman Act passed both houses with only a lone dissenting vote in the Senate. The Sherman Act has two main sections. Section 1 prohibits contracts, combinations, and conspiracies in

restraint of trade. Penalties for violators can be imprisonment, a fine, or both. Section 2 prohibits monopolization, attempts to monopolize, and combinations or conspiracies to monopolize “any part of the trade or commerce among the several states, or with foreign nations.” The classic target under Section 1 is price-fixing arrangements, while Section 2 is applied to market dominance. We will examine price fixing in chapter 4 and monopolization in chapters 7 and 8. As a result of dissatisfaction with the Sherman Act during the first few decades, two additional statutes were enacted in 1914. The Clayton Act was designed to define anticompetitive acts more clearly. It outlawed price discrimination, various contractual arrangements between firms at different levels of the supply chain (also known as vertical restraints), interlocking directorates, and mergers between competitors. However, these practices were illegal only where they would “substantially lessen competition or tend to create a monopoly.” Section 7 of the Clayton Act, which dealt with mergers, was largely ineffective because of a legal loophole. This problem was remedied by the Celler-Kefauver Act of 1950, which amended Section 7. The law concerning mergers and vertical restraints will be discussed in detail in chapters 6 and 7. Also, Section 2 of the Clayton Act, having to do with price discrimination, was heavily amended in 1936 by the Robinson-Patman Act, which will be covered in chapter 8. The second statute passed in 1914 was the Federal Trade Commission (FTC) Act. The objective of this legislation was to create a special agency that could perform both investigatory and adjudicative functions. Prior to this time, the Antitrust Division of the Department of Justice (DOJ) was the sole enforcement agency in antitrust matters. The FTC Act also contained a section that outlawed “unfair methods of competition.” These three laws—the Sherman Act of 1890, and the Clayton and FTC Acts of 1914—together form the substantive framework for U.S. antitrust policy. (Key sections of these three statutes are reproduced in the appendix at the end of this chapter.) As already indicated in our brief description, the language of the acts is general, and interpretation has been left to the courts. Hence, to really understand what is legal and what is illegal in specific situations, one must be familiar with the important court decisions (and the establishment of precedents) and the specific rules of law that have been developed in these decisions. In many situations there remains considerable uncertainty about what a future court might hold to be legal or illegal. This is true, for example, with regard to monopolization. If Microsoft has 90 percent of the general-purpose computer market, is it guilty of monopolization? If Google has 70 percent of all searches in the search engine market, is it illegally extending its market power when it ventures into online shopping and favors its own sites over those of rivals? The answers are not immediately clear from the law, and it is up to competition authorities and courts to interpret that law. A company is subject to antitrust law unless it is given an exemption. There are a few industries and business activities that are exempt, including labor unions, export cartels, agricultural cooperatives, and some joint R&D ventures. Labor unions were exempted from antitrust in the Clayton Act, though some limits are placed on them. 
The reasoning for the exemption was to permit labor to match the bargaining power of employers. The Capper-Volstead Act of 1922 authorized agricultural cooperatives of farmers, ranchers, and dairy farmers to market their commodities collectively. As with the labor union exemption, the rationale was to permit the cooperatives to offset bargaining power on the demand side of the market. The Webb-Pomerene Act of 1918 exempted export associations. This means that U.S. firms can form and join an association to fix prices on their foreign sales and allocate foreign markets. These practices would clearly violate the Sherman Act if done domestically. Of course, if the standard is domestic consumer welfare, then such an exemption is easy to rationalize, because it harms only foreign consumers while

benefiting domestic firms. Professional sports teams are also treated somewhat leniently under antitrust. Baseball was actually granted immunity by the U.S. Supreme Court in a 1922 decision. One reason for the lenient treatment is the view that a sports league is not simply a collection of independent firms, such as, say, the steel industry. A sports league must cooperate in various ways to produce its product: competitive sports contests. An illustration of the permitted leniency is the practice of drafting new players. The league does not permit its teams to bid against one another for new players graduating from college or high school. Rather, the players are allocated by league rules to particular teams. The purported rationale is that this practice is necessary to promote "competitive balance." Also, owners are allowed to jointly restrict entry by controlling the number of franchises. Finally, some joint R&D ventures are exempt from antitrust. The argument is that such ventures can enhance welfare when competition may undersupply innovation. However, members of a research joint venture are limited to coordinating their behavior with regard to R&D, and it would be unlawful for them to coordinate on other variables, such as price.

Categories of anticompetitive offenses

A claim of anticompetitive conduct could be evaluated using the per se rule or the rule of reason. When a practice can have no beneficial effects and only harmful effects, the "inherent nature" of the practice is injuriously restraining trade. Price fixing by a cartel seems to fit this description and is illegal per se. This means that the behavior need only be proved to have existed; no defense is allowed. If a certain practice does not qualify as a per se offense, the rule of reason applies. This term refers to the tests of "inherent effect" and "evident purpose." For example, a merger between two firms in the same market is not necessarily harmful or beneficial. Hence the court must then look to the inherent effect of the merger and its evident purpose or intent. A merger would be judged as legal or not depending on an evaluation of the evidence concerning the actual intent of the firms to monopolize the market and their ability to realize any such intent. Justice Thurgood Marshall has described the rationale for the per se category:

Per se rules always contain a degree of arbitrariness. They are justified on the assumption that the gains from imposition of the rule will far outweigh the losses and that significant administrative advantages will result. In other words, the potential competitive harm plus the administrative costs of determining in what particular situations the practice may be harmful must far outweigh the benefits that may result. If the potential benefits in the aggregate are outweighed to this degree, then they are simply not worth identifying in individual cases.27

These categories are also generally consistent with economic analysis.28 In figure 3.12 the initial price is P0 and output is Q0. The degree of competition is assumed to be sufficient to force price down to AC0. Now assume that a merger takes place that creates both cost savings and market power. Hence the postmerger equilibrium results in a price increase to P1 and a cost reduction to AC1. Output falls from Q0 to Q1. The merger results in a deadweight loss in consumers’ surplus equal to triangle A1 in figure 3.12. In contrast, a gain to society results from the cost savings, which is shown by the rectangle A2 in figure 3.12. That is, A2 represents the cost savings in producing output Q1 at an average cost of AC1 rather than of AC0. Of course, not all mergers produce both gains and losses. Some may result in only one or the other, or neither. It is, however, appropriate for the courts to investigate this issue rather than simply declaring mergers to be illegal per se.

Figure 3.12 Benefits (A2) and Costs (A1) to Society of Merger
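A minimal numerical sketch of this tradeoff is given below. The linear demand curve, cost levels, and post-merger price are assumptions chosen only to show how the areas A1 and A2 in figure 3.12 would be computed; they do not come from any actual merger.

```python
# Numerical illustration of the tradeoff in figure 3.12 under linear demand.
# The demand curve, costs, and post-merger price below are assumptions chosen
# only for illustration; they are not taken from any actual case.

def quantity_demanded(p):
    return 100 - p              # Q = 100 - P

ac0 = 40.0                      # pre-merger average cost
ac1 = 35.0                      # post-merger average cost (the efficiency gain)
p0 = ac0                        # competition forces the pre-merger price to AC0
p1 = 45.0                       # market power raises the post-merger price

q0 = quantity_demanded(p0)      # 60 units
q1 = quantity_demanded(p1)      # 55 units

a1_deadweight_loss = 0.5 * (p1 - p0) * (q0 - q1)   # triangle A1 = 12.5
a2_cost_savings = (ac0 - ac1) * q1                 # rectangle A2 = 275.0

print(f"A1 (loss) = {a1_deadweight_loss}, A2 (gain) = {a2_cost_savings}")
print(f"Net change in total surplus = {a2_cost_savings - a1_deadweight_loss}")
```

Under a total surplus standard this hypothetical merger would be approved, since A2 exceeds A1; under a consumer welfare standard it would be blocked, because the higher price leaves consumers worse off.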

In contrast to a merger that integrates the productive activities of the firms, a cartel can lead only to the area A1 losses. Cost savings are quite unlikely without actual integration. Hence it is sensible to place coordination among firms that attempts to fix prices or allocate markets in the per se category. The "inherent nature" of price fixing is to suppress competition, and there are no beneficial effects.

Enforcement and remedies

Our mission is to enforce our antitrust laws in a vigorous, transparent, even-handed, and fact-based fashion in order to ensure that consumers benefit from a competitive marketplace. (Bill Baer, assistant attorney general, Antitrust Division, U.S. Department of Justice, March 9, 2016)29

As noted earlier, federal government enforcement is shared by the DOJ and the FTC. The states also have their own antitrust laws, which are enforced by their attorneys general. The focus in this book is on the federal level, which has a far larger impact on the competitiveness of markets than do state antitrust authorities. Antitrust laws are also enforced by private actions. For example, consumers or businesses that believe they have been harmed by price fixing or some other possible antitrust violation can bring a private antitrust suit. In fact, private suits have been the predominant form of antitrust enforcement for many decades and account for more than 90 percent of all cases. In some instances, private cases are pursued after defendants' guilt has been established through government prosecution. It is clear that both public and private litigation are crucial elements of antitrust enforcement in the United States. In most other countries, public enforcement is the dominant or even exclusive form, though private litigation is expanding in some

jurisdictions, such as the European Union. The outcomes of antitrust cases are varied. By far the most common outcome is some form of settlement rather than a full-blown trial leading to a judicial verdict. Almost 90 percent of private cases are either settled or voluntarily dropped by the plaintiff.30 Settlements often take the form of agreements by the defendants to pay damages in order to avoid a trial. The damages are usually less than those claimed by the plaintiff, but the uncertainty of the outcome of a trial can make it in both parties' interests to agree on some settlement amount and avoid a trial. Government cases frequently end by consent decrees or orders. These are agreements between the government and the defendant that specify certain actions that the defendant will take. For example, in 1982 AT&T agreed to divest its telephone operating companies in return for the DOJ's agreement to terminate its monopolization case. Many leading companies have entered into consent decrees in association with an antitrust violation, including General Electric, IBM, and Microsoft. Merger cases are often settled by the use of so-called remedies, whereby the firms agree to spin off products or divisions in which there is an overlap in return for an agreement by the government not to prosecute the case. The 2014 merger of American Airlines and US Airways was approved by the DOJ with the requirement that some gates controlled by the two airlines at seven airports, including Reagan National and LaGuardia, be sold to low-cost carriers, such as Southwest. The intent of these sales was to lessen concerns about enhanced market power at those airports for which the two airlines had significant market shares and gates were in short supply (which would make entry difficult). In cases that proceed through the trial phase, the defendant may, of course, be found innocent or guilty. Various penalties are possible in cases where the defendant is found guilty (or pleads guilty as part of a plea agreement for a lesser penalty). In monopolization and mergers, a guilty defendant may be forced to divest certain assets. In 2012, the FTC succeeded in convincing the court that the acquisition of the physician practice group Saltzer Medical Group by St. Luke's Health System violated Section 7 of the Clayton Act and the Idaho Competition Act. The court ordered St. Luke's to divest itself of Saltzer's physicians and assets. The Ninth Circuit Court affirmed the decision in 2015. More typical is for the case to be litigated prior to a merger's consummation, in which case the remedy would be an injunction. An injunction is a court order to prohibit an antitrust violator from some specified future conduct. For example, a firm may be prohibited from only leasing (and not also selling) its photocopying machines. In its 2001 settlement with the DOJ, Microsoft was prohibited from using certain types of contracts with various types of companies, including computer manufacturers and Internet service providers. Fines or prison sentences may be used in criminal cases brought under the Sherman Act. These sanctions are usually reserved for price-fixing cases under Section 1, and it is increasingly common for the DOJ to obtain jail time in association with a price-fixing conviction. In the 1990s, 37 percent of convicted cartelists went to prison; that number has risen to 60–70 percent since 2000.31 In addition, the length of sentences has been rising. The average prison sentence has more than tripled in the past 20–30 years.
While the average sentence is currently almost two years in length, there is room for further growth, as the maximum sentence is ten years! For a very long time, corporate fines were very low and did not play a meaningful role in deterring antitrust violations. In the 1960s the average fine for a price-fixing conviction was only $131,000, which represented about two-tenths of 1 percent of the sales involved in the conspiracy.32 Beginning in the late 1990s, fines started to increase drastically due to revisions of the Federal Sentencing Guidelines. Fines can now reach a maximum of $100 million or twice the gain to the firm from the illegal activity, whichever is larger. As of May 2016, there have been 130 fines of $10 million or more for Section 1 violations, with the current record standing at $500 million in 1999 for Hoffmann-La Roche's participation in the vitamins price-

fixing conspiracy. In the European Union, government fines can reach up to 10 percent of sales, which is fifty times larger than the typical fine in the United States fifty years ago. In July 2016, the European Commission announced fines in excess of $3 billion in association with a cartel of trucking firms. In private cases, successful plaintiffs can win treble damages. Damages are measured as the excess payments made by customers over what the prices would have been in the absence of the conspiracy. If defendants are found guilty of fixing prices, then they must pay triple the amount of damages. However, a very high fraction of cases settle before there is a judicial verdict, and damages paid at settlement are more on the order of single damages.33 Even with settlement, some cases have resulted in billions of dollars of damages collected.

Global Competition Law and Enforcement

Competition law begins in 1889. That is the year Canada passed its law, followed by the United States in 1890. The subsequent path revealed these two countries to be outliers. A half century later, only Argentina had followed suit; fewer than a dozen countries had adopted a competition law by the end of the 1950s (see figure 3.13). A full century after Canada passed its law, only twenty-four countries had such laws, but by that time the tide had turned. In the past thirty years, competition laws have spread like wildfire, with more than 100 countries now having adopted legislation that prohibits certain practices associated with the exercise of market power. Excerpts from the competition laws of the United States, the European Union, and China are provided in the appendix to this chapter.

Figure 3.13 Number of Countries with Competition Laws Source: "Competition & Consumer Protection Authorities Worldwide," Federal Trade Commission. From www.ftc.gov/policy/international/competition-consumer-protection-authorities-worldwide (accessed on July 8, 2016).

Most competition laws are quite similar in what they prohibit: collusion, harmful mergers, and monopolization (also referred to as “exclusion” and as “abuse of dominance” in the European Union). However, there is considerable variation in terms of how the laws are interpreted (e.g., defining monopolization), the standard used in evaluating a practice (e.g., consumer surplus versus total surplus), the

intensity of enforcement for stopping anticompetitive conduct (e.g., how carefully mergers are evaluated), and the severity of penalties for deterring anticompetitive conduct (e.g., the magnitude of corporate fines). Surveying the global antitrust environment, one finds a reasonably broad consensus as to what it means to illegally collude, and a generally tough attitude prevails when prosecuting it. The differences lie more in the penalties that are brought to bear. Although about thirty-five countries have criminalized unlawful collusion, the United States remains the only place that routinely puts price-fixers in prison. For some countries, part of the initial challenge in aggressively enforcing competition law was that, under the auspices of industrial policy, there was a legacy of encouraging (or at least tolerating) collusion on the grounds of market stability and enhancing competitiveness in foreign markets. That permissive attitude has been significantly changing, and the world is on a trajectory to universally prohibit and punish collusion. Where international distinctions are more apparent is with regard to mergers and monopolization practices. Compared to the European Union, the United States has been distinctly more lenient in approving mergers. For example, the United States approved the merger of General Electric and Honeywell, which was subsequently blocked by the European Commission in 2001. The firms were forced to abandon the merger. The United States and European Union also reached similarly divergent decisions on a proposed merger between Boeing and McDonnell-Douglas. Additional jurisdictions have since appeared on the scene, including, most notably, China. By way of example is a vessel-sharing alliance on transoceanic trade routes proposed in 2013 by three global ocean freight companies. Though the alliance was approved by the United States and the European Union, the Anti-Monopoly Bureau of China's Ministry of Commerce (MOFCOM) chose to prohibit it. While the different decisions could have been due to different assessments of the potential market power effects and efficiencies (which would have come from the more efficient utilization of industry capacity), it is also possible that MOFCOM was engaging in industrial (rather than competition) policy by protecting Chinese shipping companies. The possibility for conflict between competition authorities is a challenge faced by global companies. The greatest source of differences in competition policy between the United States and the European Union concerns monopolization, which refers to practices by a dominant firm that harm competitors by excluding them from the market. These practices are prohibited by Section 2 of the Sherman Act and Section 3 of the Clayton Act (when they concern vertical restraints) and Article 102 of the Treaty on the Functioning of the European Union (TFEU). Historically, the European Union has been more likely to find such practices inherently abusive and to pursue cases. However, in the past twenty-five years, the European Union has been moving toward a system that more closely resembles that of the United States in being effects based, which means that a practice is not prohibited unless its effect is deemed harmful in that particular instance. At the same time, the United States has become increasingly noninterventionist, which has resulted in the DOJ and FTC pursuing far fewer monopolization cases.
The DOJ has had only one Section 2 case since 2000, while judicial rulings have made fewer practices inherently illegal under Section 2.34 This distinction between the European Union and the United States is exemplified by recent investigations into Google. With 70 percent of searches in the United States and 80 percent of searches in the European Union, there is the legitimate concern that Google’s dominant position might lead it to bias certain searches—for example, with regard to dining and travel—so as to divert follow-up search away from its competitors and to sites from which Google profits. The FTC engaged in a preliminary investigation and the Federal Trade Commissioners chose not to pursue a case (though apparently against the recommendation of their staff economists). In contrast, the European Commission charged Google with violating Article 102 in a Statement of Objections in April 2016.

These different actions are consistent with a greater concern in the United States that enforcement errors leading to inappropriate intervention could chill competition. Google is an innovative company, and restrictions on its conduct could retard innovation. In contrast, the European Union seems less concerned with chilling competition and more with ensuring that market dominance does not corrupt competition. As with mergers, these different perspectives make for a challenging global environment for companies, though they provide plenty of work for lawyers and economists!

Summary of the Chapter and Overview of Part I

Antitrust (or competition) law is intended to provide an environment for competition to thrive. This chapter has reviewed the primary laws, methods of enforcement, and exceptions to those laws. The field of industrial organization provides the tools for understanding imperfectly competitive markets and identifying when competitive processes may be thwarted. It can also be used to explore the implications of antitrust policy and to design more effective policies. Finally, we reviewed the case for competition and against monopoly, while also noting that the type of market structure that performs best in a dynamic setting remains unclear. Subsequent chapters in part I will describe how economists model markets characterized by imperfect competition. While largely the focus of chapters 4 and 5, all the chapters offer some coverage in this regard. Issues include, for example, when collusion is likely to emerge, how a dominant firm can cripple competition in ways that reduce social welfare, and what features of a merger suggest that it should be prohibited. The chapters on antitrust are organized by first stating the primary antitrust issue, providing some economic theory to understand where the market failure lies and why there is a role for government intervention, and then reviewing the development of antitrust case law, along with some of the more important cases. Chapter 4 focuses on collusive pricing and, in addition to the coverage just mentioned, describes recent innovations in enforcement policies. Chapter 5 examines entry and dynamic competition, which provides a foundation for later chapters. Chapter 6 introduces the topic of merger and focuses on horizontal merger—when two competitors combine to form a single firm—which is the type of merger of greatest concern to competition policy. Mergers between two firms that have a buyer-seller relationship (known as vertical mergers) are covered in chapter 7 along with vertical restraints (a form of monopolization or exclusion). Such restraints include tying, exclusive dealing, resale price maintenance, and territorial restraints. Chapter 8 covers monopolization practices, including predatory pricing, refusal to deal, and pay-to-delay entry, a practice that has recently arisen in the pharmaceutical industry. Chapter 9 focuses on "New Economy" markets, including multi-sided platforms (such as Google and Facebook) as well as credit cards and computer software (such as Microsoft's Windows and Google's Android). These markets have some features that naturally create market dominance and thereby pose a serious challenge to distinguishing legitimate competition from anticompetitive practices.

Questions and Problems

1. Assume, in the monopoly-versus-competition example in the text, where demand is Q = 100 − P and marginal cost MC = average cost AC = $40, that MC under competition remains at $40. However, assume that the reason the monopoly can continue to be a monopoly is that it pays $10 per unit of output to reimburse lobbyists for their efforts in persuading legislators to keep the monopoly insulated from competition. For example, the lobbyists may be generating (false) studies that demonstrate that competition results in higher costs. a. Calculate the prices and quantities under monopoly and competition. b. Calculate total economic surplus under monopoly and competition. The difference is the social cost of monopoly. c.
The social cost of monopoly can be disaggregated into two distinct types of cost: the resource cost of rent seeking and the usual deadweight loss of output restriction. What are their respective magnitudes? 2. Because of strong scale economies, a refrigerator monopolist would charge a price of $120 and sell forty-five refrigerators

in Iceland. Its average cost would be $60. However, the Iceland Planning Commission has determined that five refrigerator suppliers would be sufficiently competitive to equalize price with average cost. The five-firm equilibrium would yield a price of $100 and a total output of fifty refrigerators. a. Consumer surplus under the five-firm industry organization would be larger than under monopoly. If the demand curve is linear, by how much is consumer surplus larger? b. How much larger is producer surplus under monopoly? c. If the Planning Commission thinks that total economic surplus is the correct criterion, which market structure of the refrigerator industry will they choose? 3. Explain the difference between the Pareto criterion and the compensation principle as rules for deciding whether a particular policy change is in the public interest. 4. What is the best market structure for promoting technical progress? 5. Assume that the market demand for shoes is Q = 100 − P and that the constant average cost of production is $60. Consider two alternatives. In case C the industry is initially organized competitively, and in case M the industry is organized as a monopoly. In each case an invention leads to a lower, constant average cost of production of $50. The R&D cost of the invention is not relevant for this problem, as the issue is the magnitude of the incentive to invent in the two cases. Finally, there is no rivalry to make the invention: In case C the inventor may be assumed to have a patent of infinite life, and in case M the inventor is the existing monopolist and entry is barred. a. Find the initial price and quantity equilibria for the two cases. b. What is the return that the inventor in case C could expect from its lower cost production process? Assume that the inventor monopolizes the shoe industry. c. What is the return to the monopolist in case M that results from inventing the lower cost process? d. Interpret your results regarding the incentive to invent under monopoly and competition. e. In case C, if society could require that the new process be used efficiently—by making it freely available to all—what would be the increase in total economic surplus? How does this magnitude compare to the returns found in parts b and c? 6. Answer the same questions as in problem 5, but assume that the invention lowers average cost to $10 (rather than to $50). This problem describes a major invention (as opposed to the minor invention considered in problem 5). Does this change affect the comparative incentives under monopoly and competition? 7. Should antitrust policy be used to reduce income and wealth inequality? If so, how should this be done? Should there be a “public interest” amendment to the Sherman and Clayton Acts? Consider the possibility of errors and intentional abuse. 8. A study in 1975 estimated the effect of monopoly on equity as opposed to efficiency (W. Comanor and R. Smiley, “Monopoly and the Distribution of Wealth,” Quarterly Journal of Economics 89 [May 1975]: 177–194). For 1962, the wealthiest 0.27 percent of the population accounted for 18.5 percent of wealth. If all industries were competitive, this study estimated that the wealthiest 0.27 percent would have only 13 percent of wealth in 1962. Can you explain this finding? Hint: The wealthiest 0.27 percent held 30 percent of business ownership claims. 9. What are some challenges created by the global expansion of competition laws? What are ways in which to deal with those challenges? 
Has this expansion made competition policy more or less difficult for the United States? 10. Consider competition for an invention that would reduce the cost of making shoes. Assume that the demand for shoes is Q = 100 − P and the constant average cost of production is $60. The incumbent monopolist, M, faces competition from a single potential entrant, E, in being first to invent and patent a new, lower cost process for producing shoes. Let the new, lower cost process be one with a constant average cost of $50. If E wins the race for the patent, a Cournot equilibrium will result with M having an average cost of $60 and E having an average cost of $50. If M wins the race, it will remain a monopolist, but with an average cost of $50. a. What is M’s incentive to win the race? b. What is E’s incentive to win the race? c. Explain the intuition underlying the answers to parts a and b. It can be argued that this so-called efficiency effect leads to the “persistence of monopoly.” Why?

Appendix: Excerpts from Antitrust Statutes

United States

Sherman act (1890)

1. Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal. Every person who shall make any contract or engage in any combination or conspiracy hereby declared to be illegal shall be deemed guilty of a felony, and, on conviction thereof, shall be punished by fine not exceeding one million dollars if a corporation, or, if any other person, one hundred thousand dollars or by imprisonment not exceeding three years, or by both said punishments, in the discretion of the court. 2. Every person who shall monopolize, or attempt to monopolize, or combine or conspire with any other person or persons, to monopolize any part of the trade or commerce among the several States, or with foreign nations, shall be deemed guilty of a felony, and, on conviction thereof, shall be punished by fine not exceeding one million dollars if a corporation, or, if any other person, one hundred thousand dollars or by imprisonment not exceeding three years, or by both said punishments, in the discretion of the court.

Clayton act (1914) 2. a. It shall be unlawful for any person engaged in commerce, in the course of such commerce, either directly or indirectly, to discriminate in price between different purchasers of commodities of like grade and quality, where either or any of the purchases involved in such discrimination are in commerce, where such commodities are sold for use, consumption, or resale within the United States or any Territory thereof or the District of Columbia or any insular possession or other place under the jurisdiction of the United States, and where the effect of such discrimination may be substantially to lessen competition or tend to create a monopoly in any line of commerce, or to injure, destroy, or prevent competition with any person who either grants or knowingly receives the benefit of such discrimination, or with customers of either of them: Provided, That nothing herein contained shall prevent differentials which make only due allowance for differences in the cost of manufacture, sale, or delivery resulting from the differing methods or quantities in which such commodities are to such purchasers sold or delivered. b. Upon proof being made, at any hearing on a complaint under this section, that there has been discrimination in price or services or facilities furnished, the burden of rebutting the prima facie case thus made by showing justification shall be upon the person charged with a violation of this section, and unless justification shall be affirmatively shown, the Commission is authorized to issue an order terminating the discrimination: Provided, however, That nothing herein contained shall prevent a seller rebutting the prima facie case thus made by showing that his lower price or the furnishing of services or facilities to any purchaser or purchasers was made in good faith to meet an equally low price of a competitor, or the services or facilities furnished by a competitor. c. It shall be unlawful for any person engaged in commerce, in the course of such commerce, to pay or grant, or to receive or accept, anything of value as a commission, brokerage, or other compensation, or any allowance or discount in lieu thereof, except for services rendered in connection with the sale or purchase of goods, wares, or merchandise, either to the other party to such transaction or to an agent, representative, or other intermediary therein where such intermediary is acting in fact for or in behalf, or is subject to the direct or indirect control, of any party to such transaction other than the person by whom such compensation is so granted or paid. d. It shall be unlawful for any person engaged in commerce to pay or contract for the payment of anything of value to or for the benefit of a customer of such person in the course of such commerce as compensation or in consideration for any services or facilities furnished by or through such customer in connection with the processing, handling, sale, or offering for sale of any products or commodities manufactured, sold, or offered for sale by such person, unless such payment or consideration is available on proportionally equal terms to all customers competing in the distribution of such products or commodities. e. 
It shall be unlawful for any person to discriminate in favor of one purchaser against another purchaser or purchasers of a commodity bought for resale, with or without processing, by contracting to furnish or furnishing, or by contributing to the furnishing of, any services or facilities connected with the processing, handling, sale, or offering for sale of such commodity so purchased upon terms not accorded to all purchasers on proportionally equal terms.

f. It shall be unlawful for any person engaged in commerce, in the course of such commerce, knowingly to induce or receive a discrimination in price which is prohibited by this section. 3. It shall be unlawful for any person engaged in commerce, in the course of such commerce, to lease or make a sale or contract for sale of goods, wares, merchandise, machinery, supplies, or other commodities, whether patented or unpatented, for use, consumption, or resale within the United States or any Territory thereof or the District of Columbia or any insular possession or other place under the jurisdiction of the United States, or fix a price charged therefor, or discount from, or rebate upon, such price, on the condition, agreement, or understanding that the lessee or purchaser thereof shall not use or deal in the goods, wares, merchandise, machinery, supplies, or other commodities of a competitor or competitors of the lessor or seller, where the effect of such lease, sale, or contract for sale or such condition, agreement, or understanding may be to substantially lessen competition or tend to create a monopoly in any line of commerce. 7. No corporation engaged in commerce shall acquire, directly or indirectly, the whole or any part of the stock or other share capital and no corporation subject to the jurisdiction of the Federal Trade Commission shall acquire the whole or any part of the assets of another corporation engaged also in commerce, where in any line of commerce in any section of the country, the effect of such acquisition may be substantially to lessen competition, or to tend to create a monopoly. This section shall not apply to corporations purchasing such stock solely for investment and not using the same by voting or otherwise to bring about, or in attempting to bring about, the substantial lessening of competition. Nor shall anything contained in this section prevent a corporation engaged in commerce from causing the formation of subsidiary corporations for the actual carrying on of their immediate lawful business, or the natural and legitimate branches or extensions thereof, or from owning and holding all or a part of the stock of such subsidiary corporations, when the effect of such formation is not to substantially lessen competition.

Federal trade commission act (1914)

5. a. (1) Unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce, are declared unlawful.

European Union

Treaty on the functioning of the european union (2008)

Article 101: The following shall be prohibited as incompatible with the internal market: all agreements between undertakings, decisions by associations of undertakings and concerted practices which may affect trade between Member States and which have as their object or effect the prevention, restriction or distortion of competition within the internal market, and in particular those which: (a) directly or indirectly fix purchase or selling prices or any other trading conditions; (b) limit or control production, markets, technical development, or investment; (c) share markets or sources of supply; (d) apply dissimilar conditions to equivalent transactions with other trading parties, thereby placing them at a competitive disadvantage; (e) make the conclusion of contracts subject to acceptance by the other parties of supplementary obligations which, by their nature or according to commercial usage, have no connection with the subject of such contracts. Article 102: Any abuse by one or more undertakings of a dominant position within the internal market or in a substantial part of it shall be prohibited as incompatible with the internal market in so far as it may affect trade between Member States. Such abuse may, in particular, consist in: (a) directly or indirectly imposing unfair purchase or selling prices or other unfair trading conditions; (b) limiting production, markets or technical development to the prejudice of consumers; (c) applying dissimilar conditions to equivalent transactions with other trading parties, thereby placing them at a competitive disadvantage; (d) making the conclusion of contracts subject to acceptance by the other parties of supplementary obligations which, by their nature or according to commercial usage, have no connection with the subject of such contracts.

Merger regulation (council regulation no. 139/2004 of 20 January 2004 on the control of concentrations between undertakings) Article 2: Appraisal of concentrations. 1. Concentrations within the scope of this Regulation shall be appraised in accordance with the objectives of this Regulation and the following provisions with a view to establishing whether or not they are compatible with the common market. In making this appraisal, the Commission shall take into account: (a) the need to maintain and develop effective competition

within the common market in view of, among other things, the structure of all the markets concerned and the actual or potential competition from undertakings located either within or outwith the Community; (b) the market position of the undertakings concerned and their economic and financial power, the alternatives available to suppliers and users, their access to supplies or markets, any legal or other barriers to entry, supply and demand trends for the relevant goods and services, the interests of the intermediate and ultimate consumers, and the development of technical and economic progress provided that it is to consumers’ advantage and does not form an obstacle to competition. 2. A concentration which would not significantly impede effective competition in the common market or in a substantial part of it, in particular as a result of the creation or strengthening of a dominant position, shall be declared compatible with the common market. 3. A concentration which would significantly impede effective competition, in the common market or in a substantial part of it, in particular as a result of the creation or strengthening of a dominant position, shall be declared incompatible with the common market. Article 3: Definition of concentration. A concentration shall be deemed to arise where a change of control on a lasting basis results from: (a) the merger of two or more previously independent undertakings or parts of undertakings, or (b) the acquisition, by one or more persons already controlling at least one undertaking, or by one or more undertakings, whether by purchase of securities or assets, by contract or by any other means, of direct or indirect control of the whole or parts of one or more other undertakings.

China

Anti-monopoly law35 (2008)

Article 1: This Law is enacted for the purpose of preventing and restraining monopolistic conduct, protecting fair market competition, enhancing economic efficiency, safeguarding the interests of consumers and the interests of the society as a whole, and promoting the healthy development of the socialist market economy. Article 13: Any of the following monopoly agreements among the competing business operators shall be prohibited: (1) fixing or changing prices of commodities; (2) limiting the output or sales of commodities; (3) dividing the sales market or the raw material procurement market; (4) restricting the purchase of new technology or new facilities or the development of new technology or new products; (5) making boycott transactions; or (6) other monopoly agreements as determined by the Antimonopoly Authority under the State Council. For the purposes of this Law, "monopoly agreements" refer to agreements, decisions or other concerted actions which eliminate or restrict competition. Article 14: Any of the following agreements among business operators and their trading parties are prohibited: (1) fixing the price of commodities for resale to a third party; (2) restricting the minimum price of commodities for resale to a third party; or (3) other monopoly agreements as determined by the Anti-monopoly Authority under the State Council. Article 17: A business operator with a dominant market position shall not abuse its dominant market position to conduct the following acts: (1) selling commodities at unfairly high prices or buying commodities at unfairly low prices; (2) selling products at prices below cost without any justifiable cause; (3) refusing to trade with a trading party without any justifiable cause; (4) requiring a trading party to trade exclusively with itself or trade exclusively with a designated business operator(s) without any justifiable cause; (5) tying products or imposing unreasonable trading conditions at the time of trading without any justifiable cause; (6) applying dissimilar prices or other transaction terms to counterparties with equal standing; (7) other conduct determined as abuse of a dominant position by the Anti-monopoly Authority under the State Council. For the purposes of this Law, "dominant market position" refers to a market position held by a business operator having the capacity to control the price, quantity or other trading conditions of commodities in relevant market, or to hinder or affect any other business operator to enter the relevant market. Article 20: A concentration refers to the following circumstances: (1) the merger of business operators; (2) acquiring control over other business operators by virtue of acquiring their equities or assets; or (3) acquiring control over other business operators or possibility of exercising decisive influence on other business operators by virtue of contract or any other means. Article 28: Where a concentration has or may have effect of eliminating or restricting competition, the Anti-monopoly Authority under the State Council shall make a decision to prohibit the concentration. However, if the business operators
concerned can prove that the concentration will bring more positive impact than negative impact on competition, or the concentration is pursuant to public interests, the Anti-monopoly Authority under the State Council may decide not to prohibit the concentration.

Notes 1. This interpretation is most easily understood if the demand curve is assumed to be made up of many heterogeneous consumers with demands for at most one smartphone. Hence the individual with the highest valuation (or willingness-to-pay) for a smartphone is represented by the vertical intercept of the demand curve, 0A. The next highest valuation (for the second smartphone) is slightly less than 0A, and so forth. The person who actually has the marginal willingness-to-pay P* is the person who obtains a zero (individual) consumer surplus; all others who buy have positive surpluses. For example, the person with marginal willingness-to-pay of Q′F pays P* and thus has a surplus of FG. The key assumption necessary to make this interpretation generally valid is that the income effect for the good is “small.” See Robert D. Willig, “Consumer’s Surplus without Apology,” American Economic Review 66 (September 1976): 589–607, for support for this interpretation. 2. The monopolist sets marginal revenue MR equal to MC. MR is 100 − 2Q, and MC is 40. Equating and solving for Q gives Q = 30. The competitive equilibrium is found by setting P = MC. So 100 − Q = 40 gives Q = 60. In each case, substitute the equilibrium value of Q into the demand function to obtain the value of P. 3. On the role of antitrust with regard to inequality, see Jonathan B. Baker and Steven C. Salop, “Antitrust, Competition Policy, and Inequality,” Georgetown Law Journal 104 (2015): 1–28. 4. E. H. Chamberlin, The Theory of Monopolistic Competition (Cambridge, MA: Harvard University Press, 1933). 5. E. H. Chamberlin, “Product Heterogeneity and Public Policy,” American Economic Review 40 (May 1950): 85–92. 6. Harvey Leibenstein, “Allocative Efficiency vs. X-Inefficiency,” American Economic Review 56 (June 1966): 392–415. 7. By the same logic, government protection of an industry could create X-inefficiency, as it also constrains the pressures of competition. This issue is discussed in Roger Frantz, “Antitrust and X-Efficiency,” Antitrust Bulletin 60 (2015): 221–230. 8. The pioneering work on rent-seeking behavior is Gordon Tullock, “The Welfare Costs of Tariffs, Monopolies and Theft,” Western Economic Journal 5 (1967): 224–232. A more relevant piece for our analysis is Richard A. Posner, “The Social Costs of Monopoly and Regulation,” Journal of Political Economy 83 (August 1975): 807–827. 9. Wlliam P. Rogerson, “The Social Costs of Monopoly and Regulation: A Game-Theoretic Analysis,” Bell Journal of Economics 13 (Autumn 1982): 391–401. 10. Michael A. Salinger, “Tobin’s q, Unionization, and the Concentration–Profits Relationship,” Rand Journal of Economics 15 (Summer 1984): 159–170. 11. Arnold C. Harberger, “Monopoly and Resource Allocation,” American Economic Review 44 (1954): 77–87. 12. Keith Cowling and Dennis C. Mueller, “The Social Costs of Monopoly Power,” Economic Journal 88 (December 1978): 727–748. For a summary of many of these studies, see Paul Ferguson, Industrial Economics: Issues and Perspectives (London: Macmillan, 1988). 13. See, for example, Wesley M. Cohen and Richard C. Levin, “Empirical Studies of Innovation and Market Structure,” in Richard Schmalensee and Robert D. Willig, eds., Handbook of Industrial Organization, vol. 2 (Amsterdam: North-Holland, 1989), pp. 1059–1107. 14. This discussion is based on Kenneth Arrow, “Economic Welfare and the Allocation of Resources,” The Rate and Direction of Inventive Activity: Economic and Social Factors (Princeton, NJ: Princeton University Press, 1962). 15. 
There are two types of innovations, cost-reducing processes and new products. Here we focus on innovations that lower costs. Although a bit strained, the same analysis can be used for new products. Imagine that the demand exists but is everywhere below the current cost of production, so quantity purchased is zero. Now a cost-reducing innovation lowers cost such that positive quantities are purchased—and we have a new product. 16. A minor innovation reduces cost by a relatively small amount. The exact difference between a minor and a major innovation will become clear later in this section. 17. Jean Tirole, The Theory of Industrial Organization (Cambridge, MA: MIT Press, 1988), p. 392. Tirole also describes an efficiency effect. By assuming that there is rivalry to make the invention in the monopoly case, it can be argued that the

incumbent monopolist has a greater incentive to invent than a rival inventor who is a potential entrant into the industry. That is, if the rival inventor is successful and develops the low-cost process C1 first, a Cournot duopoly would result between the incumbent with cost C0 and the inventor. Hence, the incumbent monopolist’s incentive is the difference between winning the race and obtaining monopoly profit with C1 and its duopoly profit with C0. This is larger than the entrant’s duopoly profit. Also see R. Gilbert and D. Newbery, “Pre-emptive Patenting and the Persistence of Monopoly,” American Economic Review 72 (June 1982): 514–526. 18. For more examples, see Clayton M. Christensen, The Innovator’s Dilemma (Cambridge, MA: Harvard University Press, 1997). 19. Carl Shapiro, “Competition and Innovation: Did Arrow Hit the Bull’s Eye?,” in Josh Lerner and Scott Stern, eds., The Rate & Direction of Inventive Activity Revisited (Chicago: University of Chicago Press, 2012), pp. 361–409. 20. United States v. E. I. duPont de Nemours and Co., 351 U.S. 377 (1956). 21. George J. Stigler, “Introduction,” Business Concentration and Price Policy (Princeton, NJ: Princeton University Press, 1955), p. 4. 22. William Landes, “The Fire of Truth: A Remembrance of Law and Econ at Chicago,” Journal of Law and Economics 26 (1983): 163–234. 23. Robert H. Bork, The Antitrust Paradox (New York: Basic Books, 1978), p. 89. 24. These numbers are from Richard O. Zerbe and Sunny Knott, “An Economic Justification for a Price Standard in Merger Policy: The Merger of Superior Propane and ICG Propane,” in John B. Kirkwood, ed., Antitrust Law and Economics, Research in Law and Economics, vol. 21 (Bingley, UK: Emerald Group Publishing, 2004), pp. 409–444. 25. The ensuing discussion is based on Lukas Toth and Maarten Pieter Schinkel, “Balancing the Public Interest-Defense in Cartel Offenses,” Amsterdam Law School Legal Studies Research Paper 2016-05, Amsterdam Center for Law & Economics Working Paper Paper 2016-01. 26. Herbert Hovenkamp, The Antitrust Enterprise: Principle and Execution (Cambridge, MA: Harvard University Press, 2005), p. 48. 27. United States v. Container Corp. of America, 393 U.S. 333, 341 (1969). 28. This analysis is based on Oliver E. Williamson, “Economies as an Antitrust Defense: The Welfare Tradeoffs,” American Economic Review 58 (March, 1968): 18–36. 29. Before the Subcommittee on Antitrust, Competition Policy and Consumer Rights, Committee on the Judiciary, United States Senate Hearing on “Oversight of the Enforcement of the Antitrust Law,” March 9, 2016. 30. Steven C. Salop and Lawrence J. White, “Private Antitrust Litigation: An Introduction and Framework,” in Lawrence J. White, ed., Private Antitrust Litigation: New Evidence, New Learning (Cambridge, MA: MIT Press, 1989), Chapter 1. 31. www.justice.gov/atr/division-update/2014/criminal-program (accessed on July 18, 2016). 32. These figures are from Richard A. Posner, Antitrust Law (Chicago: University of Chicago Press, 1976). 33. John M. Connor and Robert H. Lande, “Not Treble Damages: Cartel Recoveries Are Mostly Less Than Single Damages,” Iowa Law Review 100 (2015): 1997–2023. 34. For more on this topic, the interested reader is referred to Eleanor M. Fox, “Monopolization and Abuse of Dominance: Why Europe Is Different,” Antitrust Bulletin 59 (Spring 2014): 129–152. 35. http://www.cecc.gov/resources/legal-provisions/prc-anti-monopoly-law-chinese-and-english-text on July 20, 2016

4 Oligopoly, Collusion, and Antitrust

Supreme Court Justice Antonin Scalia referred to collusion as the “supreme evil of antitrust.”1 In this chapter, we trace major judicial decisions from the passage of the Sherman Act in 1890 to the present to show the evolution of the current legal rules toward price fixing and thus how the government has strived to control this “evil.” Before beginning this task, however, we discuss the theories of collusive and oligopoly pricing. Oligopoly refers to a market structure with a small number of sellers—small enough to require each seller to take into account its rivals’ current actions and likely future responses to its own actions. Price-fixing conspiracies, or cartels, are not limited to a small number of sellers, although it is generally believed that a cartel is more effective when the number of participants is small. Our coverage proceeds in the following manner. To explore the theory of oligopoly and collusion, we will need to be properly tooled. To this end, an introductory discussion of game theory is provided. With that under our belts, we review imperfect competition when firms choose quantities, when firms choose prices, and when they collude rather than compete. To bring the theory of collusion to life, several price-fixing cartels are viewed through the lens of these models. The last section of this chapter discusses antitrust law; landmark price-fixing cases; and enforcement issues, such as the U.S. Department of Justice’s (DOJ’s) corporate leniency program. A very important assumption that underlies the analysis in this chapter is that potential entry is not an issue. We shall assume that the number of active firms is fixed, so that our focus is on the internal industry problems of firms reaching an equilibrium when the only competition comes from existing firms. Allowing for competition from entrants is examined in chapter 5. Game Theory Example 1: Advertising Competition Consider a duopoly in which firms do not compete on price because of collusion or regulation. Let the price be $15 and the quantity demanded be 100 units. If unit cost is $5, then profit per unit equals $15 − $5 = $10. Though it is assumed that firms have somehow been able to avoid competing on price, firms do compete through advertising. To simplify matters, a firm can advertise at a low rate (which costs $100) or at a high rate (which costs $200). Also for simplicity, assume that advertising does not affect market demand but only a firm’s market share; specifically, a firm’s market share depends on how much it advertises relative to its competitor. If both firms advertise an equal amount (whether low or high), then they equally share market demand—that is, each has demand of 50 units. However, if one firm advertises low and the other advertises
high, then the high-advertising firm has a market share of 75 percent. Given these data, we can calculate each firm’s profits net of advertising expenditure for all of the possible advertising rates. The result of these calculations is figure 4.1, which is called a profit (or payoff) matrix. This matrix comprises four cells. Each cell has two entries, the first of which is firm 1’s profit (net of advertising expenditure) and the second of which is firm 2’s profit (also net of advertising expenditure). If both firms advertise at a low rate, then each firm’s profit is $400. Each firm receives half of market demand, which is 50 units, and earns profit of $10 on each unit, which generates gross profit of $500. After subtracting the cost of advertising at a low rate, which is $100, we derive profit of $400. If instead firm 1 advertises at a low rate and firm 2 advertises at a high rate, then firm 1’s profit is $150 [= (10)(100)(0.25) − 100] and firm 2’s profit is $550 [= (10)(100)(0.75) − 200]. Finally, if both firms advertise at a high rate, then each receives profits of $300 [= (10)(100)(0.5) − 200].

Figure 4.1 Payoff Matrix for Two Competing Firms: Advertising
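For readers who want to verify these payoffs, the following minimal Python sketch (ours; variable names are illustrative) rebuilds the matrix in figure 4.1 from the stated primitives: a price of $15, unit cost of $5, market demand of 100 units, advertising outlays of $100 or $200, and market shares of 50/50 when advertising levels match and 75/25 otherwise.

```python
# Rebuild the advertising payoff matrix of figure 4.1 from its primitives.
PRICE, UNIT_COST, MARKET_DEMAND = 15, 5, 100
AD_COST = {"Low": 100, "High": 200}

def share(own, rival):
    """Market share: equal split if advertising matches, else 75% to the heavier advertiser."""
    if own == rival:
        return 0.5
    return 0.75 if own == "High" else 0.25

def profit(own, rival):
    """Profit net of advertising expenditure."""
    return (PRICE - UNIT_COST) * MARKET_DEMAND * share(own, rival) - AD_COST[own]

for a1 in ("Low", "High"):
    for a2 in ("Low", "High"):
        print(f"firm 1: {a1:4s}  firm 2: {a2:4s}  payoffs = ({profit(a1, a2):.0f}, {profit(a2, a1):.0f})")
# Matches figure 4.1: (400, 400), (150, 550), (550, 150), (300, 300).
```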

Suppose that the two firms simultaneously choose how much to advertise. That is, a firm decides whether to spend $100 or $200 on advertising at the same time that the other firm is choosing its advertising expenditure. How much should each firm advertise if it wants to maximize its profits? Let’s look at this decision from the perspective of firm 1. First notice that how much profit it earns depends on how intensively firm 2 advertises. If firm 2 advertises at a low rate, then firm 1 earns $400 from choosing a low rate of advertising and $550 from choosing a high rate. The higher profit from advertising at a high rate comes from the higher market share that firm 1 would receive, which is more than sufficient to offset the higher cost of advertising. Thus, if firm 1 believes that firm 2 will not advertise very much, then it should advertise intensively. If instead firm 1 believes that firm 2 will advertise at a high rate, then it earns $150 from low advertising but $300 from high advertising. Therefore, profit-maximizing behavior by firm 1 is to advertise intensively, regardless of how much it believes firm 2 will advertise. By the symmetry of this setting, firm 2’s decision process is identical. Our prediction as to how profit-maximizing firms would behave in this setting is that both firms would invest heavily in advertising. This is a stable outcome, as each firm is maximizing its profits. Note, however, that joint profits are not maximized. Joint profits are $600 when both advertise intensively but are $800 if both pursue minimal advertising. The problem is that each firm has an incentive to advertise heavily, but when they both do so, then each firm’s advertising is negated by the advertising of its rival.2 Example 2: Compatibility of Standards Prior to DVDs and the streaming of films using such services as Netflix, people would drive to their nearest video store and rent a film on a videocassette, which would be shown on their television set with the use of a videocassette recorder (VCR). In the early years of this industry, two competing formats were available: Sony’s Betamax (or Beta) and JVC’s VHS. Suppose firm 1 is a supplier of VCRs and firm 2 is a supplier of videocassettes. Each firm’s product can
have either the Beta format or the VHS format. Further suppose that firm 1’s cost of producing a VHS VCR is slightly less than its cost of producing a Beta VCR, while firm 2’s cost of producing Beta cassettes is slightly less than producing a VHS cassette. These two firms are the sole exporters of these products to a country that currently has no VCRs or videocassettes. Each firm must decide whether to manufacture its product with the Beta format or the VHS format. (Assume that it is too costly to manufacture both formats.) The consumers in this country are indifferent between the two technologies. All they care about is that the format of the videocassettes sold is the same as the format of the VCRs sold. Assume that the profit matrix for these two firms is as shown in figure 4.2. If both use VHS, then firm 1 earns $500 and firm 2 earns $200. If both use Beta, then firm 1 earns $400 and firm 2 earns $250. If they each supply a different format, then each earns zero profits, as demand is zero. What should firms do in this setting? If firm 1 believes that firm 2 is going to supply Beta cassettes, then firm 1 earns higher profits by also supplying Beta cassettes (compare $400 and 0). If firm 2 believes that firm 1 will supply Beta VCRs, then firm 2 is better off supplying Beta cassettes (compare $250 and 0). Both firms using the Beta format is then a stable outcome, as each firm is choosing the format that maximizes its profits given the format chosen by the other firm. No firm has an incentive to change its decision.

Figure 4.2 Payoff Matrix for Two Competing Firms: Beta versus VHS

There is another stable outcome, however, in which both firms choose the VHS format. Firm 1 prefers this stable outcome to the one in which the Beta format is chosen, while firm 2 prefers the outcome with the Beta format. We would predict that both firms would offer the same format, but which format that might be is unclear. In contrast to example 1, a firm’s decision in this setting depends on the decision made by the other firm. A firm’s profit-maximizing format is the one that matches the format that it believes the other firm is going to choose. This interdependence in firms’ decisions is an essential feature of oligopolies.3 The Strategic Form of a Game The two preceding examples are known as games. We are all familiar with the term “game,” but in fact there is a branch of applied mathematics known as game theory. In game theory, a game is a well-defined object. In this section, we describe what a game is and how one analyzes it. The reason for spending time on game theory is that it is a tool designed for investigating the behavior of rational agents in settings for which each agent’s best action depends on what other agents are expected to do. As a result, game theory will prove to be very useful in investigating firm behavior in oligopolies and, more generally, in providing insight concerning the strategic behavior of firms.4 Our goal is to understand the behavior of agents in a particular economic setting; for example, the decision by firms as to how much to advertise, how much to produce, or what technology to use. The first step in applying game theory is to define what the relevant game is (where “relevant” is defined by the researcher and the scientific community at large).

The strategic form (or normal form) of a game describes an economic setting by using three elements: (1) a list of the agents who are making decisions, (2) a list of the possible decisions that each agent can make, and (3) a description of the way in which each agent evaluates different possible outcomes. Those agents who are making decisions are called players. The decision rule of a player is called a strategy. A strategy tells a player how to behave in the setting being modeled. A player’s strategy set includes all possible strategies that a player could choose. Finally, a player’s payoff function describes how he evaluates different strategies. That is, given the strategies chosen by all players, a player’s payoff function describes the state of well-being (or welfare or utility) of players having played those strategies. In example 2 the players are firms 1 and 2, a player’s strategy is a format, and the strategy set of a player contains the two available formats: {Beta, VHS}. When a player is a firm, we typically assume that a player’s payoff is its profit. The payoff functions of the two players in example 2 are represented by figure 4.2.5 Nash Equilibrium Having modeled a particular economic setting as a game, one can use it to recommend to players how they should play or to make predictions about how they will play. The latter purpose is primarily how we will use it. A strategy is a decision rule that instructs a player how to behave over the course of the game. A strategy may be very simple, like a rate of advertising, or very complicated, like what price to set at the beginning of each month in response to last month’s sales and prices. On a conceptual basis, it is useful to think of players as choosing their strategies simultaneously before the start of the game. Once their strategies are chosen, the game commences, and players act according to their strategies. Assuming that players are rational, a player chooses the strategy that yields the highest payoff (if the player is a firm, this just means the firm is profit maximizing). As shown in figures 4.1 and 4.2, a player’s payoff depends not only on her strategy but also on the strategy of the other player. This relationship is an essential feature of games. Thus, in deciding which strategy is best, a player must take into account the strategies that she expects the other players to choose. To capture this interdependence, the concept of Nash equilibrium was developed. A list of strategies, one for each player, is a Nash equilibrium if each player’s strategy maximizes her payoff given the strategies chosen by the other players and if this condition holds simultaneously for all players.6 In example 1, the only Nash equilibrium is the strategy pair (High, High). It is a Nash equilibrium because given that firm 2 chooses High, firm 1 maximizes its profits (that is, its payoff) by also choosing High. In addition, given that firm 1 chooses High, High is optimal for firm 2. In other words, each firm is choosing a profit-maximizing strategy given the (profit-maximizing) strategy chosen by the other firm. At each of the other three strategy pairs, (Low, Low), (Low, High), and (High, Low), at least one firm can increase its profit by choosing a different strategy. For example, at (Low, High), firm 1 prefers to use High rather than Low, as it increases profit by 150. In example 2, both (Beta, Beta) and (VHS, VHS) are Nash equilibria, but (Beta, VHS) and (VHS, Beta) are not Nash equilibria. 
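As a check on the equilibria just described, here is a short sketch (ours; the payoff dictionaries simply transcribe figures 4.1 and 4.2) that enumerates pure-strategy Nash equilibria by testing every strategy pair for a profitable unilateral deviation.

```python
# Enumerate pure-strategy Nash equilibria of a two-player game.
# payoffs[(s1, s2)] = (payoff to player 1, payoff to player 2).

def nash_equilibria(payoffs, strategies1, strategies2):
    equilibria = []
    for s1 in strategies1:
        for s2 in strategies2:
            u1, u2 = payoffs[(s1, s2)]
            best1 = all(u1 >= payoffs[(d, s2)][0] for d in strategies1)  # player 1 cannot gain by deviating
            best2 = all(u2 >= payoffs[(s1, d)][1] for d in strategies2)  # player 2 cannot gain by deviating
            if best1 and best2:
                equilibria.append((s1, s2))
    return equilibria

advertising = {("Low", "Low"): (400, 400), ("Low", "High"): (150, 550),
               ("High", "Low"): (550, 150), ("High", "High"): (300, 300)}
formats = {("Beta", "Beta"): (400, 250), ("Beta", "VHS"): (0, 0),
           ("VHS", "Beta"): (0, 0), ("VHS", "VHS"): (500, 200)}

print(nash_equilibria(advertising, ["Low", "High"], ["Low", "High"]))  # [('High', 'High')]
print(nash_equilibria(formats, ["Beta", "VHS"], ["Beta", "VHS"]))      # [('Beta', 'Beta'), ('VHS', 'VHS')]
```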
Note that the outcomes previously referred to as being stable in examples 1 and 2 are Nash equilibria. Oligopoly Theory An oligopoly is an industry with a small number of sellers. How small is small cannot be decided in theory
but only in practice. The relevant criterion is whether firms take into account their rivals’ actions when deciding on their own actions. In other words, the essence of oligopoly is recognized interdependence among firms. AT&T certainly considers the actions and likely future responses of Verizon, T-Mobile, Sprint, and the other providers of mobile phone services when it decides on price, advertising, capital investment, and other variables. However, a Kansas wheat farmer would be silly to worry about any effect of his planned output on the planned output of the farmer next door. Since the pioneering work of Augustin Cournot in 1838,7 many theories of oligopoly have been developed. As it has turned out, the first model of oligopoly is still one of the most widely used ones. In this section we review Cournot’s modeling of the oligopoly problem. Afterward, we discuss some other oligopoly models. The Cournot Model To simplify the analysis, consider the following numerical example. Assume marginal cost is constant at $40, and the inverse market demand function is
P = 100 − Q    (4.1)
If industry supply is Q, then the price that equates supply and demand is 100 − Q. Prior to considering the oligopoly setting, let us review the monopoly case, as it will be a useful benchmark. Given equation 4.1, the marginal revenue of a monopolist is
MR = 100 − 2Q    (4.2)
Demand, marginal revenue, and marginal cost are depicted in figure 4.3. A monopolist maximizes its profit by setting Q so as to equate marginal revenue and marginal cost. This value for quantity is 30, and the price is $70. Monopoly profit equals $900.8

Figure 4.3 Monopoly Solution
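A minimal numerical sketch (ours) of the monopoly benchmark, assuming the inverse demand P = 100 − Q and marginal cost of 40 used throughout this example, reproduces the quantity, price, and profit just reported.

```python
# Monopoly benchmark: inverse demand P = 100 - Q, constant marginal cost of 40.
# Checks the values reported in the text: Q = 30, P = $70, profit = $900.

MC = 40

def price(Q):
    return 100 - Q            # equation 4.1

def marginal_revenue(Q):
    return 100 - 2 * Q        # equation 4.2 (revenue is (100 - Q)Q)

Q_m = (100 - MC) / 2          # solve MR = MC: 100 - 2Q = 40
print(Q_m, price(Q_m), (price(Q_m) - MC) * Q_m, marginal_revenue(Q_m) == MC)
# 30.0 70.0 900.0 True
```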

Now suppose that there are instead two firms, denoted firm 1 and firm 2. Both firms have constant marginal cost of $40 and produce identical products. The distinguishing features of the Cournot model are that firms choose quantity (rather than price) and do so simultaneously. The price of the good is set in the market so as to equate industry supply (which equals the sum of the firms’ outputs) and demand. Thus, if q1 and q2 are the outputs of firms 1 and 2, respectively, then the resulting market price is
P = 100 − q1 − q2    (4.3)
Though the Cournot model was developed more than a century before game theory, one can interpret the Cournot model as a game. In the Cournot game, the set of players is firms 1 and 2. The strategy of a firm is its quantity. Finally, a firm’s payoff is its profits. For our numerical example, the profits of firms 1 and 2 are, respectively,
π1 = (100 − q1 − q2)q1 − 40q1    (4.4)
π2 = (100 − q1 − q2)q2 − 40q2    (4.5)
These profits clearly demonstrate the interdependence that characterizes oligopoly. The profit of firm 1 depends not only on its own quantity but also on the quantity of firm 2 (and, similarly, firm 2’s profit depends on q1 and q2). In particular, the higher q2 is, the lower is π1 (holding q1 fixed). That is, the more your
competitor produces, the lower is the market price for the good, causing your revenue (and profit) to be lower. Having formulated the Cournot model, we next proceed to determine the behavior of firms implied by profit maximization. However, it is not sufficient to determine the quantity that maximizes firm 1’s profit without also considering what quantity maximizes firm 2’s profit, as the former depends on the latter. We need to find a quantity for each firm that maximizes its profits, given the quantity of its competitor. That is to say, we need to find a Nash equilibrium. As an initial step in deriving a Nash equilibrium, consider the problem faced by firm 1. It wants to select a quantity that maximizes π1, taking into account the anticipated quantity of firm 2. Suppose firm 1 believes that firm 2 plans to produce 10 units. Using equation 4.3, the market price when firm 1 produces q1 is then
P = 100 − q1 − 10 = 90 − q1    (4.6)
When q2 = 10, it follows that firm 1’s revenue is (90 − q1)q1, and its marginal revenue is 90 − 2q1. Firm 1 wants to choose q1 to maximize its profits, which means equating firm marginal revenue with marginal cost. As shown in figure 4.4, the profit-maximizing quantity is 25.

Figure 4.4 Cournot Model: Profit Maximization by Firm 1

One can go through the same exercise to find the profit-maximizing output for firm 1 for each possible value of q2. For example, if q2 = 30, then firm 1’s revenue is (70 − q1)q1 and its marginal revenue is 70 − 2q1.
As shown in figure 4.4, the profit-maximizing output is now 15. Doing this calculation for all possible values of q2, one finds that the value of q1 that maximizes firm 1’s profit is
q1 = 30 − 0.5q2    (4.7)
Equation 4.7 is known as firm 1’s best reply function, because it gives the value of q1 that is firm 1’s best (in the sense of maximizing profits) reply to firm 2’s output.9 In the same manner, one can derive the best reply function of firm 2:
q2 = 30 − 0.5q1    (4.8)
In figure 4.5, firm 1’s best reply function is plotted. Note that it is downward sloping; the higher is firm 2’s quantity, the lower the profit-maximizing quantity of firm 1 will be. The intuition lies in figure 4.4. Note that when q2 is raised from 10 to 30, firm 1’s demand and marginal revenue curves shift in. This movement reflects the fact that for any value of q1, a higher value of q2 results in firm 1 receiving a lower price for its product. Because its demand is weaker, firm 1 produces less in response to firm 2’s producing more. Hence, firm 1’s best reply function is downward sloping; that is, firm 1 optimally produces less, the more firm 2 is anticipated to supply.

Figure 4.5 Cournot Model: Firm 1’s Best Reply Function

A Nash equilibrium is defined by a pair of quantities such that each firm’s quantity maximizes its profit
given the other firm’s quantity; hence, no firm has an incentive to unilaterally change its decision. The appeal of such a solution is that no firm has an incentive to change its output given what its competitor is doing. We have already shown that a firm maximizes its profits only when it produces according to its best reply function. A Nash equilibrium is then defined by a pair of quantities such that both firms are simultaneously on their best reply functions, which is shown in figure 4.6.10

Figure 4.6 The Cournot Solution
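The Cournot calculations can be verified with a short sketch (ours; function names are illustrative). It evaluates the best reply function at q2 = 10 and q2 = 30, matching figure 4.4, and then iterates the two best reply functions to their fixed point, which is the Cournot solution described next.

```python
# Cournot duopoly: inverse demand P = 100 - q1 - q2, constant marginal cost of 40 for both firms.
# Verifies the best replies of figure 4.4 and iterates to the Cournot (Nash) equilibrium.

MC = 40

def best_reply(q_rival):
    # Firm i maximizes (100 - q_i - q_rival) * q_i - 40 * q_i. The first-order condition
    # 100 - 2 q_i - q_rival - 40 = 0 gives q_i = 30 - 0.5 q_rival (equations 4.7 and 4.8).
    return 30 - 0.5 * q_rival

print(best_reply(10), best_reply(30))   # 25.0 15.0, the two cases shown in figure 4.4

# Iterate the best replies to their fixed point: the Cournot solution q1 = q2 = 20.
q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = best_reply(q2), best_reply(q1)

price = 100 - q1 - q2
print(q1, q2, price, (price - MC) * q1)   # 20.0 20.0 60.0 400.0
```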

A Nash equilibrium has each firm producing 20 units. That is, given that one’s rival produces 20, an output of 20 maximizes a firm’s profit. You should convince yourself that any other output pair is not a Nash equilibrium, as at least one firm is not maximizing its profit. For example, at q1 = 30 and q2 = 15, firm 2 is maximizing its profits (it is on its best reply function), but firm 1 is not. Given q2 = 15, firm 1’s profit-maximizing output is 22.5 (see figure 4.5). To summarize, the Nash equilibrium of the Cournot game (which we also refer to as the Cournot solution) is
q1 = 20, q2 = 20, P = 60, π1 = 400, π2 = 400
Note that the price at the Cournot solution exceeds the competitive price of 40 (which equals marginal cost) but is less than the monopoly price of 70. This feature is standard in the Cournot solution and is not
particular to our numerical example. The Cournot price exceeds marginal cost because firms do not act as price takers. Firms’ quantity decisions affect price, and they realize this fact. They know that the more they produce, the lower the market price will be. As a result, each firm supplies less than it would if it were a price taker, resulting in the Cournot price exceeding the competitive price. The Cournot price is less than the monopoly price, because each firm cares only about its own profits and not industry profits. When firm 1 considers increasing its quantity, it takes into account how this quantity increase affects π1 but ignores how it affects π2. As a result, in maximizing its own profit, a firm produces too much from the perspective of maximizing industry profit. Hence, the monopoly price (which is also the joint-profit-maximizing price under constant marginal cost) exceeds the Cournot price. Though each firm is maximizing its own profit at the Cournot solution, both firms could simultaneously raise their profits by jointly reducing their quantity from 20 toward the joint-profit-maximizing output of 15. The problem is that neither firm has an incentive to do so. Suppose that firms were to communicate prior to choosing their quantities and agreed to each produce 15. If firm 1 actually believed that firm 2 would go along with the agreement and produce 15, firm 1 would do better by reneging and producing 22.5 (reading from its best reply function). Given q2 = 15, q1 = 22.5 yields profits of 506.25 for firm 1 versus profits of 450 from q1 = 15. Of course, firm 2 is no fool and thus would never produce 15 anyway. Note the similarity with the problem faced by firms in the advertising game of example 1. In both games, the Nash equilibrium is Pareto inefficient in that firms could raise both of their profits by jointly acting differently.11 It can be shown that the Cournot solution predicts that the price-cost margin is inversely related to the number of firms, denoted n, and the absolute value of the elasticity of market demand, denoted η:
(P − MC)/P = 1/(nη)    (4.9)
Recall that the elasticity of demand measures how responsive demand is to a change in price. According to this formula, as the number of firms increases, the right-hand side of equation 4.9 becomes smaller, implying that the price-cost margin shrinks. It also tells us that as the industry becomes perfectly competitive (the number of firms approaches infinity), the price-cost margin converges to zero, or, in other words, price converges to the competitive price of marginal cost.12 Other Models of Oligopoly In the more than 175 years since Cournot developed his pioneering analysis of oligopoly, many alternative models of oligopoly have been put forth. In 1934, Heinrich von Stackelberg proposed a modification of the Cournot model based on the observation that some industries are characterized by one firm being a leader in the sense that it commits to its quantity prior to its competitors doing so.13 The Stackelberg model is a game with sequential moves in which the “leader” (say, firm 1) chooses quantity and then, after observing firm 1’s quantity, the “follower” (firm 2) chooses its quantity. Equilibrium behavior in the Stackelberg model entails the follower, firm 2, acting the same way as in the Cournot model (though not choosing the same output). In place of firm 2’s conjecture as to what firm 1 will produce, firm 2 actually observes firm 1’s quantity. Given the observed output of firm 1, firm 2 chooses its output to maximize its profit, which is just given by its best reply function. In contrast, firm 1, being the leader, acts quite differently. Rather than take firm 2’s output as fixed (as it does in the Cournot model), firm 1 recognizes that firm 2 will respond to the output choice of firm 1. Taking into account the response of the
follower, the leader chooses its quantity to maximize its profit. Using the example from the preceding section, the Stackelberg leader’s problem is to choose output so as to maximize:
[100 − q1 − (30 − 0.5q1)]q1 − 40q1 = (70 − 0.5q1)q1 − 40q1    (4.10)
This expression is firm 1’s profit but where we have substituted firm 2’s best reply function, 30 − 0.5q1, for its quantity, q2. This substitution reflects the fact that firm 1 influences firm 2’s quantity choice. Solving for the value of q1 that maximizes equation 4.10, the leader produces 30 units and the follower responds with a quantity of 15 (= 30 − 0.5 · 30). Compared to the Cournot solution, firm 1 produces more and firm 2 produces less, and thus the leader ends up with a higher market share. Firm 1 produces above the Cournot quantity, because it knows that firm 2 will respond by producing less (recall that firm 2’s best reply function is downward sloping). In other words, firm 1 takes advantage of moving first by committing itself to a higher quantity, knowing that it will induce its rival to produce less. A second class of oligopoly models assumes that firms choose price rather than quantity. The first piece of work in this line is that of Joseph Bertrand.14 In a critique of Cournot’s book, Bertrand sketched a model in which firms make simultaneous price decisions. When firms offer identical goods and have constant marginal cost, a unique Nash equilibrium emerges, and it has both firms pricing at marginal cost. The Bertrand model yields the surprising result that oligopolistic behavior generates the competitive solution! To understand how this striking finding emerges, suppose each firm’s marginal cost is 40 and consider both firms pricing above marginal cost at, say, 44. Assume market demand is 100 − P so, at a price of 44, demand equals 56. With identical prices and identical goods, consumers are indifferent to which firm to buy from, so we will suppose that half of the consumers buy from firm 1 and half from firm 2. Each firm then has profits of (44 − 40)(1/2)(100 − 44) = 112. Now consider firm 1 undercutting firm 2’s price of 44 by pricing at 43. All consumers will buy from firm 1, as its price is below that of firm 2 (recall that the firms offer identical products). Market demand is 57 (= 100 − 43) so firm 1’s profit is (43 − 40)57 = 171. Given that firm 1 earns higher profits from a price of 43, 44 is not firm 1’s best reply to firm 2’s price of 44 which means both firms pricing at 44 is not a Nash equilibrium. In fact, any prices above the marginal cost of 40 is not a Nash equilibrium, because at those prices, there will be a firm that has an incentive to slightly undercut its rival’s price and win the entire market. Only when both firms price at cost is it a Nash equilibrium. Given that the other firm prices at 40, a firm’s profit is zero from a price of 40 (as its profit margin is zero), zero from a price above 40 (as its demand is zero), and it incurs losses from a price below 40 (as its price is less than cost). The competitive solution is then the Nash equilibrium for the Bertrand model.15 Product Differentiation One of the most significant ways in which firms compete is by trying to make their products distinct from other products in the market. The reason is that the more differentiated one’s product is, the more one is able to act like a monopolist. That is, a firm can set a higher price without inducing large numbers of consumers to switch to buying competitors’ products. To consider the role of product differentiation, let us follow the suggestion of Bertrand and assume that firms make simultaneous price decisions with constant marginal cost—though we will now assume that firms’ products are heterogeneous. 
Consumers perceive these products as being imperfect substitutes. That is, some consumers are willing to buy one firm’s product even though it is priced higher than its
competitors’. It also typically means that a small change in a firm’s price causes a small change in its demand. For a market with two firms, let Di (p1, p2) denote the number of units demanded of firm i’s product when the prices are p1 and p2 (i = 1, 2). An example of firm demand curves when products are differentiated is
D1(p1, p2) = 100 − p1 + 0.5p2    (4.11)
D2(p1, p2) = 100 − p2 + 0.5p1    (4.12)
Note that a firm’s demand is decreasing in its own price but increasing in the price of its rival. The latter property reflects products being substitutes, so that when firm 2 raises its price, some consumers who previously purchased firm 2’s product decide to switch to buying firm 1’s product. Another notable property is that even if p1 > p2, firm 1 still has positive demand (as long as the difference between prices is not too great). Because firm 1’s product is distinct from that of firm 2, some consumers are willing to pay a premium for firm 1’s product. Finally, note that a firm’s demand is affected more by a change in its own price than by a change in the price of its rival. If firm 1’s price rises by 10, then its demand falls by 10, while if firm 2’s price rises by 10 then firm 1’s demand rises by only 5. If each firm is assumed to have a constant marginal cost of 20, then the firms’ profit functions are
π1 = (p1 − 20)(100 − p1 + 0.5p2)    (4.13)
π2 = (p2 − 20)(100 − p2 + 0.5p1)    (4.14)
where firm i earns profit of pi − 20 on each unit sold. To derive a Nash equilibrium for the differentiated products price game, we can use the same method that we used for the homogeneous goods quantity game (that is, the Cournot model). The first step is to derive each firm’s profit-maximizing price given the price of its competitor (that is, the best reply function). For the profit functions 4.13 and 4.14, one can show that the best reply functions are
p1 = 60 + 0.25p2    (4.15)
p2 = 60 + 0.25p1    (4.16)
In contrast to the Cournot game, a firm’s best reply function is upward sloping (see figure 4.7). The reason is as follows. Firm 1’s demand rises in response to firm 2 charging a higher price, as some of firm 2’s consumers would decide to buy from firm 1. Generally, the stronger a firm’s demand is, the higher its profitmaximizing price will be. It follows that the higher firm 2’s price becomes, the higher is the price that maximizes firm 1’s profit, so its best reply function is upward sloping.

Figure 4.7 Differentiated Products Price Game
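A brief sketch (ours) of this differentiated-products game solves the best reply system in equations 4.15 and 4.16 for the equilibrium price discussed next and evaluates the profits at a common price of 90, the figures used in the remainder of this section.

```python
# Differentiated-products price game: D1 = 100 - p1 + 0.5 p2, D2 = 100 - p2 + 0.5 p1, marginal cost 20.
# Finds the Nash equilibrium price and evaluates profits at a common price of 90.

MC = 20

def profit(p_own, p_rival):
    return (p_own - MC) * (100 - p_own + 0.5 * p_rival)

def best_reply(p_rival):
    # First-order condition of the profit function: p = 60 + 0.25 p_rival (equations 4.15 and 4.16).
    return 60 + 0.25 * p_rival

# Nash equilibrium: the fixed point p = 60 + 0.25 p, i.e., p = 80 for both firms.
p = 0.0
for _ in range(100):
    p = best_reply(p)
print(p, profit(p, p))                                # 80.0 3600.0

# Both firms pricing at 90, and the best unilateral deviation from 90.
print(profit(90, 90))                                 # 3850.0
print(best_reply(90), profit(best_reply(90), 90))     # 82.5 3906.25
```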

As with the Cournot game, Nash equilibrium occurs where the best reply functions intersect so that each firm is choosing a price that maximizes its profit given the other firm’s price. Equilibrium then has firms pricing at 80. To convince yourself that this is true, plug 80 for p2 in equation 4.15, and you will find that the resulting price for firm 1 is 80; and if you plug 80 for p1 in equation 4.16, the resulting price for firm 2 is 80. Given that one’s rival prices at 80, it maximizes a firm’s profit to also price at 80. Since this argument applies to both firms, both firms pricing at 80 is a Nash equilibrium. An important property of equilibrium in models of imperfect competition is that prices rise as products become more differentiated. To understand the underlying intuition, recall the analysis for the Bertrand model with identical products. Each firm wants to undercut its rival’s price, because even a very slight discount will significantly increase the firm’s demand compared to matching the rival’s price. This pricecutting incentive drives price down to marginal cost. Only when prices equal cost is there no incentive to undercut (as a lower price would mean incurring losses). As products become differentiated, the rise in demand from undercutting a competitor’s price is lessened. Some consumers are willing to buy a competitor’s product even at a price above that of other firms’ products (which is the definition of product differentiation). Because a firm does not get as much of its competitor’s demand from lowering its price, there is reduced incentive for firms to engage in price undercutting. Equilibrium prices are then higher when firms’ products are more differentiated. Product differentiation softens price competition. While equilibrium prices are above cost, they are still below the levels that a monopolist would set if it controlled both products. One can show that joint profits from the two products are maximized when each is
priced at 90. A product’s profits are (90 − 20)(100 − 90 + 0.5 · 90) = 3,850 and industry profits equal 7,700. In contrast, Nash equilibrium prices yield a firm profit of (80 − 20)(100 − 80 + 0.5 · 80) = 3,600 and an industry profit of only 7,200. Just as for the Cournot model, the equilibrium is Pareto inefficient: both firms can be made better off by jointly raising their prices. Though there is a collective incentive to raise price from 80 to 90, there is no individual incentive to do so. Given that its rival prices at 80, a firm would reduce its profit from 3,600 to 3,500 (= (90 − 20)(100 − 90 + 0.5 · 80)) by raising price from 80 to 90. Collusion Using the Cournot solution, we found that oligopolistic competition results in firms jointly producing too much. Although each firm is individually maximizing its profits, both firms are aware that industry profits are not maximized. Going back to the example in the preceding section, the Nash equilibrium entails each firm producing 20 units and receiving profits of 400. If instead they each produced 15 (which is half of the monopoly output of 30), then profits would increase to 450. No individual firm has an incentive to do so, because producing 20 units is optimal, given that the other firm is anticipated to supply 20 units. However, if firms could somehow manage to coordinate so as to jointly reduce their outputs, they could increase profits for everyone. The lure of higher profits through coordination of their behavior is what collusion is all about. Historically, in many cases firms have successfully coordinated their quantity and price decisions. It has occurred in a wide array of markets, including sugar, vitamins, cement, chemicals, gasoline, turbine generators, memory chips, and toy retailing. Since the Cournot solution predicts that firms do not maximize joint profits, how are we to explain the fact that, in some industries, firms are able to collude? What keeps each firm from deviating from a collusive agreement by instead producing at a higher rate and thereby earning higher profits? A Theory of Collusion The inadequacy of the Cournot solution lies in the limitations of the Cournot model. A critical specification is that firms make quantity decisions only once. In reality, firms live for many periods and are continually making quantity decisions. To correct for this weakness in the Cournot model, consider an infinitely repeated version of that model.16 In each period, firms make simultaneous quantity decisions and expect to make quantity decisions for the indefinite future. For simplicity, assume that the demand curve and cost functions do not change over time. The standard (one-period) Cournot game and the infinitely repeated Cournot game have several important differences. First, because a firm chooses output more than once, a strategy is a more complicated object than simply some number of units. Rather than generally define a strategy, we will consider some examples. Second, and this is very important, each firm will receive information over time in the form of past prices and quantities. Though firms choose quantity simultaneously in a given period, each firm is assumed to observe the other firm’s past quantities as well as the resulting market price. And third, each firm acts to maximize the sum of its discounted profits rather than just today’s profit. Let r denote the interest (or discount) rate for each firm. Let q1(t) and q2(t) denote the period t quantity of firm 1 and firm 2, respectively, where t = 1, 2, 3, ….
We first want to show that one Nash equilibrium for this game has each firm produce the Cournot quantity in every period: q1(t) = 20 and q2(t) = 20 for t = 1, 2, 3, …. Recall that this example has each firm’s per-period profit as 400. A firm’s payoff, which is just the sum of
its discounted profits, is then17
400/(1 + r) + 400/(1 + r)^2 + 400/(1 + r)^3 + … = 400/r    (4.17)
For example, if the discount rate r is 10 percent, then a firm’s payoff (and its market value) is $4,000. Now consider firm 1 choosing a strategy in which its quantity differs from 20 for one or more periods. In all those periods for which q1(t) ≠ 20, profits will be lower (as 20 units maximizes current profit), while profits in periods in which the firm continues to produce 20 remain the same. Hence, the sum of discounted profits must be lower. It is then optimal for a firm to produce 20 in each period when it expects its competitor always to do so. One Nash equilibrium for the infinitely repeated Cournot game is then just a repetition of the Cournot solution. Alternatively stated, repetition of a Nash equilibrium for the single-period game is a Nash equilibrium for the infinitely repeated game. We have found a Nash equilibrium for the infinitely repeated Cournot game, but it does not put us any closer to understanding how firms can sustain a collusive outcome like the joint-profit maximum. So let us consider a very different strategy from the one just described. In particular, consider a strategy that allows a firm’s output to depend on how much its competitor produced in the past. An example of such a strategy (for firm 1) is
q1(t) = 15 if t = 1, or if q1(s) = 15 and q2(s) = 15 for all s = 1, …, t − 1; q1(t) = 20 otherwise    (4.18)
This strategy says that firm 1 should produce 15 in period 1. Recall that 30 is the monopoly quantity, so that each firm producing 15 maximizes joint profit. In any future period, it should produce 15 if and only if both firms produced 15 in all previous periods. Alternatively, if one or more firms deviated from producing 15 in some past period, then firm 1 should produce 20 (the Cournot quantity) in all remaining periods. This strategy is called a trigger strategy, because a slight deviation from the collusive output of 15 triggers a breakdown in collusion. The strategy for firm 2 is similarly defined:
q2(t) = 15 if t = 1, or if q1(s) = 15 and q2(s) = 15 for all s = 1, …, t − 1; q2(t) = 20 otherwise    (4.19)
If both firms use these strategies, then each will produce 15 in period 1. Because each produced 15 in period 1, then, as prescribed by these strategies, each firm will produce 15 in period 2. By the same argument, the two firms will produce 15 in every period. Hence the monopoly price is observed in all periods. If we can show that these strategies form a Nash equilibrium (that is, no firm has an incentive to act
differently from its strategy), we will have a theory that explains how profit-maximizing firms can sustain collusion. Given that firm 2 uses the strategy in equation 4.19, firm 1 receives profit of 450 in each period from using equation 4.18, so that its payoff is 450/r. Now consider firm 1 choosing a different strategy. Any meaningfully different strategy must entail producing a quantity different from 15 in some period. There is no loss of generality from supposing that this occurs in the first period. If q1(1) ≠ 15, then firm 2 learns after period 1 that firm 1 deviated from the collusive output. According to firm 2’s strategy, it will respond by producing 20 in all future periods. Because firm 1 is aware of how firm 2 will respond, firm 1 will plan to produce 20 in all future periods after deviating from 15 in the current period, as doing anything else would lower firm 1’s payoff from period 2 onward. It follows that if q1(1) ≠ 15, then firm 1’s payoff is
[(100 − q1(1) − 15)q1(1) − 40q1(1)]/(1 + r) + 400/[r(1 + r)]    (4.20)
The first term is period 1 discounted profits, while the second term is the sum of discounted future profits. Given that firm 1 deviates from the collusive output level of 15 in the first period, expression 4.20 shows that the amount by which q1(1) differs from 15 affects current profits but not future profits, because the punishment for cheating is independent of how much a firm cheats. Therefore, expression 4.20 is maximized by setting q1(1) = 22.5, as that output maximizes current profits (reading off firm 1’s best reply function in figure 4.5 when q2 = 15). Substituting 22.5 for q1(1) in expression 4.20, we derive the highest payoff that firm 1 can earn from choosing a strategy different from equation 4.18:
506.25/(1 + r) + 400/[r(1 + r)]    (4.21)
Figure 4.8 shows the time path of profit from going along with the collusive agreement—earn 450 every period—and cheating on the cartel—earn 506.25 in period 1 and 400 every period afterward. Thus, cheating gives higher profits up front but lower profits in the future because it intensifies competition.

Figure 4.8 Profits from Colluding versus Cheating
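The comparison in figure 4.8 reduces to a single condition on the discount rate. The sketch below (ours; names are illustrative) computes the collusive and cheating payoffs and the critical discount rate derived in the next paragraphs, and applies the same condition to the differentiated-products price game analyzed at the end of this section.

```python
# Trigger-strategy collusion: colluding earns pi_c each period; cheating earns pi_d once and the
# punishment profit pi_n forever after. With first-period profits discounted by 1/(1 + r),
# collusion is sustainable when pi_c / r >= pi_d / (1 + r) + pi_n / (r * (1 + r)),
# which rearranges to r <= (pi_c - pi_n) / (pi_d - pi_c).

def payoff_collude(pi_c, r):
    return pi_c / r

def payoff_cheat(pi_d, pi_n, r):
    return pi_d / (1 + r) + pi_n / (r * (1 + r))

def critical_discount_rate(pi_c, pi_d, pi_n):
    return (pi_c - pi_n) / (pi_d - pi_c)

# Quantity game: collude at 15 units each (450), cheat to 22.5 (506.25), punish with Cournot (400).
print(round(critical_discount_rate(450, 506.25, 400), 2))            # 0.89
print(payoff_collude(450, 0.5) >= payoff_cheat(506.25, 400, 0.5))    # True: sustainable at r = 0.5
print(payoff_collude(450, 1.0) >= payoff_cheat(506.25, 400, 1.0))    # False: breaks down at r = 1.0

# Price game with differentiated products: collude at 90 (3,850), cheat to 82.5 (3,906.25), punish at 80 (3,600).
print(round(critical_discount_rate(3850, 3906.25, 3600), 2))         # 4.44
```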

Given that firm 2 uses the strategy in equation 4.19, firm 1 earns 450/r from using the strategy in equation 4.18, while the highest payoff it can get from choosing a different strategy is expression 4.21. Therefore, the strategy in equation 4.18 maximizes firm 1’s payoff if and only if
450/r ≥ 506.25/(1 + r) + 400/[r(1 + r)]    (4.22)
Working through the algebra, expression 4.22 holds if and only if r ≤ 0.89. In other words, if firm 1 sufficiently values future profits (r is sufficiently small), it prefers to produce 15 each period rather than cheat and produce more than 15. By the same argument one can show that firm 2’s strategy in equation 4.19 maximizes the sum of its discounted profits if and only if r ≤ 0.89. We conclude that equations 4.18 and 4.19 define a Nash equilibrium when the discount rate is sufficiently low.18 In contrast to the one-period game, we have shown that firms are able to produce at the joint-profit maximum without either firm having an incentive to cheat. Critically for this result, a firm’s quantity decision depends on the past behavior of its competitor and firms anticipate interacting in the future. If firm 1 ever cheats on the collusive agreement by producing something other than 15 units, then firm 2 will respond by not restricting supply to the collusive level (perhaps forever, as specified above, or for some finite length of time) and instead producing the higher output of 20. As this response to a deviation lowers firm 1’s future profits, a firm that contemplates cheating will expect to incur the future punishment of forgone collusion. As long as future interaction is possible and a firm sufficiently values future profits, it will prefer not to cheat. The prospect of higher profits today is offset by the anticipation of lower profits in the future. Note that if r > 0.89, then these trigger strategies do not form a Nash equilibrium, which means that they cannot sustain the joint-profit maximum. In that event, each firm prefers to cheat and receive higher profit today. However, trigger strategies with a higher collusive output should form a Nash equilibrium. For
example, if 0.89 < r ≤ 1.33, one can show that firms can use trigger strategies to support an output of 16 though not an output of 15. Although price is below the monopoly level, it is still above the Cournot price. With this model, collusion becomes more difficult as the number of firms grows: as the number of firms increases, each firm must have a lower discount rate for collusion to be stable. As the number of firms increases, two things happen. First, each firm has a smaller share of the market at the joint-profit maximum. A smaller market share provides a firm with a bigger increase in current profit from cheating on the collusive agreement, which makes it more difficult to sustain collusion. However, the punishment for cheating—the Cournot solution—is more competitive when there are more firms. Because price then falls more after cheating, the loss from cheating is greater when the number of firms is higher. This second effect works in the opposite direction to make collusion less difficult. One can show that the first effect dominates —and therefore, cheating is more tempting—when more firms are colluding. As collusion is then more difficult to sustain, it either means firms will be unable to collude or they will have to coordinate on a lower collusive price. Finally, note that the above trigger strategy can be modified to sustain collusion when firms choose prices rather than quantities. Returning to our model when firms have differentiated products, competition produces Nash equilibrium prices of 80 for each of the two firms, while their profits are jointly maximized if they were to coordinate on a price of 90. A trigger strategy has firms initially price at 90 and continue to do so as long as firms priced at 90 in past periods. If there is ever a deviation from pricing at 90, then firms price at 80 thereafter. The condition for this trigger strategy to be a Nash equilibrium is
3,850/r ≥ 3,906.25/(1 + r) + 3,600/[r(1 + r)]    (4.23)
From our earlier calculations, 3,850 is firm profits when both price at 90, so the left-hand side of expression 4.23 is the collusive payoff from receiving profit of 3,850 in every period. On the right-hand side of expression 4.23 is the highest payoff from cheating. If a firm considers deviating from a price of 90, it will optimally price at 82.5 according to its best reply function in equation 4.15: 60 + .25 · 90 = 82.5. The associated profits are (82.5 − 20)(100 − 82.5 + 0.5 · 90) = 3,906.25. After cheating occurs, firms will permanently return to pricing at 80, which was shown to yield a profit of 3,600. Thus, 3600/[r(1 + r)] is the future payoff from having cheated. It is straightforward to show that expression 4.23 is equivalent to r ≤ 4.44, in which case firms can sustain a collusive price of 90 as long as the discount rate is below 4.44. Challenges to Collusion Effective collusion requires that firms solve two challenges: coordination and compliance. Coordination refers to firms having a mutual understanding that they are to collude in their setting of prices and quantities. It also involves achieving a common set of beliefs as to the exact prices and quantities, for there are potentially many collusive outcomes. For example, one can show that the strategies in equations 4.18 and 4.19 constitute an equilibrium not just for a collusive quantity of 15 but also for any quantity between 15 and 20. If firms are currently competing—and thereby each is producing 20—how do they coordinate to reduce supply? And among all those collusive outcomes that they could support, how do they settle on a particular one? The first question is a problem of coordination, while the latter throws in an element of bargaining. Having agreed to some prices and quantities, compliance is about abiding by those prices and quantities.

The analysis in the preceding section focused on compliance in making sure that it was in each colluding firm’s interests to price or produce appropriately. The purpose of this subsection is to enrich that analysis by considering other issues related to compliance and to discuss how firms may coordinate. To do so, we draw on actual cartel episodes. Coordination and bargaining When describing how firms, in practice, have solved the coordination problem, it is first useful to draw a distinction between two forms of collusion. Explicit collusion refers to when firms directly and expressly communicate about plans to coordinate their behavior in term of prices, quantities, allocation of customers, and the like. As described later in our discussion of antitrust law, the very act of communicating with such an intent is illegal in the United States and many other jurisdictions. Tacit collusion is when firms coordinate their behavior without expressly communicating. It encompasses many methods of communication, some of which we discuss below. The legality of tacit collusion is problematic; various types often elude prosecution, and other types are viewed as legal. Though the ultimate outcome on welfare may be the same —price is higher whether achieved through explicit or tacit collusion—they are treated far differently in the eyes of the law, but more on that later. The canonical form of explicit collusion involves natural language in either oral or written form with the intent to coordinate a shift from competition to collusion. In February 1982, Robert Crandall, as the CEO of American Airlines, decided that price competition was too intense in the airline industry, so he telephoned the CEO of Braniff Airlines, Howard Putnam. Unbeknownst to Crandall, Putnam was taping the conversation: Crandall: I think it’s dumb as hell for Christ’s sake, all right, to sit here and pound the **** out of each other and neither one of us making a ****ing dime. Putnam: Do you have a suggestion for me? Crandall: Yes. I have a suggestion for you. Raise your goddamn fares twenty percent. I’ll raise mine the next morning. You’ll make more money and I will too. Putnam: We can’t talk about pricing. Crandall: Oh bull ****, Howard. We can talk about any goddamn thing we want to talk about.19 Actually, Crandall cannot talk about “any goddamn thing,” at least if he wants to avoid breaking the law. If Putnam had accepted this invitation to collude, they would have both been guilty of violating Section 1 of the Sherman Act.20 A recent case involving sellers of barcodes instead used email to attempt to coordinate. In August 2013, Jacob Alifraghis of InstantUPCCodes.com sent this email to two of its competitors: All 3 of us—US, YOU and [Competitor A] need to match the price that [Competitor B] has.… I’d say that 48 hours would be an acceptable amount of time to get these price [increases] completed for all 3 of us. The thing is though, we all need to agree to do this or it won’t work.21

Once firms have agreed to collude, they must still settle on the collusive outcome with regard to prices and quantities. As is common for many cartels selling intermediate goods (such as chemicals, cement, and industrial equipment), members of the lysine cartel agreed to a price and an allocation of market demand, whereby each firm was issued a sales quota that it was not to exceed.22 Those annual sales quotas are reported in Table 4.1.

Table 4.1 Sales Quotas for the Lysine Cartel (tons per year)

Company                     Global Sales Quota    European Sales Quota
Ajinomoto                   73,500                34,000
Archer Daniels Midland      48,000                 5,000
Kyowa                       37,000                 8,000
Sewon                       20,500                13,500
Cheil                        6,000                 5,000

Source: Official Journal of the European Union, L 152/24, 7.6.2001, Case COMP/36.545/F3—Amino Acids, Decision of June 7, 2000.

From 1992 to 1995, the lysine cartel held ten formal meetings and conducted almost two dozen other face-to-face meetings.23 Why so many meetings? One reason was to monitor compliance with the agreement; another was to adjust collusive prices to changing market conditions, such as exchange rate fluctuations. For example, if the U.S. price is set at $1.16 per pound and the price in Japan is set at 145 yen per pound, then at an exchange rate of 125 yen to $1, the prices are equal (that is, 145/125 = 1.16). But now suppose the yen depreciates by 20 percent, so that the exchange rate is 150 yen to $1. If the stated lysine price in Japan were to remain at 145 yen, Japanese manufacturers would effectively be selling at 97 cents (= 145/150). This price difference could create all sorts of problems for maintaining collusion. Some U.S. buyers may choose to buy in Japan and transport the product to the United States, which would result in the allocation of sales across cartel members differing from the agreed-on allocation. Cartel members then needed to communicate and realign prices, and that is what they did.

More inventive forms of communication are used when firms engage in tacit collusion. While avoiding express communication is less effective in achieving the mutual understanding required to coordinate, these indirect means leave less of an evidentiary trail and may even be legal. Here are a few of the ways in which firms have tacitly colluded, all of which involve public announcements.

One approach is for a firm to publicly announce a change in corporate strategy that is really an invitation to collude. This type of communication occurred in the market for freestanding inserts, which are multipage booklets inserted into newspapers.24 The market had two primary suppliers, Valassis Communications and News America Marketing. During 2002–2004, Valassis engaged in a price war in an announced attempt to achieve a 50 percent market share. But then, during a July 2004 earnings call by Valassis’ CEO with stock market analysts, he described a strategy to increase price to $6.00 per insert page per thousand booklets, which involved abandoning its 50 percent market share goal and resuming the price war if News America continued to compete for Valassis’ customers. In short, Valassis was inviting News America to raise prices on the threat of a price war if the latter did not do so.

Another common device used to coordinate on higher prices is the use of advance price announcements. In the early 1990s, the airlines engaged in a practice of submitting fare changes to the Airline Tariff Publishing Company, which were then disseminated through computer reservation systems to rival airlines (and also consumers).25 A fare change with a future first ticket date was an announcement of a future price change, in that a consumer could not buy a ticket at that price until the first ticket date. If other airlines matched the announcement, then the proposed price increase was enacted. If other airlines did not match, then the proposed price increase was retracted. In this way, a firm could act as a price leader without incurring the cost of losing business until other firms followed (if they even did). In a consent decree with the Antitrust Division of the DOJ in 1994, the airlines agreed not to engage in this practice for ten years. The use of advance price announcements to collude has been an antitrust issue of increasing concern.
The European Commission (EC) investigated fifteen container liner shipping companies that were making regular public announcements of proposed price increases through press releases. The EC stated that “this practice may allow the companies to signal future price intentions to each other and may harm competition and customers by raising prices.”26 In an EC decision in February 2016, the carriers agreed to restrict their announcements for the next three years.

One final example of tacit collusion warrants mention, as it conveys the cleverness of colluding firms.27 The German government was conducting an auction of ten blocks of frequency spectrum (used for transmitting mobile telephone calls). There were two bidders: Mannesman and T-Mobile. Mannesman opened with a bid of 20 million deutsche mark per megahertz for blocks 1–5 and 18.18 million for blocks 6–10. Why 18.18? A rule of the auction was that any subsequent bid had to be at least 10 percent higher than the prevailing bid. Given that 10 percent higher than 18.18 is 20, was Mannesman signaling to T-Mobile that it should follow up with a bid of 20 million on blocks 6–10, so that each would receive five blocks at this low price? Well, that is exactly what happened. T-Mobile bid 20 million on blocks 6–10, and there were no subsequent bids. In most jurisdictions, that behavior is legal. If, however, the two CEOs had communicated this plan prior to the auction, then their behavior would have been illegal. The outcome is the same, but how firms coordinate matters when it comes to the law.

Even if firms have an effective means of communication at their disposal, such as express communication, challenges can remain when firms differ in their preferred prices and market allocation. Firms typically display vast differences in cost and capacity (among other traits), which translate into different preferences over the collusive outcome. To illustrate how cost differences can create difficulties in agreeing on a particular price, consider the following duopoly setting. Suppose that firms sell identical products, but firm 1 has higher cost than does firm 2. Assume that they agree to split the market equally. As shown in figure 4.9, each firm faces the same demand curve d(P), which is just half the market demand curve D(P).
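The conflict depicted in figure 4.9 can be made concrete with a small numerical sketch. The demand curve and cost figures below are hypothetical and chosen only for illustration; the point is that, when each firm serves half of a common demand curve, the high-cost firm prefers a higher price than the low-cost firm.

```python
# Preferred collusive prices for duopolists that split the market equally.
# The demand curve and costs are hypothetical, chosen only to illustrate
# the conflict in figure 4.9: each firm faces d(P) = D(P)/2 with D(P) = 100 - P.

def preferred_price(mc):
    # Each firm maximizes (P - mc) * (100 - P) / 2, giving the first-order
    # condition P = (100 + mc) / 2.
    return (100 + mc) / 2

p_low_cost = preferred_price(20)   # low-cost firm prefers P_L = 60
p_high_cost = preferred_price(40)  # high-cost firm prefers P_H = 70
print(p_low_cost, p_high_cost)     # 60.0 70.0 -- the two firms disagree on price
```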

Figure 4.9 Preferred Collusive Prices of Duopolists Sharing the Market Equally

The profit-maximizing price for the low-cost firm, PL, is determined by the intersection of marginal revenue and its marginal cost MCL. In contrast, the high-cost firm prefers the higher price of PH. Because both firms must set the same price (otherwise one of the firms would have zero demand), there is an obvious conflict. Firms must negotiate to resolve this difference. Thus, if firms have different cost functions, different capacities, or different products, the cartel has a bargaining problem that it must solve. This compounds the usual coordination problem described earlier.

As an example, consider a cartel among gasoline (or petrol) stations that occurred in several towns in Quebec.28 One source of differences among these retailers was their per unit cost of gasoline. Larger retailers, such as Bilodeau-Shell, were able to negotiate a lower wholesale price; and Ultramar was vertically integrated with its own refinery, which meant it had a lower cost. A second asymmetry was that some stations sold complementary products, such as Loblaws (a supermarket chain) and Couche-Tard (a chain of convenience stores). Both of these asymmetries implied different preferences on price. Those firms with lower cost or complementary products (in which case gasoline may be a loss leader) would prefer a collusive price not as high as that desired by other cartel members. Consistent with significant heterogeneity in preferences, it took a lot of negotiation to settle on a price. Thanks to wiretapping of their phones by the competition authority, we know that agreeing to a price increase took about sixty-five phone calls.

The bigger obstacle to an agreement concerns the allocation of supply among cartel members. While all firms can see the merit in raising price, they will differ as to how much each firm ought to constrain supply in order to achieve that higher price. That is a zero-sum game, as more output for one firm means less output for another. To flesh out this bargaining issue, let us return to the model used in an earlier subsection (A Theory of Collusion) but now suppose that the firms have different costs: firm 1 has a constant marginal cost of 35, and firm 2 has a constant marginal cost of 45. The best reply functions are now

q1 = 32.5 − 0.5q2,  (4.24)
q2 = 27.5 − 0.5q1,  (4.25)

which are depicted in figure 4.10. The noncollusive solution has production levels of 25 and 15 for firm 1 and firm 2, respectively, and with an inverse market demand curve of 100 − Q, the price is 60.
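These noncollusive quantities can be verified numerically. The following minimal Python sketch iterates the two best reply functions until they converge; the functional forms follow directly from the inverse demand 100 − Q and the marginal costs of 35 and 45.

```python
# Numerical check of the asymmetric Cournot duopoly:
# inverse demand P = 100 - q1 - q2, marginal costs of 35 (firm 1) and 45 (firm 2).

def best_reply_firm1(q2):
    # Firm 1 maximizes (100 - q1 - q2 - 35) * q1, giving q1 = 32.5 - 0.5 * q2.
    return 32.5 - 0.5 * q2

def best_reply_firm2(q1):
    # Firm 2 maximizes (100 - q1 - q2 - 45) * q2, giving q2 = 27.5 - 0.5 * q1.
    return 27.5 - 0.5 * q1

# Iterating the best replies converges to the Nash (Cournot) equilibrium.
q1, q2 = 20.0, 20.0  # arbitrary starting point
for _ in range(100):
    q1, q2 = best_reply_firm1(q2), best_reply_firm2(q1)

price = 100 - q1 - q2
print(round(q1, 2), round(q2, 2), round(price, 2))  # 25.0 15.0 60.0
```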

Figure 4.10 Selecting a Stable Collusive Outcome When Firms Have Different Costs

Suppose firms consider colluding using the general mechanism described in equations 4.18 and 4.19, though the quantities will now be different. Rather than specify the collusive quantities, let q̂1 denote what firm 1 is to produce and q̂2 denote what firm 2 is to produce.29 The punishment quantities are now 25 for firm 1 and 15 for firm 2 (instead of 20). For what values of q̂1 and q̂2 is this an equilibrium? The analogous inequality to expression 4.22 for firm 1 is

(1/r)(65 − q̂1 − q̂2)q̂1 ≥ [1/(1 + r)](32.5 − 0.5q̂2)² + [1/(r(1 + r))] · 625,  (4.26)

and for firm 2 it is

(1/r)(55 − q̂1 − q̂2)q̂2 ≥ [1/(1 + r)](27.5 − 0.5q̂1)² + [1/(r(1 + r))] · 225.  (4.27)

One can show that the set of values of q̂1 and q̂2 that satisfy expressions 4.26 and 4.27 occupies the shaded area in figure 4.10.30 That is, if q̂1 and q̂2 lie in the gray area, then firms can sustain those quantities as part of a collusive equilibrium. But which pair of quantities will firms settle on? To make the task easier, suppose firms have decided on raising the price from 60 to 70, which means that industry supply must be reduced from 40 to 30. The line running from the horizontal axis at q1 = 30 to the vertical axis at q2 = 30 represents all pairs of quantities for which the total supply is 30. The bold portion of that line is the part that intersects the gray area and thus represents all values of q̂1 and q̂2 that are sustainable and produce a price of 70. We have thus narrowed down the problem to firms choosing a pair of quantities on that bold line segment. As one moves from the upper left end to the lower right end of that segment, firm 1 is producing a larger share of those 30 units, something which is preferred by firm 1 but not by firm 2.

We have now reached the point of conflict among colluding firms. Many cartel meetings wrestle over this issue, and many a cartel has collapsed from an inability to reach a consensus. One common solution (which, as we shall see later, was used for the citric acid cartel) is historical precedent. Specifically, collusive market shares are set equal to recent (noncollusive) market shares. If, prior to forming a cartel, the noncollusive solution prevailed, then any pair of quantities on the line running from (0, 0) through (25, 15) preserves the historical market shares of 62.5 percent for firm 1 and 37.5 percent for firm 2. If firms have agreed on a collusive price of 70, the solution is then the intersection of that line with the segment along which total supply is 30, which yields q̂1 = 18.75 and q̂2 = 11.25.

The lysine cartel exemplifies the difficulty in achieving an agreement on market shares as well as the consequences of failing to do so. At a meeting in June 1992, the members agreed to raise price, which was initially successful in moving price from around 60 cents per pound to 98 cents per pound. However, they had failed to reach a consensus as to the allocation of supply, and this ultimately led to the breakdown of collusion. By June 1993, price undercutting had resulted in price falling to 62 cents. A stable collusive outcome was obtained only in October 1993, when firms hammered out an agreement on sales quotas (see table 4.1). Prices then steadily rose over the next six months to $1.15 and remained around that lofty level until June 1995, when the FBI raided the offices of Archer Daniels Midland, the U.S. member of the lysine cartel.

Compliance and stability

If we go back to the theory of collusion, what made it stable for firms to produce low quantities or charge high prices was that any cheating by a firm would be immediately punished. That threat of low future profits serves to discipline cartel members. A key presumption, however, is that a deviation is detected so that it can be punished. But for many cartels, monitoring compliance is a challenging task. Consider an intermediate good, such as citric acid, which is an organic chemical used as an additive in foods and detergents. While a supplier of citric acid might post a list price that is publicly observed, it is standard in markets with industrial customers for them to solicit discounts from sellers. This private negotiation creates a challenge when it comes to collusion, because a firm can cheat on the agreement by offering secret discounts.
Furthermore, it can be problematic to infer when cheating has occurred. If citric acid supplier Haarmann & Reimer finds its sales are low this quarter, is it because fellow cartel member Jungbunzlauer stole some business by offering secret discounts or because industrial buyers’ demands were weak this quarter? If the former, then Jungbunzlauer ought to be punished by the other cartel members.

The citric acid cartel’s solution to this monitoring problem is one that is common to many other intermediate goods cartels, including those in lysine, vitamins, and sorbates.31 The strategy is to agree on a price and a market allocation and then to monitor sales, not prices, comparing each firm’s actual sales with its allotted sales. For the citric acid cartel, each firm was assigned a market share based on its historical market share over the three preceding years. This allocation gave 34 percent of the market to Haarmann & Reimer, 27.5 percent to Archer Daniels Midland, 24 percent to Jungbunzlauer, and 14.5 percent to Hoffman-La Roche. To monitor the agreement, each firm submitted monthly sales volumes to a representative of Hoffman-La Roche, who would then compile these statistics and distribute the information among all cartel members. In the event that realized market shares did not match the allocated shares, a buyback system was put in place to even things out. Specifically, if a firm’s market share exceeded its quota, then it was required to buy output from a firm whose market share was below its quota. For example, in late 1991, it was determined that Haarmann & Reimer needed to buy 7,000 tons of citric acid from Archer Daniels Midland.

During the four years in which the cartel was operating, full-scale meetings occurred about every eight weeks, with some of them taking place during the meetings of the European Citric Acid Manufacturers’ Association. It is quite typical to use trade association meetings as a pretense for firms’ representatives to get together, though the cartel meeting is certainly not on the formal agenda of the association! At these meetings, the latest cartel sales report would be discussed—to monitor the agreement—as well as trends in demand and cost, which would result in a decision on the new cartel price. As shown in figure 4.11, the cartel was successful. The price of citric acid rose by 35 percent, though it took about two years of gradual price increases to reach that level.
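The arithmetic behind such a buyback is straightforward. The sketch below uses the cartel’s reported quota shares but entirely hypothetical annual sales figures; it compares each member’s realized sales with what its quota entitles it to and reports the excess that an over-quota firm would have to buy back.

```python
# A minimal sketch of a cartel quota "true-up": compare each member's realized
# sales with what its agreed market share entitles it to. The quota shares are
# those reported for the citric acid cartel; the sales figures are hypothetical.

quotas = {
    "Haarmann & Reimer": 0.34,
    "Archer Daniels Midland": 0.275,
    "Jungbunzlauer": 0.24,
    "Hoffman-La Roche": 0.145,
}

sales = {  # hypothetical realized annual sales (tons)
    "Haarmann & Reimer": 90_000,
    "Archer Daniels Midland": 60_000,
    "Jungbunzlauer": 55_000,
    "Hoffman-La Roche": 35_000,
}

total = sum(sales.values())
for firm, share in quotas.items():
    entitled = share * total           # sales consistent with the agreed share
    excess = sales[firm] - entitled    # if positive, the firm overshot its quota
    action = f"buy back {excess:,.0f} tons" if excess > 0 else "within quota"
    print(f"{firm}: sold {sales[firm]:,}, entitled to {entitled:,.0f} -> {action}")
```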

Figure 4.11 List and Contract Prices of Citric Acid, 1987–1997 Source: John M. Connor, Global Price Fixing: Our Customers Are the Enemy (Boston: Kluwer Academic, 2001), figure 5.1, p. 145.

The electrical equipment price-fixing cases of the early 1960s provide an example in which imperfect monitoring resulted in cheating and ultimately the breakdown of collusion. The four companies involved were General Electric, Westinghouse, Allis-Chalmers, and Federal Pacific. Their arrangement was to rotate the business on a fixed percentage basis among the four firms. Sealed bids were made so that each firm would submit the low bid a sufficient number of times to obtain the agreed-on market share. In 1957, the chairman of Florida Power & Light Company, Mac Smith, was making a large purchase of circuit breakers and was pressuring General Electric and Westinghouse for price breaks: “Westinghouse had proposed to Florida Power that it add all its circuit-breaker orders to its order for Westinghouse transformers. In return, Westinghouse would take 4 percent off circuit-breaker book and hide the discount in the transformer order.”32 The problem arose when Smith decided to split the circuit-breaker order between General Electric and Westinghouse. General Electric discovered the attempt by Westinghouse to cheat and then matched the discount. A series of price cuts ensued, with discounts ultimately reaching 60 percent.

Thus far, the source of cartel instability has been internal to the cartel, in that it involves cheating by cartel members. An external source of instability comes from increased supply and low prices from outside the cartel. This could be due to some suppliers not being members of the cartel or to entry into the industry after the cartel’s formation (often in response to the higher prices charged by the colluding firms). Returning to the citric acid cartel, note that in figure 4.11, the collusive price was declining toward the end of the cartel’s life. This decline was at least partially due to expanded supply from Chinese manufacturers, who were not members of the cartel. In response to the cartel members’ higher prices, the volume of Chinese imports to the United States rose by 90 percent, which depressed prices.

Expansion of noncartel supply was also the reason for the collapse of the vitamin C cartel of the early 1990s.33 Formed in 1991, it comprised the four largest producers, who in aggregate had 87 percent of global sales. Chinese manufacturers were again excluded from the cartel but had a market share of only 8 percent at the time. The cartel implemented a 30 percent increase in prices from 1990 to late 1993, in response to which it lost 29 percent of global sales to Chinese suppliers and other fringe producers. With the erosion of the cartel’s share of the global market, prices subsequently fell by 33 percent from the end of 1993 to 1995. The cartel’s last formal meeting occurred in August 1995. As an ironic twist, the former cartel members eventually exited the market, leaving the production of vitamin C to Chinese manufacturers, who then formed a cartel!

A German cement cartel that ran from 1991 to 2002 suffered from both internal and external instabilities.34 At the time, the transition to market economies of most Eastern European countries opened up the possibility of low-priced imports into Germany from cement manufacturers in such countries as the Czech Republic, Poland, and Slovakia. The cartel managed to control this potential source of instability by buying up many of those plants and bribing intermediaries not to source cement from outside Germany. While the members managed to deal effectively with an external source of instability, it was an internal one that ultimately caused the cartel’s collapse. As a result of cement demand from construction activities in East Germany falling below expectations, some cartel members experienced low sales.
With underutilized capacity, Readymix chose to lower its price to raise output and, after several such episodes, the cartel collapsed.

Case Studies of Collusion

Before moving on to examine how antitrust law and enforcement deal with this type of anticompetitive behavior, let us look at a few case studies of cartels. Each case study highlights an important point related to collusion. The railroad cartel of the nineteenth century exemplifies how cartels can routinely break down only to come back. Though the source of periodic reversions to competition is not known for sure, observed behavior appears to conform to what happens when there is imperfect monitoring of collusive agreements. Nasdaq market makers offer an example of tacit collusion and how it can emerge even in markets with many firms if ample price transparency allows for effective monitoring. The third case involves collusion among toy retailers in London that shows how an upstream firm—in this case, a toy manufacturer—can play a pivotal role in promoting coordination.

Railroads and Price Wars Due to Imperfect Monitoring

In 1879, the railroads entered a cartel agreement to stabilize price. At the time, cartels were legal, though a collusive arrangement between firms was not enforceable by the courts. This agreement created the Joint Executive Committee, which had the task of setting rail rates for eastbound freight shipments from Chicago to the Atlantic seaboard. Figure 4.12 shows the movement in the rate from 1880 to 1886. Most noticeable is that rates would periodically shift from being high to being low. To understand what might be driving this pattern, let us put forth a particular theory of collusion.35

Figure 4.12 Cartel Pricing of Rail Rates for Shipment of Grain from Chicago to the Atlantic Seaboard, 1880–1886

To set the stage, return to the Cournot model examined earlier in the chapter, where the inverse market demand for a duopoly is P = 100 − q1 − q2 (see equation 4.3). Now suppose that the price depends not only on firms’ supply decisions but also on some random shock. In the current context, this shock could be due to changes in income of those living in the eastern United States, which influences their demand for grain and, therefore, the demand for rail services to transport grain. The determination of price is now described as P = 100 − q1 − q2 + ε, where ε is the random shock. On average, ε equals zero, though it could be positive or negative in any given period.

Consider these two firms colluding by each agreeing to limit their supply to 15 units. On average, price should be 70. However, due to the random shock, price could be above or below 70. Here is the key assumption: Each firm does not observe the quantity choice of the other firm, does not observe the random shock, but does observe the price in the market. This informational setting creates a monitoring problem for collusion. Suppose the price in the market is, say, 60. Firm 1 knows it produced the agreed-on collusive quantity of 15, but did firm 2? Is the low price of 60 because firm 2 overproduced and sold 25 (and ε turned out to be zero), or did firm 2 comply and produce 15 but there was a negative shock to demand (specifically, ε = −10)? If it is the former, then firm 2 needs to be punished if cheating is to be dissuaded in the future.

A collusive scheme that has been shown to work in this informational setting is to have a few periods of intense competition in response to a low price, even without knowing whether a deviation occurred. Such a punishment scheme would serve to discipline firms, as they would know that overproduction increases the chance of a depressed price and thus of triggering a temporary price war. Such a threatened punishment in response to a low price could induce firms to abide by the collusive agreement. Thus the key prediction of this theory is that episodes of high prices will be interrupted by episodes of low prices. That is, firms will periodically swing from collusion to competition and then back to collusion.

A study investigated the theory’s relevance to the Joint Executive Committee by assessing whether periodic switches took place between collusion and competition.36 Returning to figure 4.12, note that price was relatively high in weeks 0–80 and 120–220. Breakdowns in collusion (“price wars”) seem to have occurred in weeks 80–120 and periodically over weeks 220–360. The dashed line below the grain rate indicates periods in which it was inferred that collusion broke down. Consistent with the theory, the railroad industry appears to have experienced collusive prices intermixed with periodic price wars. It is important to remember that competition produces stable low prices, not temporary low prices. Price wars are a by-product of collusion.

Nasdaq Market Makers and Price Transparency

On the New York Stock Exchange, stocks are traded by physically bringing together traders and having prices set through an auction. In contrast, the National Association of Securities Dealers Automated Quotation (Nasdaq) system operates in a decentralized manner through a network of market makers (also called dealers). At least two market makers are associated with each Nasdaq stock. Each market maker posts the price at which it is willing to buy a stock (known as the bid price) and the price at which it is willing to sell a stock (known as the ask price). Bid and ask prices are posted on a network and are observed by all market makers. The difference between the lowest ask price and the highest bid price quoted by dealers forms the “inside spread” and is how dealers earn a return on their services. For example, if the lowest ask price is $10 3/8—so that the next dealer sale will be at $10 3/8—and the highest bid price is $10—so that the next dealer purchase will be at $10—then the spread is 3/8. Dealers make a gross return of 3/8 on each share transacted. Each dealer competes for investors’ buy and sell orders by posting lower ask prices and higher bid prices than those of competing dealers.
More intense competition should result in more compressed spreads. By the rules of Nasdaq (prior to June 1997), the minimum spread was 1/8 for stocks whose bid price exceeded $10; that is, Nasdaq required price fractions for bid and ask quotes to be multiples of 1/8. For example, quotes could be $10 1/8 and $10 1/4 but not $10 3/16.

While engaging in some academic research on Nasdaq prices, William Christie and Paul Schultz noticed something odd (or, as we will see, not odd enough).37 Many of the Nasdaq stocks they were examining were rarely quoted in prices with odd eighths. For these stocks, almost all quotes were in even eighths, for example, $10 and $10 1/4 but not $10 1/8. Examining the hundred most actively traded Nasdaq stocks in 1991, they found that the market makers for seventy of them rarely used odd-eighth quotes. The affected markets included such highly traded stocks as Apple, Intel, and Microsoft, whose stocks often attracted up to sixty market makers at any one time.

What could cause such an unusual pattern of behavior? One possibility is that it was simply a product of the transactions technology. Perhaps dealers had to earn a spread of at least 1/4 to cover their costs and, for whatever reason, an arbitrary rule of dealing in quarters became the norm. Further study raised doubts about this explanation. Comparing these hundred Nasdaq companies with a hundred companies trading on the New York and American Stock Exchanges that were of similar price and market capitalization (and where odd-eighth price fractions were used with about the same frequency as even eighths), Christie and Schultz found the average spread on the Nasdaq stocks to be significantly higher.

This finding suggested another hypothesis: The dealers of these stocks had developed a collusive norm of avoiding odd-eighth quotes with the intent of imposing a minimum inside spread of 1/4. Evidence consistent with this hypothesis was the response of spreads to the public release of this study in the Los Angeles Times on May 26, 1994. On the very next day, the number of market makers of Microsoft who exclusively used even eighths dropped from forty-one to one.38 Figure 4.13 shows the precipitous drop in the average inside spread. Although it had been consistently above 1/4, it was now well below it. One could argue that market makers, recognizing that their collusive practices had been revealed, chose to return to normal competitive behavior out of fear of adding to the damages they would already be likely to pay.
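To see why avoiding odd eighths matters, note that if every quote is an even eighth, the smallest possible positive inside spread doubles from 1/8 to 1/4. The minimal sketch below uses hypothetical quotes purely for illustration.

```python
# Inside spreads with and without odd-eighth quotes (hypothetical quotes).
from fractions import Fraction

def inside_spread(bids, asks):
    # Inside spread = lowest ask minus highest bid.
    return min(asks) - max(bids)

# With quotes permitted on any eighth, dealers can compete the spread down to 1/8.
bids = [Fraction(10), Fraction(81, 8)]        # $10 and $10 1/8
asks = [Fraction(82, 8), Fraction(83, 8)]     # $10 1/4 and $10 3/8
print(inside_spread(bids, asks))              # 1/8

# If all dealers avoid odd eighths, the tightest feasible spread is 1/4.
bids = [Fraction(10)]                         # $10
asks = [Fraction(41, 4)]                      # $10 1/4
print(inside_spread(bids, asks))              # 1/4
```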

Figure 4.13 Daily Average Inside Spreads for Microsoft, January 1, 1993–July 29, 1994 Source: William G. Christie, Jeffrey Harris, and Paul H. Schultz, “Why Did Nasdaq Market Makers Stop Avoiding Odd-Eighth Quotes?” Journal of Finance 49 (December 1994): 1841–1860.

This pattern of pricing is consistent with tacit collusion by Nasdaq market makers, but no evidence indicates that they explicitly communicated about avoiding odd-eighth quotes. Are industry conditions ripe for collusion? It is not clear. For many of these markets, the number of dealers is quite large. The average number of market makers for a Nasdaq security was around ten, and some of the large, most actively traded stocks had in excess of sixty. Such a large number of firms could make collusion quite difficult to sustain. Compounding this factor is the ease of entry, which makes the arrival of a new market maker another source of instability for a collusive arrangement. However, a feature conducive to collusion is the high level of price transparency, which allowed for near-perfect monitoring of firm behavior. Bid and ask quotes are instantly disseminated to all dealers, so that any deviation from the collusive norm would be detected within minutes and, in principle, punished. Also, given the perfect substitutability of the service provided (a buyer or seller of shares does not even know the identity of the market maker who makes the trade), punishment could be severe if the other dealers decided to price away a deviator’s business.

To appreciate the tradeoff between the number of firms and the rapidity of the response to a deviation, consider the following model. Assume n firms set prices and, as in the case of Nasdaq, they offer a homogeneous service. Market demand is D(p), and all firms have a common marginal cost c. As explained earlier in the chapter, Nash equilibrium has all firms pricing at cost. To examine collusion in this setting, suppose firms interact in an infinitely repeated setting and are interested in sustaining a collusive price p′ using a trigger strategy. Each firm prices at p′ as long as all firms have priced at p′ in the past and otherwise prices at c. This trigger strategy is a Nash equilibrium if the following condition holds:

(1/r)[(p′ − c)D(p′)/n] ≥ [1/(1 + r)](p′ − c)D(p′).  (4.28)

On the left-hand side of expression 4.28 is the payoff from an infinite stream of collusive profits, (p′ − c)D(p′)/n, which has the n firms equally sharing demand. On the right-hand side is the highest payoff that a firm could receive by cheating. The optimal cheating strategy is to price just below p′ and take all of the business (recall that the service provided by all firms is the same). That yields profits of approximately (p′ − c)D(p′). Thereafter, profits are zero, as firms revert to competitive prices. Simplifying and rearranging this expression, we find r ≤ 1/(n − 1). This relationship is plotted in figure 4.14. If firms’ discount rate is below the curve, then firms value future profits enough to sustain collusion; otherwise, they cannot. Returning to the Nasdaq case, if n = 60, then the condition becomes r ≤ 1/59, so r has to be really low. A really low value for r means that profits in the next period are valued almost as much as profits in the current period. In the case of Nasdaq, the length of the period is the time it takes for firms to change their bid and ask prices in response to a deviation, which is actually just a matter of minutes. Hence, r is indeed really small, as profits in 15 minutes are valued almost the same as profits right now. It is then not surprising that 60 dealers could sustain collusion.
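The threshold r ≤ 1/(n − 1) falls quickly as the number of firms grows, which is what figure 4.14 plots. A minimal sketch of the calculation:

```python
# Critical discount rate below which n firms can sustain the collusive price
# with the trigger strategy of expression 4.28: r <= 1/(n - 1).

def critical_discount_rate(n):
    return 1.0 / (n - 1)

for n in [2, 5, 10, 60]:
    print(f"n = {n:2d}: collusion sustainable if r <= {critical_discount_rate(n):.4f}")

# With n = 60 the threshold is about 0.017 per period. If a "period" is the few
# minutes it takes rival dealers to observe and match a quote change, the
# per-period discount rate is essentially zero, so the condition is easily met.
```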

Figure 4.14 Sustaining Collusion: Number of Firms and Discount Rate

As a closing remark, a class action suit was filed in 1994 by investors alleging that thirty-seven Nasdaq dealers colluded to keep spreads artificially high. In December 1997, thirty-six of those dealers agreed to an out-of-court settlement of a little more than $1 billion. In addition, in response to a case brought by the DOJ, dealers agreed to two years of government monitoring.39

Toy Stores and Hub-and-Spoke Collusion

In the canonical cartel, competing firms expressly communicate with one another about what prices to charge. Contrary to that structure, some cartels have involved competitors communicating through another firm in the production chain. For example, the Federal Trade Commission pursued a case against Toys “R” Us for coordinating collusion among toy manufacturers.40 The manufacturers never directly communicated but did communicate through Toys “R” Us, often at the annual Toy Fair. Figure 4.15 depicts the communication flows. This form of collusion is known as hub-and-spoke, where the “hub” is the party that communicates with all of the other parties, which are the “spokes.”

Figure 4.15 Hub-and-Spoke Collusion in the Toy Manufacturing and Retailing Industries

The case that interests us involved the toy manufacturer Hasbro, which orchestrated a price-fixing agreement among London toy retailers Argos and Littlewoods.41 Starting in 1999, Ian Thomson and Neil Wilson of Hasbro introduced a “pricing initiative” that raised retail margins by coordinating retailers to charge a recommended retail price. Account managers for Hasbro would undertake periodic audits of toy retailers to ensure they were pricing at the (higher) recommended prices. All information about retailers’ pricing intentions went through Hasbro, and no evidence suggested direct communication between retailers. While hub-and-spoke collusion does avoid any evidence that competitors directly communicated, this form of communication is still unlawful. Indeed, that fact was recognized by Hasbro Sales Director Mike Brighty, when, after learning of this pricing initiative, he sent the following email to Ian Thomson:

Ian.… This is a great initiative that you and Neil have instigated!!!!!!!!! However, a word to the wise, never ever put anything in writing, its highly illegal and it could bite you right in the arse!!!!! suggest you phone Lesley and tell her to trash? Talk to Dave. Mike42

If all cartelists were that dumb, we wouldn’t have to worry about collusion!

Antitrust Law and Enforcement with Respect to Price Fixing

The previous section described what it means to collude, but what does it mean to unlawfully collude? That is the question to be addressed here as we review jurisprudence regarding section 1 of the Sherman Act. This discussion involves an examination of liability (that is, what is illegal?) and evidentiary standards (that is, what does it take to prove illegality?). After reviewing the law, we turn to an examination of the penalties that can be levied on companies found guilty of price fixing. While the focus is on the United States, some cases and forms of penalties from the European Union and other jurisdictions will also be covered.43

Fundamental Features of the Law

Section 1 of the Sherman Act outlaws “every contract, combination … or conspiracy in restraint of trade.” If interpreted literally, this language would make nearly every type of business agreement or contract illegal.

Two lawyers who decide to form a partnership would be illegally in restraint of trade, since they eliminate competition between themselves. However, early on in section 1 jurisprudence, it was settled that only unreasonable restraints of trade were prohibited. This view persists to this day: “although the Sherman Act, by its term, prohibits every agreement ‘in restraint of trade,’ this Court has long recognized that Congress intended to outlaw any unreasonable restraints.”44

The next question to ask is: What is unreasonable? In Trans-Missouri Freight (1897),45 the railroads sought to argue that they coordinated on “reasonable” rates, but the Court made clear that this does not make the restraint of trade reasonable. The Court noted that “the subject of what is a reasonable rate is attended with great uncertainty” and wisely chose not to venture into what is ultimately a regulatory issue. Thus, the price levels are not relevant for determining whether the restraint is unreasonable. An appropriately sweeping notion was put forth in Addyston Pipe (1899),46 which professed that any restraint intended “to avoid the competition which it has always been the policy of the common law to foster” is illegal. The Court went on to say: “But where the sole object of both parties in making the contract as expressed therein is merely to restrain competition, and enhance or maintain prices, it would seem that there was nothing to justify or excuse the restraint.”

Recall from chapter 3 that an anticompetitive act can be judged by the per se rule—so it is illegal regardless of its effects—or the rule of reason—so its illegality is dependent on its effect (which, in the United States, would be judged by the consumer welfare standard). The appropriateness of the per se rule for price-fixing cases was firmly established in Trenton Potteries (1927).47 Some twenty-three manufacturers of sanitary pottery belonged to an association that attempted to fix the prices of their products. The defendants had roughly 82 percent of the market. In its decision, the Court concluded:

The aim and result of every price-fixing agreement, if effective, is the elimination of one form of competition. The power to fix prices, whether reasonably exercised or not, involves power to control the market and to fix arbitrary and unreasonable prices. The reasonable price fixed today may through economic and business changes become the unreasonable price of tomorrow.… Agreements which create such potential power may well be held to be in themselves unreasonable or unlawful restraints, without the necessity of minute inquiry whether a particular price is reasonable or unreasonable as fixed and without placing on the Government in enforcing the Sherman Law the burden of ascertaining from day to day whether it has become unreasonable through the mere variation of economic conditions.

Though Appalachian Coals (1933) departed somewhat from the per se rule with regard to price fixing, that ruling proved to be an aberration.48 Socony-Vacuum (1940) reaffirmed the per se rule, and courts have not veered from it since.49 Socony-Vacuum involved independent oil refiners dumping gasoline at very low prices. During 1935–1936, more than a dozen major oil refiners, including Socony-Vacuum (later known as Mobil, which then merged with Exxon), agreed to a coordinated purchasing program to keep prices up. Each major refiner selected “dancing partners” (that is, independent refiners) and was responsible for buying the surplus gasoline placed on the market by its partners. On appeal, the Supreme Court sustained the verdict of guilty and stated that “price-fixing agreements are unlawful per se under the Sherman Act and that no showing of so-called competitive abuses or evils which those agreements were designed to eliminate may be interpreted as a defense.” Since that decision, the per se rule toward price fixing has been the most unambiguous antitrust rule of law.

Nevertheless, every rule has exceptions. With regard to section 1, if a market would not exist but for an agreement, then it is not per se illegal and could well be legal. Exemplifying this principle is BMI (1979).50 The origin of the case lies in a fundamental problem faced by a radio station. The station wants to have access to a large collection of music, but to individually negotiate with copyright holders for every recording would be excessively costly. To solve this problem, the American Society of Composers, Authors and Publishers (ASCAP) and later Broadcast Music, Inc. (BMI) were formed. These organizations have copyright holders as members, offer a blanket license for all their music at a fixed fee to a radio station (or anyone else), and then share the revenues among their members. The Court of Appeals viewed these organizations as engaging in illegal price fixing. The Supreme Court took a different view. It saw that the primary purpose of BMI was not to stifle competition but rather to increase efficiency by offering a blanket license that “was an obvious necessity if the thousands of individual negotiations, a virtual impossibility, were to be avoided.” Prohibiting this practice would not mean more competition among copyright holders for their music to be played on radio stations but rather little or no music being played on radio stations. The market as we know it would not exist but for this practice. The Court wisely concluded that it is better to have access to a service at a high price than not to have access at all.

The Concept of an Agreement

If firms in a market expressly communicate for the purpose of coordinating to suppress competition, that is a per se violation of the law. If firms’ prices suggest collusion but evidence of express communication is lacking (or the documented communication is less than express), then the per se rule does not apply. What exactly is illegal, and what is sufficient to prove illegality? Toward this end, the courts have effectively replaced the reference to “contracts, combinations, and conspiracies” that is in the law with the concept of an “agreement.” It is now understood that firms are in violation of section 1 when an agreement exists among competitors to restrain competition. Though the term “agreement,” which is now so integral to defining liability, does not appear in the Sherman Act, many jurisdictions that later instituted their competition laws have chosen to use it. For example, Article 101(1) of the Treaty on the Functioning of the European Union states: “The following shall be prohibited: all agreements between undertakings, decisions by associations of undertakings and concerted practices which … have as their object or effect the prevention, restriction or distortion of competition.”

So, what is an “agreement”? The Supreme Court has defined an agreement as a “unity of purpose or a common design and understanding, or a meeting of minds”51 or “a conscious commitment to a common scheme designed to achieve an unlawful objective.”52 The General Court of the European Union has followed a similar path, defining an agreement as requiring “joint intention”53 or a “concurrence of wills.”54 In sum, firms must have a mutual understanding that they are not to compete:

By operationalizing the idea of an agreement, antitrust law clarified that the idea of an agreement describes a process that firms engage in, not merely the outcome that they reach. Not every parallel pricing outcome constitutes an agreement because not every such outcome was reached through the process to which the law objects: a negotiation that concludes when the firms convey mutual assurances that the understanding they reached will be carried out.55

Four cases between 1939 and 1954 served to define the boundaries for what it means to have an agreement that violates section 1 of the Sherman Act. Interstate Circuit (1939) involved the manager of a motion picture chain in Texas and eight motion picture distributors.56 The exhibitor sent identical letters to the distributors, naming all eight as addressees, and demanding certain restrictions. For example, the manager demanded that the distributors not release their first-run films to theaters charging less than 25 cents admission. After the letters were mailed, the distributors did exactly what the exhibitor had demanded. However, there was no evidence of meetings or other communications among the distributors. The parallel behavior of the distributors, plus the letter, was sufficient for the Court to find illegal conspiracy:

It taxes credulity to believe that the several distributors would, in the circumstances, have accepted and put into operation with substantial unanimity such far-reaching changes in their business methods without some understanding that all were to join, and we reject as beyond the range of probability that it was the result of mere chance.

In American Tobacco (1946), the Court made clear that “no formal agreement is necessary to constitute an unlawful conspiracy.”57 It was sufficient to establish that firms had a “meeting of minds,” and no express exchange of assurances to coordinate on higher prices was needed. Two years later, this perspective was affirmed in Paramount Pictures (1948), when the Court said “it is not necessary to find an express agreement in order to find a conspiracy. It is enough that a concert of action is contemplated and that the defendants conformed to the arrangement.”58

Those cases served to expand the set of practices that were deemed unlawful. Theatre Enterprises (1954) constrained that boundary by specifying practices that were lawful.59 Conscious parallelism is a term used to refer to parallel behavior among firms without any evidence of communication among them. In Theatre Enterprises, the Court made clear that conscious parallelism falls into the “legal” category:

This court has never held that proof of parallel business behavior conclusively establishes agreement or, phrased differently, that such behavior itself constitutes a Sherman Act offense. Circumstantial evidence of consciously parallel behavior may have made heavy inroads into the traditional judicial attitude toward conspiracy; but “conscious parallelism” has not yet read conspiracy out of the Sherman Act entirely.

The view that more is needed than firms acting in unison also prevails in the European Union and was prominent in a case concerning the wood pulp industry. In 1984, the EC found forty wood pulp producers guilty of violating (what is now) Article 101, only to have its decision later annulled by the European Court of Justice.60 The EC’s case rested on parallel behavior among wood pulp producers involving near-simultaneous quarterly price announcements that were identical. The Court commented that “parallel behavior cannot be regarded as furnishing proof of concertation [that is, coordinated actions] unless concertation constitutes the only plausible explanation for such conduct.” As defendants offered expert reports testifying that this parallel behavior could well be the outcome of recognized interdependence among competitors, the Court ruled the evidence inadequate for a conviction.

Parallelism Plus

Out of these decisions surrounding conscious parallelism came the evidentiary standard of “parallelism plus.” The courts have decided that it is insufficient to show that firms recognized their interdependence and engaged in similar behavior; for example, parallel price increases are not enough. For, as Judge Richard Posner has noted: “section 1 of the Sherman Act … does not require sellers to compete; it just forbids their agreeing or conspiring not to compete.”61 There must be more than parallel behavior. This additional evidence has been referred to as the “plus factor.” A plus factor is an action or outcome that is largely inconsistent with firms acting independently and largely consistent with explicit coordination among firms. In Interstate Circuit, the plus factor was the letter circulated among the distributors. Parallel pricing behavior alone would have been insufficient under the parallelism-plus doctrine; also needed was the plus factor of the written communication that would cause coordinated actions.

Another case involved realtors accused of fixing real estate commissions.62 Realtor Jack Foley hosted a dinner party where the guests were nine competing realtors. At this party, Foley announced that his firm was raising its commission rate from 6 to 7 percent. In the following months, all defendants adopted a 7 percent rate. While the parallel movement in commission rates would have been insufficient for a conviction, the plus factor of the announcement at an industry gathering proved sufficient for a section 1 conviction.

A relevant case also arose in the production of cardboard cartons (Container Corp., 1969).63 There was evidence of information sharing among eighteen companies that in aggregate supplied 90 percent of sales. No centralized information exchange took place, but companies informed one another about prices currently or last quoted to particular customers. Prices were subsequently matched by competitors. The Court concluded that the firms had agreed to exchange price information, because, absent anticipation of reciprocity, such exchange would not occur. An agreement to share price information, as opposed to an agreement about the prices to charge, was not found to be per se illegal. But when this information-sharing agreement was buttressed with evidence that it had served to raise prices, the Court concluded that firms had violated the Sherman Act:

The exchange of prices made it possible for individual defendants confidently to name a price equal to that which their competitors were asking. The obvious effect was to stabilize prices by joint arrangement.… I cannot see that we would be justified in reaching any conclusion other than that defendants’ tacit agreement to exchange information about current prices to specific customers did in fact substantially limit the amount of price competition.

Legal Procedures

Consider an industrial buyer who has noticed a series of parallel price increases by the suppliers of some input. As the buyer is unaware of any common cost or demand changes that would rationalize the rising prices, it suspects the input suppliers are colluding and decides to pursue private litigation against them. As we know from our previous discussion, evidence of parallel price movements is not sufficient to convict them of a section 1 violation. However, at this stage, the buyer is simply stating a claim to the court for the purpose of engaging in discovery. Discovery is a pretrial process by which a plaintiff can acquire documents from defendants. In the context of a section 1 case, a plaintiff is hoping that discovery will reveal the “smoking gun,” such as emails between defendants discussing a coordinated price increase.

Until a U.S. Supreme Court ruling in 2007, it was relatively easy for the court to permit a plaintiff to state a claim so that it could gain access to potential evidence through discovery. The Twombly (2007) ruling dramatically changed that situation, as it raised the standard for pleading a case.64 As applied to section 1 cases, a complaint must now include enough facts to make it plausible, not simply possible or conceivable, that the plaintiff will be able to prove that firms had an agreement. It is generally insufficient to state a claim based on parallel price movements; some evidence must suggest that firms agreed not to compete. This heightened standard has thwarted many cases because judges ruled that the complaints lacked a sufficiently plausible case of illegal collusion.

The justification for raising the pleading threshold is a worthy one. Discovery is a costly process for defendants, as they are typically asked to search, organize, and deliver massive volumes of documents. Avoiding such a costly process for dubious cases is surely desirable. However, Twombly can create a bit of a catch-22.65 Firms engaging in express communication will seek to hide those communications, and only discovery may be able to reveal evidence of them. If discovery is not allowed until there is some evidence that such communication exists, then the plaintiff is put in a challenging situation.

However, let us suppose the case surpasses the Twombly hurdle, so it goes to trial. Before the trial comes to a conclusion with a verdict, either the plaintiffs or the defendants can put forth a motion for the judge to issue a summary judgment. In response to such a motion, a judge can decide the case when the opposing parties agree to the facts and, on the basis of those facts, one side is entitled to judgment as a matter of law. This motion is often used by defendants when they feel that the plaintiffs have failed to deliver evidence that firms agreed not to compete:

Conduct as consistent with permissible competition as with illegal conspiracy does not, standing alone, support an inference of antitrust conspiracy.… To survive a motion for summary judgment or for a directed verdict, a plaintiff … must present evidence that “tends to exclude the possibility” that the alleged conspirators acted independently.66

It is not enough that coordinated behavior is more likely than independent behavior to explain the evidence; in addition, the latter must not be a credible explanation. Otherwise, the defendants will receive summary judgment in their favor. It is because of such high legal standards for guilt that type II (“false positive”) errors for section 1 convictions appear to be low in the U.S. legal system.

A recent case illustrates the principles at work with regard to the standards for a plaintiff to plead a case and to survive summary judgment.67 Prior to cellphone companies offering unlimited text messages as part of a monthly plan, they charged for each text message sent. During the transition between these two types of plans, both were offered, and private litigation accused AT&T, T-Mobile, Sprint, and Verizon of coordinating their pricing on pay-per-use text messages during 2005–2008. The plaintiffs’ claim survived a Twombly challenge by the defendants on the following facts. First, firms enacted similar price increases in the face of declining marginal cost. Prior to August 2005, Sprint and AT&T charged 10 cents per message, Verizon charged 2 cents for incoming and 10 cents for outgoing, and T-Mobile charged 5 cents. By March 2006, Verizon and T-Mobile had matched the price of 10 cents of Sprint and AT&T. From October 2006 to June 2007, all four raised price to 15 cents per message and then again to 20 cents from October 2007 to August 2008. Second, the plaintiffs argued that the industry structure was conducive to collusion, as the four defendants had 90 percent of U.S. text messaging services, offered identical services, and were able to monitor competitors. Third, high-level officers were alleged to have used trade association meetings to coordinate on price. The plaintiffs argued that “the most likely cause” of these price increases “was collusion by the defendants,” because “the price changes differed from the price trends [which] cannot be explained by unilateral profit-maximizing decisions by the carriers.”

The plaintiffs were allowed to engage in pretrial discovery. They needed to find evidence that the defendants had explicitly agreed to raise prices rather than having done so tacitly by, for example, “following the leader” or “consciously parallel” pricing. Their best evidence was a pair of emails between executives of T-Mobile:

Email #1: “Gotta tell you but my gut says raising messaging pricing again is nothing more than a price gouge on consumers. I would guess that consumer advocates groups are going to come after us at some point. I know the other guys are doing it but that doesn’t mean we have to follow.”

Email #2: “At the end of the day we know there is no higher cost associated with messaging. The move [the latest price increase by T-Mobile] was colusive [sic] and opportunistic.”68

The defendants moved for summary judgment and won. The plaintiffs appealed the decision, and the Seventh Circuit Court of Appeals affirmed the lower court’s ruling. Both courts found the emails did not exclude the possibility of firms acting independently when they raised prices. In particular, the evidence suggested that customers who continued to use pay-per-use messages texted very little and thus would probably not switch even if their current cellphone company’s prices were higher than other companies’ prices. With such price-inelastic demand, it made good business sense for a company to raise price, and such a price hike did not require a coordinated move to make it profitable.

Enforcement Policy

Having reviewed some enforcement issues in chapter 3, we focus here on corporate penalties associated with price fixing and on an enforcement tool—leniency programs—that has been widely adopted by many countries in recent years.

The prosecution of a suspected cartel is generally done by the competition authority; for example, the Antitrust Division of the DOJ in the United States and the Directorate-General for Competition of the EC in the European Union. On achieving a conviction, the competition authority can levy fines on the corporation, and in some countries (like the United States), it can impose fines and prison sentences on individuals. Complementing public enforcement is private enforcement, which allows customers harmed by the cartel to sue for damages in court. Private litigation has a long history in the United States, Canada, and a few other countries and is becoming more active elsewhere, such as in the European Union. Section 4 of the Clayton Act specifies that any individual “who shall be injured in his business or property by reason of anything forbidden in the antitrust laws” may sue to recover “threefold the damages by him sustained.” Under this provision, known as treble damages, victims collect three times their calculated harm. If the DOJ obtains a conviction, then private litigation will almost certainly ensue. But many cases pursued by customers are not prosecuted by the government. These cases often involve suspected tacit, rather than explicit, collusion.

Damages

Before delving into the specifics of how fines and damages are calculated, let us think about how one would want to set them if the intent is to deter collusion. For this purpose, consider a simple model in which two firms may collude and, if they do, may be discovered. Each firm has a constant marginal cost of c. If they collude, then the price is p′ (with resulting industry quantity Q′), so that a firm’s collusive profit is π′ = (p′ − c)Q′/2, because each produces half of Q′. If firms do not collude, then the price is p* (which is less than p′), with industry supply of Q* and firm profit of π* = (p* − c)Q*/2. Suppose that the probability of the cartel being discovered and successfully prosecuted is z, where 0 < z < 1. If the penalty for each firm is X, then the expected profit from colluding is π′ − zX, where the expected penalty is zX. Collusion is then deterred if the profit from not colluding exceeds that from colluding, which is the condition

π* ≥ π′ − zX.
πn > πm − zX.
Rearranging, it is equivalent to
X > (πm − πn)/z,
so that deterrence is achieved when the penalty is sufficiently high. If we interpret the “damage” as the increase in profit due to collusion, πm − πn, deterrence then requires the penalty to be a multiple of damages, where this multiplicative factor is at least 1/z. For example, when the probability of the cartel being discovered and successfully prosecuted is only 10 percent, then the multiple must be at least 10. Since the cartel will only have to pay a penalty with some probability, if it were forced only to return its ill-gotten profit, then it would always be optimal to form a cartel. Only by making the penalty sufficiently large relative to the gain in profit can collusion be deterred. This is a rationale for the Clayton Act setting the damage multiple well above one.69 In practice, reality departs considerably from the above calculus, particularly in how damages are calculated. Standard antitrust practice is to calculate damages not as the gain in profit from colluding but rather as the additional revenue on the units sold, (pm − pn)Qm. The quantity pn is known as the “but for price,” which is the price that would have occurred had it not been for collusion, while pm − pn is referred to as the “overcharge.”

To see how this formula differs from the gain in industry profit from collusion, consider figure 4.16. The gain to firms from colluding is rectangle A (which measures the higher profit earned on the units produced while colluding) minus rectangle B (which measures the forgone profit from producing fewer units). However, damages are calculated to include only rectangle A and thus overestimate the gain to firms by rectangle B. Looking at this from the perspective of consumers, the cost to consumers from collusion is rectangle A plus triangle C, which measures the forgone surplus from consuming fewer units. Interestingly, we then have that calculated damages exceed the amount by which firms benefit from collusion but fall short of how much consumers are harmed.

Figure 4.16 Calculating Damages in a Price-Fixing Case
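To make the geometry of figure 4.16 concrete, the short sketch below works through a hypothetical linear-demand example. The marginal cost, quantities, and the collusive and but-for prices are illustrative assumptions, not numbers from any case in this chapter. It computes rectangles A and B and triangle C, checks the claim that calculated damages exceed the firms' gain but fall short of consumers' harm, and applies the 1/z deterrence multiple from the simple model above.

```python
# A numerical illustration of figure 4.16. The marginal cost, prices, and
# quantities below are illustrative assumptions, not values from the text.

def damages_example():
    c = 20.0                  # constant marginal cost
    p_n, Q_n = 40.0, 60.0     # "but for" price and quantity (no collusion)
    p_m, Q_m = 60.0, 40.0     # collusive price and quantity

    rect_A = (p_m - p_n) * Q_m               # extra profit on units still sold
    rect_B = (p_n - c) * (Q_n - Q_m)         # profit forgone on units no longer sold
    tri_C = 0.5 * (p_m - p_n) * (Q_n - Q_m)  # consumer surplus lost on forgone units

    overcharge_damages = rect_A           # standard damages: overcharge x quantity sold
    profit_gain = rect_A - rect_B         # what the firms actually gain
    consumer_harm = rect_A + tri_C        # what consumers actually lose

    # Deterrence: with detection probability z, the expected penalty z*X must
    # exceed the gain from colluding, so the penalty X must be at least gain/z.
    z = 0.10
    min_deterring_penalty = profit_gain / z

    print(f"overcharge damages (A)   = {overcharge_damages:.0f}")     # 800
    print(f"profit gain (A - B)      = {profit_gain:.0f}")            # 400
    print(f"consumer harm (A + C)    = {consumer_harm:.0f}")          # 1000
    print(f"penalty needed to deter  = {min_deterring_penalty:.0f}")  # 4000

damages_example()
```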

Damages can compensate those customers who have been harmed but also penalize cartels and thereby serve as a deterrent to collusion. In the European Union, their primary purpose is compensation, while the United States puts more emphasis on deterrence. In the European Union, customers can only sue for single damages which, while sufficient to compensate, is less effective at deterrence than treble damages in the United States. In the European Union, all customers who are harmed can sue. In the United States, only direct purchasers can sue (at least in federal court). Even if the higher prices from collusion are passed from, say, smartphone manufacturers (who bought LCD screens from a cartel of LCD screen manufacturers) to retail stores and final consumers, only the smartphone manufacturers as direct purchasers can sue, and they can sue for the entire amount of damages created by the cartel. This policy promotes deterrence by incentivizing the buyers with the best information to monitor and litigate, and it avoids the complications of

apportioning damages between direct and indirect purchasers.

Fines

For the United States and the European Union, tables 4.2 and 4.3 report the highest fines collected (at the cartel level). Cartels have had to pay hundreds of millions and even billions of dollars (or euros) in fines, in addition to customer damages, which are often of a similar or greater magnitude. Of course, the extent to which these fines are “large” or “small” depends on how they compare to the additional profits from collusion.

Table 4.2 Highest Cartel Fines, United States

Industry                                                Year    Total Fines (millions)
LCD panels                                              2009    $1,241.00
Air transportation (cargo)                              2008    $1,198.81
Vitamins                                                1999    $899.50
Automobile parts                                        2012    $824.00
Air transportation (cargo and passenger)                2007    $650.00
DRAM                                                    2005    $645.00
Antivibration rubber products for automobiles           2014    $556.00
Automotive wire harnesses and related products          2012    $490.35
Graphite electrodes                                     1998    $411.50
Automotive wire harnesses and electronic components     2014    $325.00

Source: U.S. Department of Justice, Sherman Act Violations Yielding a Corporate Fine of $10 Million or More, www.justice.gov/atr/sherman-act-violations-yielding-corporate-fine-10-million-or-more (accessed on August 1, 2016).
Note: The year listed is the first year that a cartel member was fined.

Table 4.3 Highest Cartel Fines, European Union

Industry                                     Year    Total Fines (millions)
Trucks                                       2016    €2,926.50
TV and computer monitor tubes                2012    €1,409.59
Car glass                                    2008    €1,185.50
Automotive bearings                          2014    €953.31
Elevators and escalators                     2007    €832.42
Euro interest rate derivatives               2013    €824.58
Vitamins                                     2001    €790.52
Yen interest rate derivatives                2013    €684.68
Gas insulated switchgear                     2007    €675.45
Natural gas (E.ON/Gaz de France Suez)        2009    €640.00

Source: European Commission, Cartel Statistics, July 2016, http://ec.europa.eu/competition/cartels/statistics/statistics.pdf (accessed on August 1, 2016).

In the United States, the Federal Sentencing Guidelines allow a base fine to equal 20 percent of the sales of the cartel members during the time of the conspiracy. With this base, a range for the fine is calculated by deriving a maximum and minimum multiplier. These multipliers depend on an organization’s “culpability score,” which is calculated by taking into account aggravating behavior (such as a history of misconduct or high-level employees being involved) and mitigating behavior (such as accepting responsibility). The cap on the fine is the maximum of $100 million and twice the profit gain to the cartel. In the case of Australia and Germany, the fine can be as high as triple the gain.
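A minimal sketch of the guideline arithmetic just described appears below. It takes the base fine as 20 percent of affected sales and scales it by a multiplier range; the particular multipliers in the demo are the ones used in the vitamins-cartel example discussed next. The sketch is illustrative only: the actual guideline tables contain many more culpability-score cases, and the statutory cap (the greater of $100 million or twice the gain) is not modeled here.

```python
# A minimal sketch of the fine arithmetic described above: a base fine of
# 20 percent of affected sales, scaled by a multiplier range derived from the
# culpability score. The multipliers in the demo are those of the vitamins
# example discussed next; the full guideline tables contain more cases, and
# the cap of the greater of $100 million or twice the gain is omitted.

def guideline_fine_range(sales, min_multiplier, max_multiplier):
    """Return (base_fine, min_fine, max_fine) in the same units as sales."""
    base_fine = 0.20 * sales
    return base_fine, base_fine * min_multiplier, base_fine * max_multiplier

# Demo with $3.28 billion of affected sales and a culpability score of 13,
# which maps to a minimum multiplier of 2 and a maximum multiplier of 4.
base, low, high = guideline_fine_range(sales=3.28e9, min_multiplier=2, max_multiplier=4)
print(f"base fine: ${base / 1e9:.3f} billion")                             # $0.656 billion
print(f"recommended range: ${low / 1e9:.1f} to ${high / 1e9:.1f} billion")  # $1.3 to $2.6 billion
```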

Here is an example from the vitamins cartel.70 For Hoffman-La Roche, the DOJ calculated sales during the conspiracy to be $3.28 billion, which meant a base fine of $656 million (= 3,280,000,000 × 0.2). Starting with the standard culpability score of 5, there was a 5-point upward adjustment, because high-level executives had participated and the organizational unit committing the offense had more than 5,000 employees. Another 2 points were added because the firm had a history of collusion, and yet another 3 points for obstructing the government’s investigation. It received a 2-point downward adjustment for accepting responsibility and fully cooperating. The net result was a culpability score of 13, which resulted in a minimum and maximum multiplier of 2 and 4, respectively. This meant that the recommended fine was between $1.3 billion (= 656,000,000 × 2) and $2.6 billion (= 656,000,000 × 4). What then often happens is that a final fudge factor is applied, which, in the case of Hoffman-La Roche, allowed it to pay a fine of $500 million. The European Commission Guidelines (2006) specifies a base penalty equal to SaT + Sb, where S is the value of the firm’s sales in the last full business year of the firm’s participation in the cartel, and T is the number of years of a firm’s participation in the cartel. Coefficient a lies between 0 and 0.3 and depends on the “gravity” of the offense, and b is between 0.15 and 0.25. Corporate leniency programs In 1978, the DOJ established a program whereby corporations and individuals who were engaging in illegal antitrust activity (such as price fixing) could receive lenient treatment if they fully cooperated in an investigation.71 Leniency means not being criminally charged for the activity being reported, which allows a corporation to avoid government fines (though it is still liable for private damages) and an individual to escape fines and prison sentences. In spite of the potential appeal of amnesty, the program was rarely used, and one likely reason is that its design left considerable uncertainty as to whether an application for leniency would be approved. In particular, leniency would be denied if the government could have “reasonably expected” that it would have learned of the cartel without the applicant’s assistance. Under the 1978 program, leniency applications averaged about one per year. Soon after its revision in 1993, applications were coming in at the rate of two per month. What led to such a radical increase? The DOJ made several substantive changes. It laid out a much clearer set of conditions for a leniency application to be approved, which served to reduce uncertainty. In addition, it allowed amnesty in cases for which an investigation had been started. While firms are unlikely to apply for leniency when the authorities do not even have a hint that collusion is occurring, there is a much stronger incentive if the authorities suspect a cartel exists, in which case the prospect of prosecution may be imminent. Finally, one of the conditions for leniency is that the DOJ “has not received information about the illegal activity being reported from any other source.” This meant that amnesty is limited to one firm per cartel, which can create a “race to the courthouse,” as a firm may apply for leniency simply out of fear that another firm will beat them to it. To appreciate the power of the leniency program, let us examine the incentives of a firm to apply for amnesty. 
Using our previous notation, let πm and πn denote the profit a firm earns from colluding and competing, respectively. Breaking the penalty into two parts—fines (denoted F) and damages (denoted D)—amnesty means avoiding F though still paying D. Suppose the market has two firms, they form a cartel, and the DOJ has become suspicious about collusion. Assume that, in the absence of a firm coming forward with evidence, each firm believes the DOJ will be able to successfully prosecute with probability w, where 0 < w < 1. Each firm must decide whether to apply for leniency. The situation that the two firms face is summarized by the payoff matrix in figure 4.17, where the first

number in a cell is the payoff to firm 1 and the second number is firm 2's payoff. If one or both apply for leniency, then collusion falls apart; otherwise, it is maintained. If both choose not to apply, then they are not successfully prosecuted with probability 1 − w, in which case they continue to earn collusive profit, and are successfully prosecuted with probability w, in which case they receive noncollusive profit and pay damages and fines. This situation yields an expected payoff for a firm equal to (1 − w)πm + w(πn − D − F). If both apply for leniency, let us suppose the DOJ gives each a 50 percent reduction in the fine (or, alternatively, flips a coin to decide who gets full amnesty), so that a firm can expect to pay D + (F/2). Finally, if only one firm applies for leniency, then that firm pays penalties of D, while the other firm is stuck paying D + F.

Figure 4.17 The Corporate Leniency Game

In analyzing the Nash equilibria for this game, first note that both firms applying for leniency is always an equilibrium. If the other firm applies, then by also applying, a firm reduces its penalty from D + F to D + (F/2) and earns noncollusive profit in either case. In other words, if the other firm is expected to “rat to the Feds,” then it is best to rat as well, since the “gig is up.” Next note that it is never an equilibrium for only one firm to apply for leniency. This leaves as the remaining possibility that neither applies for leniency, in which case either collusion persists or both firms end up paying a penalty of D + F. Both not applying for leniency is an equilibrium when
(1 − w)πm + w(πn − D − F) ≥ πn − D,
which can be rearranged to give
F ≤ [(1 − w)/w](πm − πn + D).
If damages (D) are sufficiently large relative to fines (F) or if the incremental profit from collusion (πm − πn) is sufficiently high, then it is an equilibrium for firms not to turn evidence. But when that condition does not hold, the only solution is for both firms to apply for leniency, so there is a race to the courthouse. This situation occurs when the probability of successful prosecution without a firm acting as a witness (w) is sufficiently high or when the government fine (F) is sufficiently high. The power of the leniency program lies in taking advantage of each firm's fear that the other firm may apply for amnesty first. The recent record suggests that this dynamic has played a critical role in the successful prosecution of many price-fixing cartels, including those in vitamins, air cargo, and fine arts auction houses.
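To see the race-to-the-courthouse logic in action, the sketch below encodes the payoffs of the corporate leniency game in figure 4.17 and checks which strategy pairs are Nash equilibria. All parameter values (πm, πn, D, F, and w) are illustrative assumptions chosen to show both regimes: one in which only mutual leniency applications survive, and one with large damages in which "not apply, not apply" is also an equilibrium, consistent with the condition derived above.

```python
# A sketch of the corporate leniency game of figure 4.17. It builds the payoffs
# from the expressions in the text and checks which strategy pairs are Nash
# equilibria. All parameter values are illustrative assumptions.

from itertools import product

def leniency_payoff(my_action, other_action, pi_m, pi_n, D, F, w):
    """Expected payoff to one firm; each action is 'apply' or 'not'."""
    if my_action == "not" and other_action == "not":
        # Collusion continues unless the DOJ prosecutes on its own (probability w).
        return (1 - w) * pi_m + w * (pi_n - D - F)
    if my_action == "apply" and other_action == "apply":
        return pi_n - D - F / 2     # both apply: expected fine is halved
    if my_action == "apply":
        return pi_n - D             # lone applicant: full amnesty from the fine
    return pi_n - D - F             # the other firm applied first

def nash_equilibria(pi_m, pi_n, D, F, w):
    actions = ("apply", "not")
    found = []
    for a1, a2 in product(actions, actions):
        stable1 = all(leniency_payoff(a1, a2, pi_m, pi_n, D, F, w)
                      >= leniency_payoff(d, a2, pi_m, pi_n, D, F, w) for d in actions)
        stable2 = all(leniency_payoff(a2, a1, pi_m, pi_n, D, F, w)
                      >= leniency_payoff(d, a1, pi_m, pi_n, D, F, w) for d in actions)
        if stable1 and stable2:
            found.append((a1, a2))
    return found

# With a large fine F, only the race to the courthouse survives; raising damages
# D enough makes (not, not) an equilibrium as well, per the condition above.
print(nash_equilibria(pi_m=100, pi_n=60, D=30, F=80, w=0.5))   # [('apply', 'apply')]
print(nash_equilibria(pi_m=100, pi_n=60, D=200, F=80, w=0.5))  # adds ('not', 'not')
```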
Summary

This chapter has examined a variety of issues related to the behavior of firms in oligopolistic industries. An
oligopoly is characterized by having a relatively small number of firms. The significance of a “small” number of firms is that each firm takes into account the actions and future responses of its competitors when deciding how much to produce or what price to set. Both the Cournot (quantity) and Bertrand (price) models were examined when firms offer identical products. We also considered when firms choose prices and have differentiated products. With the exception of when firms compete on prices with identical goods, the Nash equilibrium price exceeds the competitive price (which equals cost) but falls short of the monopoly price. Firms jointly produce too much (or price too low) relative to the joint-profit maximum. While each firm is individually maximizing its profit, firms could raise all of their profits by jointly reducing output or raising price. It is this lure of higher profits that provides firms with the desire to collude. The problem that firms face in colluding is that each can increase its current profit by deviating from the agreed-on output or price. To explain and understand collusive behavior in practice, an infinite horizon extension of the Cournot model was developed. Based on this model, we found that collusion is consistent with each firm acting to maximize its sum of discounted profits. A firm is deterred from deviating from the collusive outcome by the threat that cheating will induce a breakdown in collusion. While cheating raises current profits, it lowers future profits by inducing greater competition in the future. Though there are many challenges to having an effective cartel—agreeing to price and a market allocation, monitoring firm behavior to ensure that cheating does not occur, dealing with growth in noncartel supply—many industries have succeeded in overcoming them. With an understanding of how firms can collude, we then turned to exploring antitrust law with respect to collusion or, as it is often called, price fixing. Although price fixing was made illegal with the Sherman Act (1890), the interpretation of this law took place only with key early cases like Addyston Pipe and Steel (1899) and Trenton Potteries (1927). These cases established the per se rule with respect to price fixing. This rule states that price fixing is illegal regardless of the circumstances. There is no allowable defense. Current law is such that to prove that firms are guilty of price fixing requires some evidence of communication. For example, it could be an email between competitors describing a plan to raise price or a single manager’s announcement at an industry gathering of an intent to raise price. Under the parallelism plus doctrine, it is insufficient to show that that firms’ prices rise over time in a similar manner or, more generally, are consistent with collusion. There must be additional evidence (“plus factor”) supporting the claim that behavior is based on an agreement and is not the product of independent choices. Of course, a well-developed law against price fixing is for naught if cartels are not discovered and properly punished. Recent enforcement policy has become more effective with the revision of the corporate leniency program and new sentencing guidelines. The former works to induce a firm to become a cooperating witness—thereby enhancing the prospects of a successful prosecution—and the latter permits the government to levy more severe penalties. 
In spite of the increase in these government fines in the United States, private damages awarded to the plaintiffs in an antitrust case generally remain the most serious financial penalty imposed on a corporation for price fixing.

Questions and Problems

1. In 1971 the federal government prohibited the advertising of cigarettes on television and radio. Can you explain why this ban on advertising might have raised the profits of cigarette manufacturers? Hint: Use the Advertising Game.

2. The inverse market demand for mineral water is P = 200 − 10Q, where Q is total market output, and P is the market price. Two firms, A and B, have complete control of the supply of mineral water and both have zero costs.

a. Find the Cournot solution.
b. Find an identical output for each firm that maximizes joint profits.

3. Continuing with problem 2, assume that each firm can choose only two outputs—the ones from parts a and b in problem 2. Denote these outputs by qa and qb, respectively.
a. Compute the payoff/profit matrix showing the four possible outcomes.
b. Show that this game has the same basic properties as the Advertising Game. In particular, each firm's optimal output is independent of what the other firm produces.
c. Now consider firms playing an infinitely repeated version of this game and consider the following strategy for each firm: (1) produce qb in period 1, (2) produce qb in period t if both firms produced qb in all preceding periods, and (3) produce qa in period t if one or more firms did not produce qb in some past period. Assume each firm acts to maximize its sum of discounted profits where the discount rate is r. Find the values for r such that this strategy pair is a Nash equilibrium.

4. Compare and contrast the Cournot, Bertrand, and Stackelberg models of oligopoly in terms of their assumptions. Assuming identical demand and cost assumptions, rank the equilibria of these models in terms of price. Which model predicts the highest price? lowest price?

5. Consider a duopoly with firms that offer homogeneous products where each has constant marginal cost of c. Let D(P) denote market demand. Firms make simultaneous price decisions. Letting p1 and p2 be the prices of firms 1 and 2, respectively, the demand function of firm 1 is specified to be
D1(p1, p2) = D(p1) if p1 < p2, D(p1)/2 if p1 = p2, and 0 if p1 > p2.
If firm 1's price is lower than firm 2's price, then all consumers buy from it, so that its demand equals market demand. If both firms charge the same price, then they equally split market demand. If firm 1's price is higher than firm 2's price, then all consumers go to firm 2. Firm 2's demand function is similarly defined. Each firm chooses prices to maximize its profit.
a. Show that both firms pricing at marginal cost is a Nash equilibrium.
b. Show that any other pair of prices is not a Nash equilibrium.
For parts (c) and (d), suppose that we limit firms to choosing price equal to c, 2c, or 3c.
c. Compute the payoff/profit matrix.
d. Derive all of the Nash equilibrium price pairs.

6. Assume that an industry with two firms faces an inverse market demand of P = 100 − Q. The product is homogeneous, and each firm has a cost function of 600 + 10q + 0.25q². Assume firms agree to equally share the market.
a. Derive each firm's demand curve.
b. Find each firm's preferred price when it faces the demand curve derived in part a.
c. Now assume that firm 1's cost function is instead 25q + 0.5q² while firm 2's is as before. (This assumption applies to the remaining parts.) Find each firm's preferred price when it faces the demand curve derived in part a.
d. Compute each firm's profit when firm 1's preferred price is chosen. Do the same for firm 2's preferred price. Which price do you think firms would be more likely to agree on? Why?
e. Show that neither price maximizes joint profits.
f. Find the price that maximizes joint profits. Hint: It is where marginal revenue equals both firms' marginal cost.
g. Would firm 1 find the solution in part f attractive? If not, would a side payment from firm 2 to firm 1 of $500 make it attractive?

7. What are the benefits and costs of the per se rule?

8. What is the law concerning parallel business behavior? Assume, for example, that three firms charge identical prices for a product and it is agreed by all observers that the price is unusually high compared to cost. Would this constitute a Sherman Act offense? Cite a relevant case to support your answer.

9. In March 2004, a class action suit was filed in Madison, Wisconsin, that accused twenty-four bars and the Madison-Dane County Tavern League of conspiring to fix prices on beer and liquor by agreeing to eliminate weekend happy hours. The suit, filed on behalf of three University of Wisconsin students, contends that the University of Wisconsin encouraged bars to collude as part of an antidrinking campaign. There is no evidence that the bar owners ever directly communicated. Do you think this is a violation of Section 1 of the Sherman Act? How would you go about calculating the damages incurred by the elimination of half-price drinks?

10. In 2004, the U.S. Congress passed the Antitrust Criminal Penalty Enhancement and Reform Act (ACPERA). ACPERA made a leniency recipient liable only for single, not treble, customer damages. In addition, the other members of the cartel are financially responsible for the lost double damages of the leniency recipient. Explain how each of these two features impacts the incentives to apply for leniency and the efficacy of the leniency program.

11. Whistleblower rewards are provided to an individual who is not involved with a cartel but reports a cartel to the government that is then convicted. These programs have been instituted in South Korea (2005), the United Kingdom (2008), Hungary (2010), and Taiwan (2015). In the case of Taiwan, a whistleblower can receive 5 to 20 percent of the fines collected by the government up to a maximum of around $1.5 million. The Antitrust Division of the DOJ has expressed opposition to whistleblower rewards because “jurors may not believe a witness who stands to benefit financially from successful enforcement action against those he implicated” (GAO Report, 2011).72 Is the DOJ right? Does it matter that rewards are only paid on conviction? Does a whistleblower program make a leniency program more or less effective?

12. In the United States, it is legal for firms to charge high prices; it is just not legal for them to do so by coordinating their prices. In contrast, high prices are illegal in the European Union, Israel, and South Africa when they are considered “excessive.” If you were asked to implement an excessive pricing law, how would you define “excessive”? Would it depend on firms' cost? How would you convey to firms what prices are “excessive” so that they know when they are violating the law?

13. While section 1 of the Sherman Act is the dominant piece of legislation pertinent to prosecuting collusion, section 5 of the Federal Trade Commission (FTC) Act has also been used, because it prohibits “unfair methods of competition.” While an agreement must be established to prove a section 1 violation, that is unnecessary with a section 5 case. In particular, an invitation to collude, even if it is not accepted (so that, by the law, there is not an agreement), can be prosecuted under the FTC Act. What episodes described in this chapter could be prosecuted under section 5 of the FTC Act but may not be under section 1 of the Sherman Act?

Appendix

Game Theory: Formal Definitions

The strategic (or normal) form of a game is defined by three elements: (1) the set of players, (2) the strategy sets of the players, and (3) the payoff functions of the players. The set of players comprises those individuals making decisions. Let n denote the number of players. The strategy of a player is a decision rule that prescribes how she should play over the course of the game. All of the strategizing by a player takes place with regard to her selection of a strategy. The strategy set of player i, which we denote Si, comprises all feasible strategies. A player is constrained to choosing a strategy from her strategy set. The payoff function of player i gives player i's utility (or payoff) as a function of all players' strategies. An n-tuple of strategies, one for each player, is referred to as a strategy profile. Player i's payoff function is denoted Vi(·), where Vi(s1, …, sn) is the payoff to player i when player 1's strategy is s1, player 2's is s2, …, and player n's strategy is sn. A strategy profile (s1*, …, sn*) is a Nash equilibrium if and only if each player's strategy maximizes her payoff given the other players' strategies. Formally,
Vi(s1*, …, si−1*, si*, si+1*, …, sn*) ≥ Vi(s1*, …, si−1*, si, si+1*, …, sn*)
for all si in Si and for all i = 1, …, n.
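The short sketch below applies this definition to a finite two-player game in strategic form, enumerating strategy profiles and keeping those at which neither player can gain by unilaterally deviating. The 2×2 payoff numbers are hypothetical and are not the payoffs of any game analyzed in this chapter.

```python
# Applying the definition above to a finite two-player game in strategic form.
# V1 and V2 map strategy profiles (s1, s2) to payoffs; the 2x2 payoff numbers
# below are hypothetical and are not taken from any game in this chapter.

def pure_nash_equilibria(S1, S2, V1, V2):
    """Return all profiles (s1, s2) at which neither player can gain by
    unilaterally deviating, which is exactly the condition stated above."""
    equilibria = []
    for s1 in S1:
        for s2 in S2:
            best_for_1 = all(V1[(s1, s2)] >= V1[(d, s2)] for d in S1)
            best_for_2 = all(V2[(s1, s2)] >= V2[(s1, d)] for d in S2)
            if best_for_1 and best_for_2:
                equilibria.append((s1, s2))
    return equilibria

S1 = S2 = ("Low", "High")
V1 = {("Low", "Low"): 50, ("Low", "High"): 20, ("High", "Low"): 60, ("High", "High"): 30}
V2 = {("Low", "Low"): 50, ("Low", "High"): 60, ("High", "Low"): 20, ("High", "High"): 30}

print(pure_nash_equilibria(S1, S2, V1, V2))   # [('High', 'High')]
```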

Notes 1. Verizon Communications, Inc. v. Law Offices of Curtis V. Trinko LLP, 540 U.S. 398 (2004). 2. This game is commonly known as the Prisoners’ Dilemma and is the most widely examined game in game theory. There are literally hundreds of papers that investigate this game theoretically or test it experimentally in the laboratory using human subjects, typically college students. 3. This game is commonly known as the Battle of the Sexes. 4. For a more extensive treatment of game theory, see Joseph E. Harrington Jr., Games, Strategies, and Decision Making, 2nd ed. (New York: Worth Publishers, 2015). 5. In example 1, the set of players is firms 1 and 2, a strategy is a rate of advertising, a strategy set is {Low, High}, and the payoff functions of the players are represented by figure 4.1. 6. A formal definition of Nash equilibrium is provided in the appendix to this chapter. 7. Augustin Cournot, Research into the Mathematical Principles of the Theory of Wealth, English translation of the 1838 French edition, Nathaniel T. Bacon, trans. (New York: Kelley, 1960). 8. Because revenue is R = (100 − Q)Q, marginal revenue equals the first derivative of (100 − Q)Q with respect to Q: dR/dQ = 100 − 2Q. Equating MR and MC, 100 − 2Q = 40, and solving for Q yields the profit-maximizing quantity of 30. 9. To derive firm 1’s best reply function, find the value of q1 that maximizes π1. This quantity is where marginal profit is zero: ∂π1/∂q1 = 100 − 2q1 − q2 − 40 = 0. Solving this expression for q1 yields q1 = 30 − 0.5q2. An analogous method yields firm 2’s best reply function. What we are calling a firm’s best reply function, Cournot called a firm’s “reaction function.” In his original treatment, Cournot provided a dynamic story to his static model that entails each firm’s reacting to the other’s output. However, the term “reaction function” is a misnomer, as firms make simultaneous output decisions in the Cournot model, so that there is no reaction. 10. The Nash equilibrium quantities can be solved for algebraically by finding the pair of quantities that satisfy equations 4.7 and 4.8. Use equation 4.8 to substitute for q2 in equation 4.7: q1 = 30 − 0.5(30 − 0.5q1). Then solve that equation for q1, which yields q1 = 20. Substituting 20 for q1 in equation 4.8 gives us q2 = 20. 11. Though it was assumed that firms offer identical products, results for the Cournot model are qualitatively similar when products are different. 12. The profit of firm i is
πi = P(Q)qi − C(qi), where Q = q1 + ⋯ + qn is industry supply.
The first-order condition is
P′(Q)qi + P(Q) − MC = 0.
Adding the first-order condition over n firms gives P′Q + nP − nMC = 0, where Q is industry supply. Dividing through by n and using the formula for the absolute value of demand elasticity, η = −(1/P′)(P/q), one gets −(1/η) + n − nMC/P = 0. Rearranging, equation 4.9 emerges. 13. Heinrich von Stackelberg, Marktform und Gleichgewicht (Vienna: Julius Springer, 1934). 14. Joseph Bertrand, “Théorie mathématique de la richesse sociale,” Journal des Savants (1883): 499–508. 15. For a more complete discussion of oligopoly theory, see Jean Tirole, The Theory of Industrial Organization (Cambridge, MA: MIT Press, 1988), and Carl Shapiro, “Theories of Oligopoly Behavior,” in Richard Schmalensee and Robert D. Willig, eds., Handbook of Industrial Organization (Amsterdam: North-Holland, 1989). 16. It is not necessary that a firm literally live forever but rather that it can potentially live forever. In other words, there no known date in the future for which firms are certain that they will no longer be around. In fact, firms can be around for quite a long time. Currently, the oldest recorded firm still in existence is the Swedish firm Stora Kopparberg (the name means “Great Copper Mountain”). Documents show that it was in existence in 1288! 17. The present value of receiving $V every period equals V/r, where r is the interest (or discount) rate, and V is received at

the end of the period. For a derivation of this result, see chapter 2, note 9. 18. This theory of collusion is due to James W. Friedman, “A Non-cooperative Equilibrium for Supergames,” Review of Economics Studies 38 (January 1971): 1–12. The earliest argument that firms would be able to collude successfully is due to Edward H. Chamberlin, The Theory of Monopolistic Competition (Cambridge, MA: Harvard University Press, 1933). 19. Transcript from United States v. American Airlines (U.S. 5th Circuit Court, 1984). 20. The Antitrust Division of the DOJ chose to pursue it as a Section 2 violation (attempt to monopolize), and Crandall avoided any serious repercussions. 21. “Mr. Jacob J. Alifraghis, Also Doing Business as InstantUPCCodes.com,and 680 Digital, Inc., Also Doing Business as Nationwide Barcode, and Philip B. Peretz; Analysis to Aid Public Comment,” Washington, DC, U.S. Federal Trade Commission, July 29, 2014. 22. Lysine is an organic chemical used in animal feed to promote the growth of muscle tissue. 23. For details on the lysine cartel, see John M. Connor, “Global Cartels Redux: The Amino Acid Lysine Antitrust Litigation (1996),” in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution, 4th ed. (New York: Oxford University Press, 2004). The more complete story is provided in Kurt Eichenwald, Informant: A True Story (New York: Broadway Books, 2001), which reads like a spy thriller. 24. In the Matter of Valassis Communications, Inc., Federal Trade Commission, File 051 0008, Docket C-4160, April 28, 2006. 25. Severin Borenstein, “Rapid Price Communication and Coordination: The Airline Tariff Publishing Case (1994),” in Kwoka and White, The Antitrust Revolution, pp. 233–251. 26. European Commission Press Release, “Antitrust: Commission opens proceedings against container liner shipping companies,” November 22, 2013 27. The ensuing discussion is from Paul Klemperer, Auctions: Theory and Practice (Princeton, NJ: Princeton University Press, 2004). 28. Robert Clark and Jean-François Houde, “Collusion with Asymmetric Retailers: Evidence from a Gasoline Price-Fixing Case,” American Economic Journal: Microeconomics 5 (August 2013): 97–123. 29. In equations 4.18 and 4.19, these variables take the value 15. 30. We set r = 0.05 to derive the set of values shown in figure 4.10. 31. The discussion of the citric acid cartel is based on John M. Connor, Global Price Fixing, 2nd ed. (Berlin: Springer, 2008). For details on the other cartels, see Joseph E. Harrington, Jr., How Do Cartels Operate? (Boston: NOW Publishers, 2006); and Robert C. Marshall and Leslie M. Marx, The Economics of Collusion—Cartels and Bidding Rings (Cambridge, MA: MIT Press, 2012). 32. Richard A. Smith, Corporations in Crisis (Garden City, NY: Doubleday, 1966), p. 132. 33. Connor, Global Price Fixing, 2nd ed. 34. Joseph E. Harrington Jr., Kai Hüschelrath, Ulrich Laitenberger, and Florian Smuda, “The Discontent Cartel Member and Cartel Collapse: The Case of the German Cement Cartel,” International Journal of Industrial Organization 42 (2015): 106– 119; and Joseph E. Harrington Jr., Kai Hüschelrath, and Ulrich Laitenberger, “Rent Sharing to Control Non-Cartel Supply in the German Cement Market,” ZEW Discussion Paper 16-025, Center for European Economic Research, Mannheim, March 2016. The latter paper also discusses the various methods used by cartels to control noncartel supply, which are referred to as bribery, extortion, starvation, and takeover. 35. This theory is due to Edward J. Green and Robert H. 
Porter, “Noncooperative Collusion under Imperfect Price Information,” Econometrica 52 (January 1984): 87–100. 36. Robert H. Porter, “A Study of Cartel Stability: The Joint Executive Committee, 1880–1886,” Bell Journal of Economics 14 (Autumn 1983): 301–314. A related analysis was performed for the steel industry in Jonathan A. Baker, “Identifying Cartel Policing under Uncertainty: The U.S. Steel Industry, 1933–1939,” Journal of Law and Economics 32 (October 1989): S47–S76. 37. William G. Christie and Paul H. Schultz, “Why Do Nasdaq Market Makers Avoid Odd-Eighth Quotes?” Journal of Finance 49 (December 1994): 1813–1840. For a less technical presentation of the evidence, see William G. Christie and Paul

H. Schultz, “Did Nasdaq Market Makers Implicitly Collude?” Journal of Economic Perspectives 9 (Summer 1995): 199– 208. 38. William G. Christie, Jeffrey Harris, and Paul H. Schultz, “Why Did Nasdaq Market Makers Stop Avoiding Odd-Eighth Quotes?” Journal of Finance 49 (December 1994): 1841–1860. 39. For an alternative explanation for the odd-eighth quotes, see Paul E. Godek, “Why Nasdaq Market Makers Avoid OddEighth Quotes,” Journal of Financial Economics 41 (July 1996): 465–474. 40. “In the Matter of Toys ‘R’ Us,” Washington, DC, U.S. Federal Trade Commission, October 13, 1998. 41. “Agreements between Hasbro U.K. Ltd, Argos Ltd and Littlewoods Ltd Fixing the Price of Hasbro Toys and Games,” London, United Kingdom Office of Fair Trading, November 21, 2003. 42. Ibid, p. 37. 43. For the reader who, by the end of this chapter, desires to learn more about antitrust law with regard to price fixing, we recommend William E. Kovacic, “The Identification and Proof of Horizontal Agreements under the Antitrust Laws,” Antitrust Bulletin 38 (Spring 1983): 5–81; and Louis Kaplow, Competition Policy and Price Fixing (Princeton, NJ: Princeton University Press, 2013), particularly chapters 4 and 5. 44. State Oil Co. v. Khan, 522 U.S. 3 (1997). 45. United States v. Trans-Missouri Freight Association, 166 U.S. 290 (1897). 46. United States v. Addyston Pipe & Steel Co., 175 U.S. 211 (1899). 47. United States v. Trenton Potteries Company et al., 273 U.S. 392 (1927). 48. Appalachian Coals, Inc. v. United States, 288 U.S. 344, 360 (1933). 49. United States v. Socony-Vacuum Oil Co. et al., 310 U.S. 150, 218 (1940). 50. Broadcast Music, Inc.v. Columbia Broadcasting System, 441 U.S. 1 (1979). 51. United States v. American Tobacco Co., 221 U.S. 106 (1911). 52. Monsanto Co. v. Spray-Rite Serv., 465 U.S. 752 (1984). 53. Judgment of the Court, ACF Chemiefarma NV v. Commission of the European Communities, July 15, 1970. 54. Judgment of the Court of First Instance, Bayer AG v. Commission of the European Communities, October 26, 2000. 55. Jonathan B. Baker, “Two Sherman Act Section 1 Dilemmas: Parallel Pricing, the Oligopoly Problem, and Contemporary Economic Theory,” Antitrust Bulletin 38 (Spring 1993): 143–219, p. 179. 56. Interstate Circuit, Inc., et al. v. United States, 306 U.S. 208, 223 (1939). 57. United States v. American Tobacco Co., 328 U.S. 781 (1946). 58. United States v. Paramount Pictures, Inc., 334 U.S. 131 (1948). 59. Theatre Enterprises, Inc. v. Paramount Film Distributing Corp. et al., 346 U.S. 537 (1954). 60. Court of Justice, A. Ahlström Osakeyhtiö and others v. Commission of the European Communities, January 20, 1994. 61. In re Text Messaging Antitrust Litigation, 630 F.3d 622 (7th Cir. 2010). 62. United States v. Foley, 598 F. 2d 1323 (4th Cir. 1979). 63. U.S. v. Container Corporation of America, 393 U.S. 333 (1969). 64. Bell Atlantic Corp. v. Twombly, 550 U.S. 544 (2007). 65. From Joseph Heller’s novel Catch-22, the phrase refers to a paradoxical situation for which there is no solution because of mutually conflicting conditions. For example, to get a job you need experience, but the only way you can gain experience is from having a job. 66. Matsushita Electric Industrial Co., Ltd. v. Zenith Radio Corp 475 U.S. 574 (1986). 67. In re: Text Messaging Antitrust Litig., 599 F.3d 641 (7th Cir. 2010); In re: Text Messaging Antitrust Litig., 782 F.3d 867 (7th Cir. 2015). 68. In re: Text Messaging Antitrust Litig., 782 F.3d 867 (7th Cir. 2015). 69. 
Making collusion unprofitable is sufficient to deter it; however, it is not necessary. If penalties make collusion unstable

(that is, a firm would want to cheat), then that will deter collusion, even if penalties are not high enough to make it unprofitable. This point is explored in Joseph E. Harrington, Jr., “Penalties and the Deterrence of Unlawful Collusion,” Economics Letters 124 (2014): 33–36. 70. Gary R. Spratling, “Pyrrhic Victories? Reexamining the Effectiveness of Antitrust Remedies in Restoring Competition and Deterring Misconduct,” paper presented at the George Washington University Law School, Washington, DC, March 22, 2001. 71. For a statement of the conditions for corporate leniency, see www.justice.gov/atr/leniency-program. 72. Criminal Cartel Enforcement: Stakeholder Views on Impact of 2004 Antitrust Reform Are Mixed, but Support Whistleblower Protection,” GAO-11-615 (Washington, D.C.: U.S. General Accountability Office, 2011).

5 Market Structure and Dynamic Competition

Competition in markets comes from two sources: firms that are supplying the market and firms that could be supplying the market (that is, potential entrants). Chapter 4 focused on the behavior of existing firms while taking their presence as exogenously determined. This chapter extends this analysis in two critical ways. We first consider the determinants of the number of sellers and, more broadly, market concentration. After discussing the challenge of how best to measure market concentration, we turn to investigating some of the drivers of concentration, including scale economies and entry conditions. In doing so, the decision to enter a market is examined. The second part of the chapter looks at some of the ways in which firms dynamically compete and how this determines market structure in terms of the number of firms, market shares, and pricecost margins. A particular focus is how incumbent firms can neutralize the threat of entry and constrain the expansion of smaller rivals. Market Structure This section investigates two key elements of market structure—concentration and entry conditions. Our discussion of concentration is a continuation of our coverage of oligopoly theory and collusion in that its focus is on the role of actual competition. We then investigate the determinants of concentration by considering scale economies and entry conditions. Our discussion of entry conditions is concerned with understanding their role in determining the extent of actual competition (that is, the number of firms) and the extent of potential competition. Concentration In our analysis of oligopoly theory in chapter 4, firms were assumed to be identical—having the same products and same cost functions. As long as the industry is symmetric, the number of sellers accurately measures market concentration. While the abstract world is often specified to be symmetric in order to reduce the complexity of the analysis, in the real world there is typically great heterogeneity among firms. The implication of firms having different products, different cost functions, and different capacities is that they can have very different market shares. As a result, a simple count of the number of firms can be a very misleading measure of the degree of concentration. One of the traditional tasks in industrial organization has been to develop a statistic that allows a single number to reasonably measure the concentration of an industry. To construct a useful index of concentration, one first needs to understand its purpose. From a welfare and antitrust perspective, a concentration index should measure the ability of firms to raise price above cost. A higher value for a concentration index should indicate a higher price-cost margin or a higher likelihood of firms being able to

collude successfully. Note that a concentration index is exclusively concerned with actual competition and ignores potential competition. For this reason, a concentration index cannot fully assess the competitiveness of a particular industry. Some other information will have to be provided so as to account for the degree to which potential entry can constrain the market power of active firms. Concentration ratio Although economists have devised many indices to measure concentration, the measure with the longest history is the concentration ratio. The m-firm concentration ratio is simply the share of total industry sales accounted for by the m largest firms. A fundamental problem with concentration ratios is that they describe only one point on the size distribution of sellers. Consider the size distributions of two hypothetical industries, as shown in table 5.1. Industries X and Y have the same four-firm concentration ratio, namely, 80 percent. However, noting the other information in table 5.1, most economists would regard the two industries as likely to exhibit quite different patterns of behavior. If we instead calculate the three-firm concentration ratios, industry Y is now seen to be “more concentrated” (75 percent versus only 60 percent for industry X). The basic problem is that this type of measure wastes relevant data. Nevertheless, the concentration ratio is superior to a simple count of sellers. Table 5.1 Percentage of Sales Accounted for by the Five Leading Firms in Industries X and Y Firm

Industry X

Industry Y

1 2 3 4 5

20 20 20 20 20

60 10 5 5 5

Total

100

85

To pursue this point, consider the size distributions of the two industries shown as concentration curves in figure 5.1. The height of a concentration curve above any integer n on the horizontal axis measures the percentage of the industry’s total sales accounted for by the n largest firms. The curves rise from left to right, and at a continuously diminishing rate. In the case of identical shares, such as for industry X, the curve is a straight line. The curves reach their maximum height of 100 percent where n equals the total number of firms in the industry. If the curve of industry Y is everywhere above the curve of X, then Y is more concentrated than X. However, when the curves intersect, as they do in figure 5.1, it is impossible to say which is the “more concentrated” industry unless we devise a new definition.

Figure 5.1 Concentration Curves for Industries X and Y
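For readers who want to compute these measures directly, the sketch below calculates the m-firm concentration ratios (the heights of the concentration curves in figure 5.1) for the two hypothetical industries in table 5.1 and, for comparison, the Herfindahl-Hirschman Index introduced later in this section. One simplifying assumption: the unlisted small firms that account for the remaining 15 percent of industry Y's sales are ignored.

```python
# Computing the concentration measures in this section for the two hypothetical
# industries of table 5.1. The unlisted small firms in industry Y (15 percent of
# sales in total) are ignored for simplicity.

def concentration_ratio(shares, m):
    """m-firm concentration ratio: the combined share of the m largest firms,
    which is also the height of the concentration curve (figure 5.1) at m."""
    return sum(sorted(shares, reverse=True)[:m])

def hhi(shares):
    """Herfindahl-Hirschman Index, with market shares in percentage points."""
    return sum(s ** 2 for s in shares)

industry_X = [20, 20, 20, 20, 20]
industry_Y = [60, 10, 5, 5, 5]

for m in (3, 4):
    print(f"CR{m}: X = {concentration_ratio(industry_X, m)}, "
          f"Y = {concentration_ratio(industry_Y, m)}")
print(f"HHI: X = {hhi(industry_X)}, Y = {hhi(industry_Y)}")
# CR3: X = 60, Y = 75     -> Y looks more concentrated
# CR4: X = 80, Y = 80     -> the ratio cannot distinguish them
# HHI: X = 2000, Y = 3775 -> Y is more concentrated
```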

The most widely available concentration ratios are those compiled by the U.S. Bureau of the Census. Ideally, these ratios should refer to industries that are defined meaningfully from the viewpoint of economic theory. However, the census classifications of industries were developed over years “to serve the general purposes of the census and other government statistics” and were “not designed to establish categories necessarily denoting coherent or relevant markets in the true competitive sense, or to provide a basis for measuring market power.”1 The census frequently includes products that are not close substitutes, and it sometimes excludes products that are close substitutes. An example of the latter is the existence of two separate “industries” for beet sugar and cane sugar. The census ignores both regional markets (for example, all bakeries are combined into a single national market) and foreign competition (steel imports are excluded from the steel industry). Given that we will occasionally refer to studies that use census concentration ratios, it will be helpful to offer a brief description of their procedure for classifying industries. The North American Industry Classification System (or NAICS) classifies business establishments in Canada, Mexico, and the United States according to type of economic activity. NAICS makes use of a series of numbers in which each succeeding digit represents a finer degree of classification. An example is the “Food Manufacturing” group which is group 311. In this three-digit group, there are nine four-digit industry groups, such as industry group 3114, which is “Fruit and Vegetable Preserving and Specialty Food Manufacturing.” In this particular four-digit industry are two five-digit industries, including “Frozen Food Manufacturing” (31141); and in 31141 are two six-digit industries, including “Frozen Fruit, Juice, and Vegetable Manufacturing” (311411). The Census Bureau computes concentration ratios for the top four, eight, and twenty firms for 639 five-digit industries.2 Table 5.2 reports four-firm concentration ratios for some four-digit industries. It is quite apparent that the theoretical market structures of perfect competition and monopoly do not provide useful categories for real-

world industries. Nor is it clear which industries should be classified as oligopolistic or competitive. Most would agree that the industries at the top of table 5.2 are oligopolies, but how far down the list should one descend before concluding that the market is competitive? Table 5.2 Concentration of Selected Industries Industry

CR4 (1997)

CR4 (2007)

HHI (1997)

HHI (2007)

Cigarettes Breweries Automobile manufacturing Cereal breakfast foods Turbines and turbine generators Gasoline engine and engine parts Major appliance manufacturing Snack food manufacturing Farm machinery and equipment Mattress manufacturing Petroleum refineries Iron and steel mills Audio and video equipment Dairy Product Manufacturing Computer and electronic product manufacturing Apparel manufacturing

99 91 87 87 79 68 66 63 53 42 41 39 30 26 21 18

98 90 71 85 62 47 74 66 62 57 45 55 38 30 20 7

NA NA 2,725 2,774 2,404 1,425 1,320 2,619 1,707 658 625 560 415 241 171 104

NA NA 1,549 2,909 1,465 712 2,423 NA 2,372 915 812 906 467 373 183 35

Source: U.S. Census Bureau, Economic Census (Washington, DC: U.S. Government Printing Office, various years). Notes: CR4 = four-firm concentration ratio, HHI = Herfindahl-Hirschman Index, NA = not available.

Herfindahl-Hirschman index (HHI)

Since 1992, the merger guidelines of the Antitrust Division of the U.S. Department of Justice (DOJ) and the Federal Trade Commission (FTC) have used the Herfindahl-Hirschman Index (HHI).3 Named after its inventors, Orris C. Herfindahl and Albert O. Hirschman, it is the most useful concentration index currently available. The HHI has the advantage of incorporating more information about the size distribution of sellers than the simple concentration ratio does. If we let si denote firm i's percentage of total industry sales (that is, its market share), then the HHI is defined as
HHI = s1² + s2² + ⋯ + sn²,
where n is the number of firms. The HHI is then the weighted average slope of the concentration curve (recall from figure 5.1 that industries with more steeply sloped curves are more concentrated). The weight for the slope of each segment of the curve is the corresponding si for that segment. If an industry consists of a single seller, then HHI attains its maximum value of 10,000. The index declines with increases in the number of firms and increases with rising inequality among a given number of firms. Returning to our example in table 5.1, the HHIs are:
HHIX = 20² + 20² + 20² + 20² + 20² = 2,000 and HHIY = 60² + 10² + 5² + 5² + 5² = 3,775 (ignoring the small remaining firms in industry Y).
Thus the HHI suggests that industry X is more competitive than industry Y, while the four-firm concentration ratio does not distinguish between them. In the 2010 Horizontal Merger Guidelines of the DOJ and FTC, a market with an HHI less than 1,500 is considered to be “unconcentrated,” a market with an HHI between 1,500 and 2,500 is referred to as “moderately concentrated,” and when the HHI exceeds 2,500 it is said to be “highly concentrated.” If all firms had equal market shares, these three categories would correspond to more than six firms, five or six firms, and four or fewer firms, respectively. However, one should not attach much significance to this categorization, as few decisions by these agencies are based on them. When a prospective merger is evaluated, many factors are considered (including the HHI), but that is a topic we will examine in chapter 6. One of the attractive features of the HHI is that it has foundations in oligopoly theory. Suppose that firms have homogeneous products and engage in Cournot competition. Firms are allowed to have different cost functions, and ci denotes the (constant) marginal cost of firm i, where i = 1, …, n. One can show that the Cournot solution has a firm’s market share being negatively related to its marginal cost. The lower is firm i’s marginal cost, the higher its profit-maximizing output will be and thus the higher its share of the market will be. The important result is that the HHI is directly related to a weighted average of firms’ price-cost margins from the Cournot solution:
s1[(Pc − c1)/Pc] + s2[(Pc − c2)/Pc] + ⋯ + sn[(Pc − cn)/Pc] = HHI/(10,000η),
where Pc is the Cournot price, si is firm i’s market share, and η is the absolute value of the price elasticity of market demand. The higher is the HHI, the higher the industry price-cost margin will be.4 Market concentration and the price-cost margin: causation versus correlation Empirical evidence has shown that a market’s concentration index and its price-cost margin are positively correlated.5 That is, when a market is relatively concentrated, it tends to have a relatively high price-cost margin; when it is relatively unconcentrated, it tends to have a relatively low price-cost margin. A key question is what we should make of this empirical relationship. Should we go around breaking up highly concentrated industries or instead take a laissez faire approach and leave them be? Addressing this policy question first requires that we develop some possible explanations for why a correlation exists between concentration indices and price-cost margins. One hypothesis, which we refer to as the market power hypothesis, is that a causal relationship holds between concentration and the price-cost margin: The more concentrated is an industry, the less aggressively firms compete, and thus the higher the price-cost margin will be. In chapter 4, this relationship was derived for the Cournot solution: The smaller the number of firms (and thus the greater the concentration), the higher the price-cost margin will be. In addition, collusion was found to be easier as the number of firms decreases (and thus concentration increases). According to the market power hypothesis, a high price-cost margin is then the result of high concentration. But before we go around taking apart Gap, Google, and General Electric, we ought to consider other possible explanations. The differential efficiency hypothesis argues that observed high concentration does not cause a high price-cost margin but rather that high concentration and high price-cost margins are both driven by a third factor. The argument is as follows. In some industries there are apt to be a few firms that have a differential advantage over their competitors. This advantage could be due to lower cost or better

products and services. In those industries, these superior firms will tend to dominate the market (so that concentration is high) and will be able to price considerably above cost (so that the firm and industry pricecost margins are high). Concentration does not then cause the price-cost margin but rather certain market conditions lend themselves to yield simultaneously high concentration and a high price-cost margin.6 The positive relationship between the HHI and the industry price-cost margin for the Cournot solution that was discussed above is an example of the differential efficiency hypothesis at work. If an industry has a few firms with relatively low cost, those firms will tend to price lower than their competitors, which will cause them to have a higher market share. Due to the skewed market shares, the concentration index is high. As a result of their lower cost, those differentially efficient firms will have a high price-cost margin, which, when weighted by their high market shares, will result in a high industry price-cost margin. If the differential efficiency hypothesis is what underlies the positive correlation between concentration indices and price-cost margins, then one does not want to go around breaking up highly concentrated industries. To do so would be to penalize firms for being superior and thereby deter them from doing what we want firms to do—provide better products and services at a lower cost. Determining whether a firm’s or industry’s high price-cost margin is the result of delivering what consumers want at a lower cost or is instead due to the anticompetitive exercise of market power is a central challenge of antitrust analysis. Entry Conditions Thus far in our analysis, we have measured the competitiveness of a market by the number of firms or some other measure of concentration. An equally important factor, however, is the ease with which entry can occur. Entry conditions are important for two reasons. First, the number of active firms is partially determined by the cost of entry. Entry conditions then play an important role in determining concentration. Second, entry conditions determine the extent of potential competition. Even if there are only a few active firms, a credible threat of entry could induce those firms to vigorously compete. For if they do not, a high industry price-cost margin could attract entry and drive their profits down. Entry conditions are then important because the cost or difficulty of entry affects the extent to which potential competition is a constraint on active firms. Here are some questions one needs to ask in order to assess entry conditions: How many prospective firms have the ability to enter in a reasonable length of time? How long does it take to enter this market? How costly is entry? Will a new firm be at a disadvantage vis-à-vis established firms? Does a new firm have access to the same technology, the same products, and the same information as the established firms? Is it costly to exit the industry? You might wonder why the last question relates to entry conditions. Since an entrant is uncertain as to whether it will succeed, the cost of exit can be an important factor in the original decision to enter. A critical question relevant to antitrust analysis is whether, after firms conduct their entry and exit decisions, the market structure that results is socially optimal. 
In other words, if no artificial impediments constrain entry and an ample supply of possible entrants exists, will entry occur until the market has the welfare-maximizing number of firms? While that is the case under perfect competition, we will find that is not generally so for imperfect competition. Equilibrium under free entry Entry into a market means acquiring the ability to produce and sell a product. In almost every market, there is some cost to entry. For example, the median cost of opening a restaurant is $275,000.7 In contrast, entry

into the manufacturing of automobiles involves tens of millions if not hundreds of millions of dollars. Entry costs may include investment in a production facility or the purchase of a license (as it does in many professional occupations). Entry into a consumer market often entails extensive advertising of one’s product in order to introduce it to prospective customers. As an initial step in considering entry conditions, we begin by investigating the relationship between the cost of entry and the number of competing firms. Consider an industry in which all active and prospective firms have access to the same production technology and input prices, so that each firm has the same cost function. Further assume that all firms produce the same product. Let π(n) denote each firm’s profit per period when there are n active firms in the market. For example, if we model active firms as simultaneously choosing output, then the Cournot solution applies, so that π(n) = Pcqc − C(qc), where qc is the Cournot firm output (where a higher n implies a lower qc), Pc is the Cournot price (where a higher n implies a lower Pc), and C(qc) is each firm’s cost of producing qc (not including the cost of entry). We assume that firm profit, π(n), is decreasing in the number of firms, because more firms generally create a more competitive environment. Assume that active firms operate in this market forever, and r is each firm’s discount rate. If the industry has reached an equilibrium with n active firms, then each active firm’s sum of discounted profits is π(n)/r. Using the condition that firm profit decreases with increasing number of firms, figure 5.2 plots the present value of a firm’s profit stream (before netting out the cost of entry).

Figure 5.2 Effect of the Cost of Entry on the Free-Entry Equilibrium Number of Firms
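To make figure 5.2 concrete, here is a small numerical sketch, with symmetric Cournot firms facing linear demand and constant marginal cost. All parameter values (demand, cost, discount rate, and the entry costs K and K0) are illustrative assumptions chosen so that the example reproduces the pattern in the figure: five firms enter at the lower cost K and only three at the higher cost K0. The cutoff logic used here is the one formalized as condition 5.1 below.

```python
# A numerical sketch in the spirit of figure 5.2: per-firm Cournot profit pi(n),
# its present value pi(n)/r, and the number of entrants that an entry cost K
# supports. Demand, cost, discount-rate, and entry-cost values are illustrative
# assumptions chosen so that five firms enter at K and only three at a higher K0.

def cournot_profit(n, a=100.0, b=1.0, c=40.0):
    """Per-firm profit with n symmetric Cournot firms, inverse demand P = a - bQ,
    and constant marginal cost c (no fixed cost)."""
    return (a - c) ** 2 / (b * (n + 1) ** 2)

def free_entry_firms(K, r=0.05, max_firms=50):
    """Largest n such that pi(n)/r covers the entry cost K, which is the cutoff
    logic that condition 5.1 formalizes."""
    n = 0
    while n < max_firms and cournot_profit(n + 1) / r >= K:
        n += 1
    return n

for n in range(1, 8):
    print(f"n = {n}: pi(n)/r = {cournot_profit(n) / 0.05:,.0f}")
print("entrants when K = 1,800:", free_entry_firms(K=1_800))    # 5
print("entrants when K0 = 4,000:", free_entry_firms(K=4_000))   # 3
```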

Suppose that this market is new and prospective firms simultaneously decide whether to enter. A free-

entry equilibrium is defined by a number of entrants, denoted ne, such that entry is profitable for each of the ne entrants and entry would be unprofitable for each of the potential entrants who chose not to enter. If K denotes the cost of entry, the free-entry equilibrium number of firms is defined by
[π(ne)/r] − K ≥ 0 and [π(ne + 1)/r] − K < 0.  (5.1)
For the case in figure 5.2, ne equals 5. Note that a free-entry equilibrium does not literally mean entry is without cost but rather that there are no impediments to firms entering a market (such as government regulation). In other words, if a firm wants to spend K to enter a market, it is free to do so. Suppose condition 5.1 does not hold. Examining figure 5.2, if six or more firms were to enter, then [π(n)/r] − K < 0, so the first inequality in condition 5.1 does not hold. Thus, if ne > 5, then it is not a free-entry equilibrium. Each of these entrants would expect a negative present value from entering the market, so one or more of them would prefer not to enter. Thus, six or more active firms is too many. Alternatively, suppose that the second inequality in condition 5.1 does not hold; that is, n firms plan to enter and [π(n + 1)/r] − K > 0 (for figure 5.2, this implies n < 5). In that case, entry of an additional firm is profitable, so we would expect additional entry. This is not a free-entry equilibrium either. Only when both inequalities in condition 5.1 are satisfied is an equilibrium achieved. In that case, entrants have a positive (or zero) present value, and nonentrants would have a negative present value if they entered.8 The relationship between the cost of entry and the number of active firms at a free-entry equilibrium is quite straightforward. If the cost of entry rises, fewer entrants find entry profitable. As shown in figure 5.2, if the cost of entry is higher (say, at K0), there would be fewer entrants; only three firms would enter. While this model of entry is useful for revealing how entry costs influence the number of competing firms, it is unsatisfactory—it ignores several important factors relevant to the role of entry. To begin, it does not allow for asymmetries among firms, and it is these asymmetries that explain the observed inequality in market shares. In particular, asymmetries between existing firms and potential entrants are ignored—this model only examines initial entry into a market. In practice, a primary concern of antitrust analysis is with the entry conditions faced by potential entrants for a currently active market. Later in this chapter we will examine disadvantages that potential entrants might face compared to existing firms and explore to what extent existing firms can create such disadvantages. We can at least touch on this issue now with the use of figure 5.2. Suppose that, as shown in figure 5.2, the cost of entry is K, so that five firms enter. Further suppose that, after some number of periods, an unanticipated and permanent shift outward of the market demand curve occurs. This rise in demand causes each firm's profit to rise, so that the present value schedule shifts up (see figure 5.2). As a result, each active firm earns a profit higher than π(5), since it is now on the upper schedule. If the cost of entry is still K, this rise in demand will induce two additional firms to enter. However, for reasons that will be considered shortly, the entry cost of a new firm may be higher than that for earlier entrants. If instead it costs K0 for a firm to enter the market today, additional entry will not occur in response to the rise in demand. As a result, the price-cost margin is higher, because demand is stronger but entry does not occur to stop its rise. Does a free-entry equilibrium result in a socially optimal market structure? Consider the impact of another firm entering a market. In most oligopolies, entry will expand industry supply and thereby lower price.
Consumers benefit from the lower price by the amount of additional consumer surplus, which we denote by ΔCS (where Δ refers to change). Referring back to figure 3.2, a fall

in price benefits consumers because they pay less on the original units they were purchasing (rectangle PmBCPc) and earn surplus on the additional units they are purchasing (triangle BCD). While the fall in price benefits consumers, it harms incumbent firms in the market. Their profits decline, because they are receiving a lower price and are selling less due to the entrant capturing part of market demand. Let ΔπI denote the change in incumbent firms’ profits. Finally, consider the change in industry profit stemming from the entrant, which, as the entrant was presumed to be earning zero profit prior to entry, equals the entrant’s profit. That profit is denoted πE, though we need to subtract off the cost of entry K. Summing up, the change in social welfare is ΔSW = ΔCS + ΔπI + πE − K. Entry makes consumers better off (ΔCS > 0) and makes incumbent firms worse off (ΔπI < 0). As entry would occur only if it were profitable, then πE − K > 0. The first conclusion to draw is that the interests of an entrant will generally not coincide with those of society. For example, entry could be profitable (πE − K > 0) but also be welfare-reducing (ΔCS + ΔπI + πE − K < 0). To see how this could happen, suppose entry would only slightly raise industry supply and thereby only slightly lower price. Hence, ΔCS, while positive, is close to zero, and the change in industry profit, ΔπI + πE, while negative, is also close to zero. However, social welfare will be reduced because of the cost of entry:
ΔSW = ΔCS + (ΔπI + πE) − K ≈ −K < 0.
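To see this possibility in numbers, the sketch below compares a Cournot market with nine and ten firms; the linear demand, cost, and entry-cost values are illustrative assumptions, not taken from the text. The tenth firm covers its entry cost, yet total surplus falls.

```python
# A Cournot example in which entry is privately profitable but lowers welfare.
# All parameter values are illustrative assumptions, not taken from the text.
a, b, c, K = 100.0, 1.0, 20.0, 40.0      # inverse demand P = a - b*Q, unit cost c, entry cost K

def market(n):
    """Consumer surplus, industry profit, and per-firm profit with n Cournot firms."""
    q = (a - c) / (b * (n + 1))
    Q = n * q
    price = a - b * Q
    cs = 0.5 * b * Q ** 2                # consumer surplus under linear demand
    profit = (price - c) * q
    return cs, n * profit, profit

cs0, ind0, _ = market(9)                 # nine incumbent firms
cs1, ind1, pi_entrant = market(10)       # after one more firm enters

print(f"entrant's profit net of entry cost: {pi_entrant - K:7.1f}")   # positive: entry occurs
print(f"change in consumer surplus:         {cs1 - cs0:7.1f}")        # small gain
print(f"change in industry profit:          {ind1 - ind0:7.1f}")      # negative
print(f"change in social welfare:           {(cs1 + ind1) - (cs0 + ind0) - K:7.1f}")  # negative
```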
It is then possible for entry to occur (as it is profitable), but it lowers social welfare. The root of the problem is that the profitability of entry is partly arising from a “business stealing” effect. Part of an entrant’s profit is due to taking demand from existing firms. While that may make entry profitable, it does not add to welfare, because it is just a transfer of profit between the old firms and the new firm. If entry fails to result in much of an expansion of industry supply, then consumers are not made much better off, and social welfare declines because of the additional costs associated with entry. When firms offer homogeneous products, this business stealing effect is sufficiently strong that too much entry always occurs.9 More firms will enter than maximizes social welfare; in other words, entry occurs to the point that the last entrant lowers social welfare. However, if firms offer heterogeneous products, then it is possible that too little entry takes place. While the business stealing effect is still present—as part of the profitability of entry will come from taking some of the incumbent firms’ demand—market demand also expands, because entry has increased product variety. That additional product in the market benefits consumers, but if the new firm does not appropriate enough of that benefit, it may find entry unprofitable even though it would raise social welfare. The question of whether there is too much or too little entry was examined in the case of radio broadcasting.10 The study looked at the number of radio stations in 135 U.S. metropolitan markets in the early 1990s. A new radio station will capture part of the current set of listeners (the business stealing effect) but may also expand the listening base by offering new content (the expansion of product variety). For example, it could introduce a sports talk radio station where one was not previously offered. The study found evidence of excessive entry in this market. While the socially optimal number of radio stations was estimated to be about five, a market had on average more than eighteen stations. The general point is that, in an oligopolistic market, the private interests of a potential entrant will not coincide with social interests. Entry is profitable if and only if πE − K > 0, but entry raises social welfare if and only if ΔCS + ΔπI + πE − K > 0 or πE − K > − ΔCS − ΔπI. However, no such incongruity occurs under perfect competition. As a single entrant is small, it has a negligible impact on industry supply and price, which means ΔCS and ΔπI are both close to zero. Hence, under perfect competition,
ΔSW = ΔCS + ΔπI + πE − K ≈ πE − K.
Therefore, entry is profitable if and only if it raises social welfare. However, when competition is imperfect, there can be no presumption that a free-entry equilibrium results in the socially optimal number of firms. This observation could provide a basis for government intervention and is pertinent to merger analysis.

Sources of Concentration

Why are some markets highly concentrated? For those markets that are highly concentrated, should we be concerned? Is there a role for government intervention in monitoring conduct, breaking up firms, or regulating price and entry? As already touched on when discussing the market power and differential efficiency hypotheses, the appropriate public policy response depends on the source of the concentration but also, regardless of its source, on the extent to which concentration is the basis for future anticompetitive conduct. The latter issue will occupy us in ensuing chapters; here we investigate some possible sources of concentration.

Scale economies on the supply side and demand side

One explanation of why some industries are more concentrated than others is that some aspect of the production (supply) or consumption (demand) of the product makes larger size or larger market share more efficient. High concentration is then a reflection of efficiency. A cost function has economies of scale at some output if average cost is declining at that output. Figure 5.3 shows economies of scale for all outputs below the point at which average cost stops declining, because average cost is decreasing in the amount produced over that range. A lower average cost from producing at a higher rate comes from two general sources. First, the production technology may allow for a more effective use of inputs when output is higher through, for example, the specialization of labor and equipment. The classic example is automobile assembly. As the rate of output increases, workers can specialize more narrowly and become highly efficient in a number of tasks. For example, rather than install a complete engine, a worker is responsible for attaching one small part of it. Or if the output rate is sufficiently large, a firm will incur the fixed cost of installing robotics to replace specialized labor, which will further lower average cost. Second, a firm may be able to receive lower input prices when it purchases the larger volume associated with a higher rate of production. It is thought that part of Standard Oil's cost advantage in the oil refining industry a century ago came from the lower rate it negotiated with railroad companies by virtue of its size.

Figure 5.3 Scale Economies as a Barrier to Entry?

As the world's largest retailer for several decades, Walmart has benefited from economies of scale of both types.11 Its many stores justified investment in information technology, which lowered average cost; for example, the introduction of barcode readers in distribution centers in the late 1980s reduced the labor cost of processing shipments by half. In addition, as its purchases made up 15 to 25 percent of sales of such companies as Clorox, Kellogg, Playtex, and Revlon, Walmart had bargaining leverage that allowed it to obtain lower wholesale prices. As of 2015, Amazon supplanted Walmart as the world's largest retailer, and Amazon has its own scale economies to exploit to its advantage. With regard to economies of scale in production, a key concept is minimum efficient scale (MES), which is the minimum output required to achieve the lowest long-run average cost. In figure 5.3, the minimum efficient scale is the output at which average cost first reaches its minimum. A firm that operates near MES will have a cost advantage over smaller firms, which could prevent the latter from profitably operating. This result might imply that all firms need to produce at a rate near MES. However, if all firms are producing near MES, market demand (at a price around the average cost at MES) may only be large enough to require the output of a few firms. In a rough sense, only a few firms can operate profitably when MES is not small relative to market demand. Based on engineers' cost estimates, table 5.3 reports MES as a fraction of market size and does so at the level of a plant and a firm. Although the data are old and any such estimation is difficult, we can make some robust points. First, markets vary widely in how big a firm must be to achieve efficient scale. While MES is reached with only 1 percent of market size for shoes and cotton synthetic fibers, it takes 10–14 percent for beer brewing and 14–20 percent for refrigerators and freezers. Second, little evidence suggests that MES, by itself, is sufficient to explain high levels of concentration.12 For most markets, the four-firm concentration ratio is significantly higher than four times MES. For example, the four largest firms in steel works had 48 percent of the market but, if each was operating at MES, they would have only 12 percent of the market.

Only for beer brewing and refrigerators and freezers could scale economies possibly explain the extent of market concentration.

Table 5.3
Minimum Efficient Scale of Plants and Firms as Percentage of U.S. National Market, 1967

Industry                            MES Plant (% of market)   MES Firm (% of market)   Four-Firm Concentration Ratio
Beer brewing                                 3.4                      10–14                        40
Cigarettes                                   6.6                      6–12                         81
Cotton synthetic fabrics                     0.2                      1                            36
Paints, varnishes, and lacquers              1.4                      1.4                          22
Petroleum refining                           1.9                      4–6                          33
Shoes, except rubber                         0.2                      1                            26
Glass containers                             1.5                      4–6                          60
Cement                                       1.7                      2                            29
Steel works                                  2.6                      3                            48
Ball and roller bearings                     1.4                      4–7                          54
Refrigerators and freezers                  14.1                      14–20                        73
Storage batteries                            1.9                      2                            61

Source: F. M. Scherer, A. Beckenstein, E. Kaufer, and R. D. Murphy, The Economics of Multi-Plant Operation: An International Comparison Study (Cambridge, MA: Harvard University Press, 1975).

Higher efficiency from larger size can also manifest itself on the demand side by making for a more attractive product. While smartphones are the rage of the early twenty-first century, it was instead basic landline telephone service that was avant garde at the turn of the twentieth century. Prior to AT&T being made a regulated monopoly, cities had competing telephone companies. The only problem is that they did not interconnect, which meant that two people could communicate only if they were both subscribers to the same company’s network. This property naturally provides an advantage to size, because the more people who subscribe to a telephone company’s service, the more attractive that company’s service becomes. The size advantage just described comes from delivering a more valuable product to consumers rather than a lower cost to the firm; it is due to what is called a network effect. A product has a network effect when its value to a consumer is greater when more people consume it. Local telephone service around 1900 had a network effect, and many popular goods and services in recent times have them as well. To take a recent example, consider an online auction site. If you are interested in selling a good, you would prefer to post your item at a site with many buyers. Similarly, if you are a buyer, you would prefer to visit a site with a lot of people selling the item of interest. Thus, a buyer and seller will prefer to use an auction site with lots of buyers and sellers. Given the advantage to size, it is then not surprising that market concentration in online auctions is very high, with eBay dominating the United States and Yahoo! dominating Japan. Computer operating systems also have network effects, though the effect operates indirectly (but just as powerfully if not more so). The value of an operating system such as Microsoft Windows resides in the applications written for it. A software developer is more inclined to write an application for an operating system that has a lot of users because then there is the prospect of high demand for the application. Thus we find that if more people use an operating system, then more applications are written for it, which then makes the operating system more appealing, which serves to attract yet more consumers to purchase the operating system. This feedback loop ultimately results in a small number of

operating systems dominating a market. Markets with strong network effects have demand-side economies of scale that can create high levels of concentration. As for supply-side scale economies, there is an efficiency, as large size creates a more valuable product for consumers. While concentration is born out of an efficiency, it can still lead to anticompetitive conduct as firms take advantage of their dominant position in the market. How antitrust policy has dealt with such markets is examined in chapter 9. Barriers to entry The traditional wisdom in industrial organization is that serious and persistent monopolistic deviations of price from cost are likely only when two conditions coexist: sufficiently high seller concentration to permit (collusive) pricing and high barriers to the entry of new competition.13 We have reviewed collusive pricing, but what are barriers to entry? Perhaps no other subject has created more controversy among industrial organization economists than that of barriers to entry. At one extreme, some economists argue that the only real barriers are government related. Examples include a franchise given by government to a local cable television company and the requirement that to operate a New York City taxicab, one must own a government-issued medallion.14 A patent is another example, in that it gives a firm a temporary monopoly.15 At the other end of the spectrum, some economists argue that almost any large expenditure necessary to start up a business is a barrier to entry. Given this state of affairs, we cannot hope to provide a definitive answer. Our objective is to discuss the various views and definitions and evaluate each of them. A pioneer in this area (and a source of much of the controversy), Joe Bain defined a barrier to entry as “the extent to which, in the long run, established firms can elevate their selling prices above minimal average costs of production and distribution … without inducing potential entrants to enter the industry.”16 One immediate problem with this definition is that it is a tautology: A barrier to entry is said to exist if existing firms earn above-normal profit without inducing entry. In other words, Bain defines a barrier to entry in terms of its outcome. One gets a better idea of what Bain has in mind when he states what he considers to be barriers to entry. These include scale economies, the capital cost requirements of entry, government restrictions like tariffs and patents, and absolute cost advantages of existing firms. Sources of the latter include a better technology (protected through patents or trade secrets), control of low-cost raw material supplies, and the learning curve. The learning curve refers to when a firm has lower cost by virtue of having discovered improvements in its production process through experience. These barriers are quite diverse and certainly entail very different welfare implications. A government restriction like a tariff is typically welfare-reducing. It is then a “bad” barrier to entry. In contrast, superior efficiency of existing firms due to a better technology is a “good” barrier to entry. No reasonable economist believes that society is better off if existing firms are made less efficient. However, at least in the short run, welfare would be higher if existing firms were forced to share their know-how with new entrants. 
The important point to make here is that a barrier to entry, as defined by Bain, need not imply that its removal would raise welfare.17 A very different definition was put forth by Nobel laureate George Stigler: “A barrier to entry may be defined as a cost of producing (at some or every rate of output) which must be borne by firms which seek to enter an industry but is not borne by firms already in the industry.”18 The emphasis of this definition is on differential costs between existing firms and entrants. For example, suppose that later entrants have to

advertise their product to consumers, while existing firms do not. This cost of advertising is a barrier to entry according to Stigler’s definition (and also Bain’s). Stigler’s definition is narrower than Bain’s in that some things are barriers according to Bain but not according to Stigler (for example, scale economies), although the reverse is not true.19 A third definition comes from Christian von Weizsäcker: “Barriers to entry into a market … can be defined to be socially undesirable limitations to entry of resources which are due to protection of resource owners already in the market.”20 While this definition has the right focus in relating entry barriers to welfare, it also defines a barrier to entry in terms of its ultimate outcome. It becomes operationally more useful if we can ex ante identify what those specific factors in an industry are that reduce welfare. In the remainder of this section, we focus on the controversy related to the definition of entry barriers. The discussion has largely revolved around what Bain has labeled barriers to entry. For example, a large amount of capital necessary for entry is often cited as a source of cost disadvantage faced by new entrants. Richard Posner strongly disagrees with this position: Suppose that it costs $10,000,000 to build the smallest efficient plant to serve some market; then, it was argued, there is a $10,000,000 “barrier to entry,” a hurdle a new entrant would have to overcome to serve the market at no disadvantage vis-àvis existing firms. But is there really a hurdle? If the $10,000,000 plant has a useful life of, for example, ten years, the annual cost to the new entrant is only $1,000,000. Existing firms bear the same annual cost, assuming that they plan to replace their plants. The new entrant, therefore, is not at any cost disadvantage at all.21

Posner does agree with a somewhat more subtle view of the capital requirements barrier, which is that the uncertainty of a new entrant's prospects may force it to pay a risk premium to borrow funds exceeding the premium that existing firms have to pay. Others have observed that when truly huge amounts of capital are required, the number of possible entrants who can qualify is greatly reduced. Perhaps the most controversial entry barrier is that of scale economies. To get a feel for this controversy, we have constructed a fictitious conversation between Joe Bain and George Stigler.22 The conversation concerns a market for which the firm average cost function is as shown in figure 5.3. Average cost declines until the minimum efficient scale is reached, after which average cost is constant. Suppose that there is a single firm in the industry and it is producing q0 and pricing at P0. Note that its price exceeds average cost.

Joe: As is apparent from figure 5.3, scale economies are a barrier to entry. The existing firm is pricing above cost, yet entry does not occur, as it would be unprofitable.

George: But why do you say that entry is unprofitable?

Joe: The reason is quite obvious. If a new firm comes in and produces at minimum efficient scale, the total industry supply would be q0 plus the minimum efficient scale output. Because price then falls below average cost, entry is unprofitable. Of course, a new firm could instead produce at a lower rate and thereby reduce the extent to which price is depressed. However, because average cost is declining, the new firm would be at a considerable cost disadvantage and once again incurs losses. In either case, entry is unprofitable.

George: Why do you assume that the new firm expects the existing firm to maintain its output? Why can't the new firm enter and slightly undercut the existing firm's price of P0? It would then get all of market demand and earn profit approximately equal to [P0 − AC(q0)]q0. Entry is profitable!

Joe: Don't be silly, George. Do you really believe that a new firm could induce all consumers to switch to buying its product by setting a slightly lower price?

George: Why not?

Joe: There are lots of reasons. For example, we know that consumers are hesitant to switch brands. Such brand loyalty makes sense, as consumers have a lot less information about a new brand's quality, since they lack personal experience. To offset brand loyalty, a new firm would have to offer a considerable price discount in order to lure consumers away.

George: Joe, we finally agree on something. What you've done is pointed out the real barrier to entry—that consumers have a preference for the existing firm's product. To overcome brand loyalty, a new firm must initially sell at a discount or perhaps even give away the product in the form of free samples. The cost of getting consumers familiar with its product is a measure of the true barrier to entry, and it meets my definition. Scale economies are a red herring.

If you were paying attention, you should be quite confused about entry barriers. Join the crowd! The concept of barriers to entry lacks clarity, and one is never sure what to do with it. It is certainly not clear what the welfare implications are of any particular thing called a "barrier to entry." The most unfortunate part is that some economists and antitrust lawyers throw the term entry barrier around as if only one accepted and meaningful definition existed. The best advice we can offer is to perform a two-stage inquiry. In the first stage, carefully examine the assumptions underlying the particular argument that something is a barrier. Determine whether it is indeed true that existing firms can maintain price above cost without inducing entry. In the second stage, consider whether there is a policy that could "remove" the barrier and improve social welfare.

Contestability and sunk costs

A different perspective on entry conditions is provided by the theory of contestable markets due to William Baumol, John Panzar, and Robert Willig.23 A market is perfectly contestable if three conditions are satisfied. First, new firms face no disadvantage vis-à-vis existing firms. This condition means that new firms have access to the same production technology, input prices, products, and information about demand. Second, there are zero sunk costs; that is, all costs associated with entry are fully recoverable. A new firm can then costlessly exit the industry. If entry requires construction of a production facility at cost K, then sunk costs are zero if, on exiting the industry, a firm can sell the facility for K (less any amount due to physical depreciation). If no market exists for such a facility and it must be sold for scrap at price R, then sunk costs equal K − R. The third condition is that the entry lag (which equals the time between when a firm's entry into the industry is known by existing firms and when the new firm is able to supply the market) is less than the price adjustment lag for existing firms (the time between when a firm desires to change price and when it can change price).24 The central result is that if a market is perfectly contestable, then an equilibrium must entail a socially efficient outcome. For example, suppose that there are scale economies as in figure 5.3 and a firm prices at P0, which exceeds P* (the price associated with average cost pricing).25 If the market is perfectly contestable, then a new firm could enter, undercut the price of P0 by a small amount, and earn profit of approximately [P0 − AC(q0)]q0.
Implicit in it earning that profit is that its product and cost function are the same as the incumbent firm’s and the incumbent firm is unable to adjust its price prior to the new firm selling. Furthermore, when the incumbent firm eventually does respond by adjusting its price, the new firm is assured of being able to costlessly exit the industry with its above-normal returns intact, because sunk costs equal zero. This type of hit-and-run entry will occur unless the incumbent firm prices at P*. Contestability is a theory for which potential competition plays the dominant role in generating competitive behavior.

If sunk costs are positive, then a new firm cannot costlessly exit. If this exit cost is sufficiently large, then it may swamp the above-normal profit earned before the incumbent firm responds, in which case entry does not occur. The more that entry costs are sunk, the higher can incumbent firms price above cost without inducing entry. By most definitions, sunk cost is a barrier to entry. While the theory of contestable markets is controversial, it has been instrumental in antitrust analyses, reducing their emphasis on concentration and taking greater account of potential competition. Dynamic Competition The previous section focused on how structural factors, such as entry conditions, affect market concentration and industry performance (for example, the price-cost margin). Now we turn to the role of firm conduct. An obvious avenue for conduct to affect concentration is through the decision of firms to merge. A merger between two firms reduces the number of sellers by one and raises the HHI as the merged firm has the combined market share of the two original firms. Of course, that is just the start of the story. Whether that higher concentration persists depends on various factors including entry conditions. But mergers are the topic of chapter 6. Here we consider more subtle ways for firms to impact concentration. When a firm acts to improve its future position in the market, it is said to engage in dynamic competition. The most extreme form of dynamic competition involves reducing the number of rival firms, which could entail driving some existing competitors out of the industry or deterring prospective firms from entering (referred to as strategic entry deterrence). Limit pricing is a pricing strategy to discourage entry, while predatory pricing is intended to encourage exit. Another strategy is to invest in cost-reducing capital so as to achieve a future cost advantage vis-à-vis one’s competitors, or to enter unprofitable product niches in order to stave off entry. These and other forms of dynamic competition will be examined here and in later chapters. Some of this behavior is legal (and could even enhance efficiency), while some runs afoul of antitrust laws. The latter will draw our attention in later chapters, especially chapter 8 on monopolization practices.26 Limit Pricing Two central questions underpin the literature on strategic entry deterrence. First, how can incumbent firms affect a potential entrant’s decision to enter? Second, if they can influence that decision, how does this ability affect the behavior of incumbent firms? We can also ask, even if entry does not occur, does the threat of entry induce incumbent firms to act more competitively? Bain-Sylos theory of limit pricing The earliest model of limit pricing is due to Joe Bain and Paolo Sylos-Labini.27 A central assumption in this model is what is known as the Bain-Sylos postulate: The entrant believes that, in response to entry, each incumbent firm will continue to produce at its preentry output rate.

This postulate is illustrated in figure 5.4. Assume that there is a single incumbent firm (or, alternatively, a perfectly colluding cartel) and that it is producing q0 and selling at P0 prior to entry. Given the Bain-Sylos postulate, the output of the entrant will simply add to q0, causing price to fall. In short, the line segment AB is the residual demand facing the potential entrant (where the entrant’s origin is at q0). For convenience, we can shift the residual demand curve AB leftward to P0B′, thereby making the origin for the entrant fall on the

vertical axis.

Figure 5.4 Residual Demand Curve under the Bain-Sylos Postulate

Notice that the incumbent firm can manipulate the residual demand curve by the choice of its preentry output. For example, in figure 5.4, higher output than q0 would imply a lower residual demand curve (that is, the dashed curve CD). Hence, the incumbent firm could choose its output so that the residual demand curve facing the potential entrant would make entry marginally unprofitable. This situation is shown in figure 5.5. The long-run average cost curve for a typical firm in this industry is AC(q). This average cost function holds for both incumbent firms and new firms, so that there is no barrier to entry in terms of an absolute cost advantage. As shown, average cost declines until the minimum efficient scale is reached and then becomes constant.

Figure 5.5 Determination of the Limit Price: Bain-Sylos Model
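The construction shown in figure 5.5 can be computed directly. The sketch below uses assumed linear demand and cost parameters (a declining average cost of the form F/q + c rather than the exact curve in the figure; none of the values are from the text) and searches for the smallest preentry output at which the entrant's best attainable profit on the residual demand curve is zero; the associated price is the limit price.

```python
# Numerical sketch of the limit-price construction (assumed parameter values).
# Inverse demand P = a - b*Q; every firm's cost is F + c*q, so average cost
# declines in q. Under the Bain-Sylos postulate, the entrant takes the
# incumbent's output q0 as given and faces residual demand a - b*(q0 + q).
import numpy as np

a, b, c, F = 100.0, 1.0, 20.0, 100.0

def entrant_best_profit(q0):
    """Entrant's maximal profit on the residual demand curve, given q0."""
    q = np.linspace(0.01, a / b, 50000)
    price = a - b * (q0 + q)
    return np.max(price * q - (F + c * q))

# Find the smallest incumbent output at which no entrant output is profitable.
for q0 in np.linspace(0.0, a / b, 4001):
    if entrant_best_profit(q0) <= 1e-6:
        print(f"limit output ~ {q0:.1f}, limit price ~ {a - b * q0:.1f}")
        print(f"tangency formula: limit price = c + 2*sqrt(b*F) = {c + 2 * (b * F) ** 0.5:.1f}")
        break
```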

The key question is, can the incumbent firm create a residual demand curve for the entrant such that entry is unprofitable? The answer is yes. The residual demand curve that is tangent to AC(q) is one for which there is no output for a new firm that gives it positive profit. Working backward from that residual demand curve, a particular incumbent firm output is needed to generate it. The price associated with that output is the limit price: it is the maximum price that deters entry. We then find that if the incumbent firm prices at the limit price, it deters entry and earns above-normal profit, since the limit price exceeds its average cost. The key assumption in this analysis is that a potential entrant expects the incumbent firm to respond to entry by maintaining its output at its preentry rate. Industrial organization economists generally consider the Bain-Sylos postulate to be a "bad" assumption. It is instructive to learn why it is bad and what is a better way to think about firm behavior. To begin, it is generally thought to be undesirable to assume something about how an agent behaves other than that he or she acts to maximize his or her own well-being. In this light, the Bain-Sylos postulate has been criticized in that it assumes how the incumbent firm will respond to entry (or how the potential entrant believes the incumbent will respond to entry, which is just as bad). Rather than make such an assumption, we want to derive how a profit-maximizing incumbent firm will respond to entry. This methodological criticism aside, the Bain-Sylos postulate is a bad assumption because an incumbent firm will typically not choose to behave in the manner assumed. In response to entry, an incumbent firm will want to reduce its output below its preentry level. Recall from the Cournot model that a firm's optimal output rate is lower, the more its competitors produce. Because entry entails more output being provided by one's competitors, the profit-maximizing output of the incumbent firm should be lower after entry, not the same. In the Bain-Sylos model, entry is deterred only because the potential entrant believes the incumbent

firm’s threat to maintain its output. Yet the threat is not credible! An even more basic point is relevant here: The entry decision is wholly independent of the incumbent firms’ preentry output. To see this crucial point, consider a three-stage model in which the incumbent firms choose their outputs in stage 1, the potential entrant decides whether to enter in stage 2, and the active firms (which include the potential entrant if it entered) simultaneously choose output in stage 3 (that is, they engage in Cournot competition). The profitability of entry depends on a new firm’s profit at the Cournot solution in stage 3 as well as on the cost of entry. The key question to ask is: How does the incumbent firm’s output in stage 1 affect the Cournot solution achieved in stage 3? If the incumbent firm’s preentry output is to affect the entry decision, it must affect the cost of entry and/or the postentry equilibrium. If the postentry demand and cost functions are independent of past output decisions, then the postentry equilibrium will be independent of the incumbent firms’ preentry output, which implies that the entry decision is independent of preentry output.28 Strategic theories of limit pricing For an incumbent firm to affect entry decisions, its preentry behavior must somehow affect the profitability of entry. What might cause this effect to occur? The presumption is that if entry occurs, all active firms will achieve some oligopoly solution. As we found in chapter 4, the determinants of an oligopoly solution include the market demand curve, firms’ cost functions, and the number of firms (as well as, perhaps, firms’ discount rates). It follows that for incumbent firms to influence the profitability of entry, they must affect the postentry demand function, their cost functions, or a new firm’s cost function. A central goal in the literature on dynamic competition is to identify and explore the intertemporal linkage between incumbent firms’ preentry behavior and the postentry structure in terms of cost and demand functions. We proceed as follows. In this section, we describe some of the ways in which incumbent firms’ preentry output or price decisions can affect the postentry demand or cost functions and thereby affect the profitability of entry. Having established a linkage between preentry decisions and the postentry outcome, the next step is to examine how an incumbent firm might exploit this linkage. This will be postponed until the following section, where we investigate the use of capacity to deter entry. Though capacity is the entrydeterring instrument, the method of analysis and general insight applies as well to when preentry price or output is the instrument used by incumbent firms. In our discussion, let us assume there is just one incumbent firm and one potential entrant. When trying to find ways in which the incumbent firm’s preentry output can affect the postentry equilibrium, we need to think of reasons that past output would affect current demand or cost functions. One source of this intertemporal linkage is adjustment costs. In many manufacturing processes, costs are incurred when changing the rate of production. To increase output, a firm may need to bring new equipment on line. Installation of this equipment can require shutting down the production process, which is costly in terms of lost output. To reduce output, a firm may need to lay off workers, which is also costly. 
Adjustment costs are those costs incurred when changing the firm’s rate of production.29 An example of a cost function with adjustment costs is
C(qt) = cqt + ½(qt − qt−1)²,
where qt is the period t output, qt−1 is output from the previous period, and c is the per-unit production cost. The cost of adjusting output is measured by ½(qt − qt−1)². Notice that it is minimized when qt = qt−1, at which point no change in output occurs, and it is larger the bigger the change in output.

When the incumbent firm incurs a cost to adjusting its production rate, we want to argue that its preentry output will affect the profitability of entry. The more a firm produces today, the higher its profit-maximizing output will be in the future. Because of the cost to adjusting its output, a firm tends to produce output close to its past output. Thus, if the postentry equilibrium is the Cournot solution, then the incumbent firm will produce at a higher rate after entry, the more it produced in the preentry period. As shown in figure 5.6, increasing its preentry output shifts out the incumbent firm’s postentry best reply function. A rise in its preentry output then shifts the postentry equilibrium from point A to B. Because the incumbent firm produces more at B, postentry profit for the new firm is reduced. An incumbent firm may then be able to deter entry by producing at a sufficiently high rate prior to the entry decision.

Figure 5.6 Effect of the Preentry Output on the Postentry Equilibrium with Adjustment Costs
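A minimal computation of the effect shown in figure 5.6, using the quadratic adjustment term from the example above; the demand and cost parameters are illustrative assumptions. Raising the incumbent's preentry output raises its postentry Cournot output and lowers the entrant's output and profit.

```python
# Illustration of figure 5.6: with a quadratic adjustment cost, a higher preentry
# output shifts the incumbent's postentry best reply out. All parameter values
# are assumed for illustration only.
a, b, c = 100.0, 1.0, 20.0               # inverse demand P = a - b*(q1 + q2), unit cost c

def postentry_equilibrium(q_pre, iters=500):
    """Postentry Cournot equilibrium when the incumbent (firm 1) bears an
    adjustment cost 0.5*(q1 - q_pre)^2 in addition to c*q1."""
    q1, q2 = 0.0, 0.0
    for _ in range(iters):                             # best-response iteration
        q1 = (a - c - b * q2 + q_pre) / (2 * b + 1)    # incumbent's best reply
        q2 = (a - c - b * q1) / (2 * b)                # entrant's best reply
    price = a - b * (q1 + q2)
    return q1, q2, (price - c) * q2                    # entrant's postentry profit

for q_pre in (0, 40, 80):
    q1, q2, pi_e = postentry_equilibrium(q_pre)
    print(f"preentry output {q_pre:2d}: incumbent {q1:5.1f}, entrant {q2:5.1f}, entrant profit {pi_e:6.0f}")
```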

By the preceding analysis, one can motivate the Bain-Sylos postulate by the assumption that the incumbent firm faces infinitely large adjustment costs, so that it would be too costly to change its output in response to entry. This is obviously a very extreme assumption, though it does make the Bain-Sylos postulate an assumption about the structure of the model rather than one about behavior. What this analysis has shown is how the existence of adjustment costs can create a linkage between an incumbent firm’s preentry output and the postentry equilibrium. Given that there is a way in which an incumbent firm can affect the profitability of entry, the next question is whether it is optimal to use it to deter entry. It is possible that it could take a very high preentry output to make entry unprofitable. In that case, an incumbent firm might prefer to produce a lower output and let entry occur. A detailed treatment of

the decision to deter entry is provided later when we explore capacity as the entry-deterring instrument. Here we briefly mention several other ways in which the incumbent firm’s preentry output or price can affect the profitability of entry. Some production processes have a learning curve; that is, the more experience a firm has with the process, the more ways it will find to lower cost. One reason is that intricate labor operations, such as in aircraft assembly and the manufacture of computer components, become more efficient as workers gain experience. Using cumulative past output as a measure of experience, an example of a cost function with a learning curve effect is
C(qt) = c(Yt)qt, with c′(Yt) < 0,
where Yt is the sum of past outputs: Yt = q t−1 + q t−2 + …. Note that the more a firm produced in the past, the lower its marginal cost will be today. Hence the higher an incumbent firm’s preentry output becomes, the lower its marginal cost after entry will be, which means the more it will produce in the postentry equilibrium. Entry deterrence could then occur by setting a high preentry output, which would lower the incumbent firm’s marginal cost and give it a cost advantage vis-à-vis a new firm.30 Earlier in this chapter, we mentioned that consumers are hesitant to switch brands of a good, because certain costs are associated with doing so. To switch banks, you must close out your account, which entails a certain amount of time and effort. If the quality of an untried brand is uncertain, then a consumer will have to incur costs associated with learning about the new brand. Such costs are referred to as switching costs. The demand for a new firm’s product comes from consumers who currently buy from existing firms and consumers who are currently not in the market. If there are switching costs, a new firm is at a disadvantage when it comes to competing for the first type of consumer. It will have to offer those consumers a price discount to induce them to switch because of the costs associated with doing so. No such price discount has to be offered to consumers who are not currently buying. Incumbent firms can make entry less profitable by increasing the number of consumers who have attached themselves to their brands. This is most easily achieved by offering a low preentry price. A new firm would then have to offer a large price discount to get much of any demand for its product. The price discount required may be so large as to make entry unprofitable.31 In the preceding discussion we considered how the incumbent firm’s preentry output can affect the postentry demand function or its postentry cost function. Given this linkage, the incumbent firm may be able to make the postentry equilibrium sufficiently unattractive so as to deter entry. Alternatively, if an entrant is uncertain about what demand and cost will be after entry, the incumbent firm may be able to deter entry by influencing a potential entrant’s beliefs about postentry demand or cost functions rather than the actual demand or cost functions themselves. To consider this possibility, suppose that the potential entrant is uncertain of the incumbent firm’s marginal cost. Assume that entry is profitable when the incumbent firm’s marginal cost is comparable to or higher than that of the potential entrant. If instead the incumbent firm has a considerable cost advantage, entry is unprofitable. Because an incumbent firm’s marginal cost affects its profit-maximizing output, one would expect its preentry output to provide information about its cost. A potential entrant would tend to infer from a high preentry output (or a low preentry price) that the incumbent firm has low marginal cost and, therefore, entry is unprofitable. Of course, the incumbent firm might then produce at a high rate even if its marginal cost is high, in order to try to mislead the potential entrant into believing that it has low cost. Hence, the incentive to signal that it has low cost results in the incumbent firm producing at a higher rate

than if there were no threat of entry. This signaling phenomenon can keep an incumbent firm’s price below the monopoly level, even if entry never takes place.32 In summary, two important conclusions derive from the limit-pricing literature. First, preentry output or price can affect the postentry equilibrium in different ways and thereby influence the decision to enter. Second, even if entry is deterred, the threat of entry will generally induce incumbent firms to set a low price. Given the possibility of limit pricing, we next turn to providing some evidence of incumbent firms using price to deter entry in the airline industry. Limit pricing in the airline industry An implication of the theory of limit pricing is that, in response to a heightened threat of entry, incumbent firms will lower prices for the purpose of making entry less likely. This hypothesis was examined for airline route markets for 1993–2004.33 The heightened threat took the form of Southwest Airlines, which was an aggressive entrant during that period, entering two cities but not yet offering service between them. For example, as depicted in figure 5.7, Southwest is shown as serving a route involving Cleveland and a route involving Baltimore but not yet offering service between Baltimore and Cleveland. The presumption is that Southwest’s presence in cities A and B makes it more likely that it will start providing service between A and B, and the evidence is supportive of that presumption. When Southwest operates in two cities, it was estimated there was an 18.5 percent chance that it would enter the route market between those two cities in the next quarter and, in addition, this probability is seventy times higher compared to when it is only serving one of the two cities. In sum, if an incumbent firm serving route AB finds that Southwest has established a presence in city A and in city B, it should indeed see a heightened threat of Southwest entering route AB.

Figure 5.7 Airline Route with a Potential Entrant

When Southwest has entered both endpoints of a route but is not yet serving the route, existing airlines on

that route were found to lower their fares by 10–14 percent. While this response is consistent with limit pricing, it is possible that the price decline is instead intended to prepare for Southwest’s entry by building customer loyalty. If it was done for that purpose then the price response should be greater when entry is more likely. On the contrary, the price response was found to be largest when the likelihood of entry is neither low nor high. It seems reasonable to assume that when the likelihood of entry is moderate, aggressive preentry pricing might be most influential on Southwest’s entry decision and, therefore, limit pricing is most profitable. What happened if limit pricing failed and Southwest entered? On top of the decline in fares of 10–14 percent prior to entry, fares fell another 30–45 percent for a total average price decline of 44–60 percent. If you’re a consumer, you’ve got to love Southwest! Investment in Cost-Reducing Capital In a seminal paper, Avinash Dixit provided some fundamental insight that has fueled much of the research on strategic entry deterrence.34 Because his paper is representative of the type of analysis and insight found in models of dynamic competition, we consider it in some depth. Let us begin with a description of the technology available to firms. To operate at all in this industry, a firm must incur some fixed cost, which we denote by K. To produce one unit, a firm needs one unit of capacity, which costs r, and variable inputs that cost w. If a firm currently has a capacity stock of x, its cost function is then
C(q) = K + rx + wq if q ≤ x, and
C(q) = K + rx + wq + r(q − x) = K + (w + r)q if q > x.
Given preexisting capacity stock of x, a firm has a fixed (and sunk) cost of K + rx. To produce q when it does not exceed capacity requires only variable inputs that cost wq. However, if output is in excess of capacity, one must buy (q − x) additional units of capacity. This costs r(q − x), and therefore total cost is K + wq + rx + r(q − x) = K + (w + r)q. The game has three stages and two firms—one incumbent firm and one potential entrant. Firms have homogeneous products. Initially, neither firm has any capacity. Stage 1: The incumbent firm invests in capacity, denoted x. Stage 2: The potential entrant observes the incumbent firm’s capacity and decides whether to enter. Entry costs K > 0. Stage 3: Active firms simultaneously choose how much to invest in capacity and how much to produce. The incumbent firm carries x from stage 1. If entry occurred, then two firms are active in stage 3. Otherwise there is just one—the incumbent firm. To derive an equilibrium for this game, let us begin with the thought experiment of what would happen in stage 3 if entry occurred and how this would depend on the incumbent firm’s initial investment in capacity. The presumption is that each firm chooses a quantity so as to maximize its profit, which means a Nash equilibrium has been reached. The point we want to argue is that, generally, the higher is the initial capacity of the incumbent firm, the higher the incumbent firm’s postentry Nash equilibrium quantity will be and the lower the new firm’s postentry Nash equilibrium quantity will be.

Figure 5.8 shows the marginal cost curve that the incumbent firm faces in stage 3, given an initial capacity of x1. Marginal cost is w if the incumbent firm produces below its initial capacity, and it jumps to w + r if that firm produces above x1, as it must add capacity, which costs r per unit. If the incumbent firm's initial capacity is instead some larger amount, then its marginal cost is lower for all quantities between x1 and that larger capacity. Generally, when a firm's marginal cost is lower, it desires to produce more. As a result, in the postentry game, the incumbent firm finds it optimal to produce at a higher rate when its initial capacity is at the larger level than when it is x1. Because this higher quantity means that the new firm can expect a lower market price for any quantity that it would produce, it chooses to produce a lower amount. In other words, higher initial capacity for the incumbent firm credibly commits it to producing more in stage 3, because its marginal cost is lower. This commitment induces a new firm to produce less. We can further conclude that a new firm's postentry profit is lower, the greater is the initial capacity investment (in stage 1) of the incumbent firm, because a firm's profit is lower when its rival produces more.

Figure 5.8 Incumbent Firm’s Marginal Cost Curve
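The commitment effect in figure 5.8 can be illustrated numerically. The sketch below (with assumed demand and input-price values, not taken from the text) computes the postentry Cournot equilibrium for several levels of the incumbent's initial capacity x; a larger x lowers the incumbent's marginal cost over a wider range, raises its equilibrium output, and reduces the entrant's profit.

```python
# Numerical sketch of the commitment effect in figure 5.8 (assumed parameter values).
# Inverse demand P = a - b*(q1 + q2); variable input cost w per unit of output;
# capacity costs r per unit. The incumbent already holds capacity x, so its marginal
# cost is w up to x and w + r beyond x; the entrant must buy all of its capacity,
# so its marginal cost is w + r throughout.
a, b, w, r = 100.0, 1.0, 10.0, 20.0

def incumbent_best_reply(q2, x):
    q_low = (a - w - b * q2) / (2 * b)        # optimum if capacity is not binding (MC = w)
    q_high = (a - w - r - b * q2) / (2 * b)   # optimum if extra capacity must be bought (MC = w + r)
    if q_high >= x:
        return q_high
    if q_low <= x:
        return q_low
    return x                                  # otherwise produce exactly at installed capacity

def entrant_best_reply(q1):
    return max((a - w - r - b * q1) / (2 * b), 0.0)

def postentry_equilibrium(x, iters=200):
    q1, q2 = x, 0.0
    for _ in range(iters):                    # best-response iteration
        q1 = incumbent_best_reply(q2, x)
        q2 = entrant_best_reply(q1)
    price = a - b * (q1 + q2)
    return q1, q2, (price - w - r) * q2       # entrant's operating profit (gross of its fixed cost K)

for x in (0.0, 30.0, 45.0):
    q1, q2, pi_e = postentry_equilibrium(x)
    print(f"initial capacity {x:4.0f}: incumbent {q1:5.1f}, entrant {q2:5.1f}, entrant profit {pi_e:6.1f}")
```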

What we have described is a linkage between the incumbent firm’s preentry capacity investment and the postentry profit for a new firm. Through this linkage, the incumbent firm may have an instrument for deterring entry. By having sufficiently great capacity, the incumbent firm may be able to depress postentry profit for a new firm to the point where entry would be unprofitable. To explore whether the incumbent firm will choose to deter entry, we consider a simplified version of the Dixit game, as shown in figure 5.9. To read this diagram, start from the top and read downward. Initially, the incumbent firm has two choices: low capacity (Low) and high capacity (High). Thus, we are restricting x1 to being one of two values. One can think of building a small plant or a large plant. After it makes its investment decision, the potential entrant decides whether to enter. If entry does not occur, the incumbent firm earns the profit associated with being a monopolist. This is specified to be 15 when the incumbent

chooses Low in the capacity stage and 12 when it chooses High. Without the threat of entry, it would then choose to invest a low amount in capacity.

Figure 5.9 Modified Dixit Game

If entry does take place, then firms make simultaneous quantity decisions where each firm can decide to produce at a low rate (qL) or a high rate (qH). A profit matrix lists the profits for the two firms from the four possible quantity pairs. The first number in a cell of a matrix is the profit for the incumbent firm, whereas the second number is the profit for the new firm (recall that K is the cost of entry). For example, if the incumbent firm chooses x1 = Low, entry occurs, and both firms produce qH, then the profits of the incumbent firm and the new firm are 5 and 5 − K, respectively. Implicit in these profit numbers is that the incumbent firm prefers to produce at a high rate when it invests heavily in capacity (as its marginal cost is low) and prefers to produce at a low rate when it does not invest heavily in capacity (as its marginal cost is high for a high rate of production). If x1 = Low and entry occurs, the postentry (Nash) equilibrium has both firms producing qL. Given that the new firm produces qL, the incumbent firm earns 8 by producing qL and 7 by producing qH, so it prefers qL. Given that the incumbent firm produces qL, the new firm earns 8 − K by producing qL and 7 − K by producing qH, so it prefers qL. You should prove to yourself that the other three possible outcomes, (qL, qH), (qH, qL), and (qH, qH), are not Nash equilibria. If instead x1 = High, then the postentry equilibrium has the incumbent firm producing qH and earning 7, and the new firm producing qL and earning 6 − K. How will the potential entrant behave? Well, it depends on the cost of entry and the incumbent firm’s capacity. If x1 = Low, then the postentry equilibrium profit for a new firm is 8 − K. Hence, if x1 = Low, entry is profitable (and will occur) if and only if K < 8 (note that the potential entrant earns 0 from not entering). If x1 = High, then postentry profit is 6 − K, so that it enters if and only if K < 6.
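The entry decision and the incumbent's capacity choice can be checked by backward induction. The sketch below encodes only the equilibrium payoffs just derived (monopoly profits of 15 and 12; postentry profits of 8 and 8 − K under Low, and 7 and 6 − K under High) and reproduces the case analysis that follows; the three values of K are illustrative.

```python
# Backward-induction sketch of the modified Dixit game in figure 5.9, using only the
# equilibrium payoffs reported in the text: monopoly profit 15 (Low) or 12 (High);
# postentry equilibrium profits 8 and 8 - K under Low, 7 and 6 - K under High.
def solve(K):
    outcomes = {}
    for x1, monopoly, duopoly_inc, duopoly_ent in [("Low", 15, 8, 8 - K), ("High", 12, 7, 6 - K)]:
        enters = duopoly_ent > 0                          # stage 2: enter only if postentry profit is positive
        outcomes[x1] = (duopoly_inc if enters else monopoly, enters)
    best = max(outcomes, key=lambda x1: outcomes[x1][0])  # stage 1: incumbent picks the better capacity
    return best, outcomes

for K in (9, 7, 5):   # high, intermediate, and low entry costs (Cases 1-3 below)
    best, outcomes = solve(K)
    print(f"K = {K}: incumbent chooses {best}, entry occurs: {outcomes[best][1]}, payoffs: {outcomes}")
```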

The final step in deriving equilibrium behavior for this three-stage game is to determine the optimal behavior of the incumbent firm at the capacity stage. There are three cases to consider. Case 1 is when the cost of entry is high: K > 8. Because entry is unprofitable regardless of the incumbent firm's capacity, the incumbent firm will choose the same capacity as if there were no threat of entry: x1 = Low. In Case 2, the cost of entry is intermediate: 6 < K < 8. Here the incumbent firm deters entry when x1 = High and induces entry when x1 = Low. It then earns 8 from x1 = Low and 12 from x1 = High. Hence it optimally chooses to set x1 = High and deters entry. Note that it invests more in capacity than if there were no threat of entry. Finally, Case 3 occurs when entry is relatively inexpensive: K < 6. Entry occurs regardless of the incumbent's capacity. The incumbent then earns 8 from x1 = Low and 7 from x1 = High, so that it invests a low amount in capacity. Thus the incumbent firm chooses the same capacity as a monopolist facing no threat of entry would. (In a more general model, the incumbent firm would strategically raise its capacity in anticipation of entry so as to reduce the new firm's output.) Several important lessons can be learned from the preceding analysis. Since capacity is durable and lasts into the postentry period, an incumbent firm's capacity investment affects its future cost function and thereby affects the postentry equilibrium. Because of its durability, capital is a natural instrument for strategically deterring entry and, more generally, for improving one's position in the market. A second lesson is that even if there is a strategy that effectively deters entry, an incumbent firm may not choose to use it. In the preceding game, when 6 < K < 8, entry was strategically deterred by the incumbent investing a lot in capacity. However, we could make high capacity relatively more expensive, so that the incumbent firm's profit when it chooses x1 = High is less than its profit from low capacity and allowing entry.

Strategic capacity expansion in the casino industry

If entry into a market is a "done deal," then incumbent firms should be less motivated to invest in capacity. The anticipated arrival of additional output from a new firm is going to mean less demand for incumbent firms, in which case the incentive to expand capacity is reduced. However, if incumbent firms believe entry is likely, but not inevitable, they may want to expand capacity in order to deter such entry, as explained above. The use of capacity investment to deter entry has been examined for the casino industry.35 The data set included all planned openings of new casinos in the United States from 2003 to 2012. Of the one hundred proposed openings prior to 2010, forty-six of them eventually occurred. Hence, the submission of a plan to enter did not mean entry was sure to occur and, if entry is not certain, then perhaps incumbent firms might believe they can cause a potential entrant to change its plans. Capacity was measured by floor space of the casino. Consistent with trying to deter entry (and contrary to accommodating entry), incumbent firms expanded floor space by 4 to 7 percent in response to an announced plan of a new firm to construct a casino. Furthermore, evidence suggests that this entry-deterring strategy worked. In response to more floor space, there was a 14 percent smaller chance that the potential entrant would go through with entry.
As further evidence that the intent was to deter entry, incumbent firms did not expand capacity once the construction of a new casino had commenced. Only during the planning stage did higher capacity investment occur.

Raising Rivals' Costs

We have thus far discussed several ways in which an incumbent firm can improve its future position in the market by giving itself a cost advantage over its competitors. This could involve, for example, exploiting a learning curve or investing in cost reduction. In this section, we consider a strategy in which an incumbent firm gives itself a cost advantage by raising its rivals' costs rather than lowering its own cost. We will consider one strategy, of which there are several, for raising the cost of a rival.36

Suppose that firm 1 has a relatively capital-intensive production process, while its competitor, firm 2, has a relatively labor-intensive production process. Further suppose that the industry is unionized and the union bargaining process works in the following manner. The union first bargains with firm 1 concerning its wage. On agreement with firm 1, the union demands that firm 2 pay the same wage. The (credible) threat of a strike by the union should be able to induce firm 2 to accept the wage that resulted from negotiations between the union and firm 1. Bargaining between the United Auto Workers and the domestic auto manufacturers has taken place in this manner.

From a strategic perspective, it may actually be in firm 1's best interests to agree to a relatively high wage with the union. Though a higher wage raises its cost, it raises firm 2's cost even more, as firm 2 uses more labor in producing its product. Agreeing to a high wage then gives firm 1 a cost advantage over firm 2, even though firm 1's cost has increased! The rise in firm 2's marginal cost will cause it to reduce its output, which will raise firm 1's demand and revenue. As long as this revenue increase is bigger than the increase in firm 1's cost, firm 1 will have raised its profits by agreeing to a higher wage.

It has been argued that the raising rivals' costs tactic was used in the market for mail delivery in Germany. The incumbent firm, Deutsche Post, agreed to high wages which, because of a collective bargaining agreement, would raise labor costs for all firms. Some evidence suggests that these higher costs induced some new entrants to exit the market.37
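A small numerical check of the union-wage logic, under assumed Cournot competition with linear demand. All of the numbers here (inverse demand P = 20 − q1 − q2, labor requirements of 1 and 3 units per unit of output, wages of 1 and 2) are illustrative only and do not come from the text.

# Raising rivals' costs: an industrywide wage increase raises both firms' costs,
# but it raises the labor-intensive firm's cost by more, so the capital-intensive
# firm can come out ahead. Illustrative linear Cournot example.
a = 20.0                  # assumed inverse demand intercept: P = a - (q1 + q2)
l1, l2 = 1.0, 3.0         # assumed labor per unit of output (firm 1 is capital-intensive)

def cournot_profits(w):
    c1, c2 = w * l1, w * l2                  # constant marginal costs at wage w
    q1 = (a - 2 * c1 + c2) / 3.0             # standard Cournot equilibrium quantities
    q2 = (a - 2 * c2 + c1) / 3.0
    P = a - q1 - q2
    return (P - c1) * q1, (P - c2) * q2

for w in (1.0, 2.0):
    pi1, pi2 = cournot_profits(w)
    print(f"wage = {w:.0f}: firm 1 profit = {pi1:.1f}, firm 2 profit = {pi2:.1f}")
# wage = 1: firm 1 earns 49.0 and firm 2 earns 25.0
# wage = 2: firm 1 earns 53.8 and firm 2 earns 11.1, so firm 1 gains from the higher wage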
Preemption and Brand Proliferation

Consider a market with two possible products: X and Y. Assume that these products are imperfect substitutes and that all firms have access to the technology to produce either one. Initially, the industry is characterized as follows. A lone incumbent firm produces only X. Suppose that demand for Y is sufficiently weak, so that it is unprofitable for the incumbent firm to offer Y, nor is it profitable for a new firm to enter and offer Y. Furthermore, assume that a potential entrant would find it unprofitable to enter and produce X, as to do so would put it in direct competition with the incumbent firm.

Now suppose that there is an unanticipated persistent increase in the demand for Y. Under this new demand structure, let Π(A, B) denote the profit to a firm if its product line is A and its competitor's product line is B. Four possible product lines can result: product X, product Y, products X and Y (denoted XY), and no products (denoted N). For example, Π(XY, Y) is the profit of a firm offering both X and Y when its competitor offers just Y. To produce product Y, a firm has to incur a fixed cost of FY. Assume that if there were no threat of entry, the incumbent firm would choose not to offer Y:

Π(X, N) > Π(XY, N) − FY.   (5.2)
The left-hand side is the profit from a monopolist offering only X, and the right-hand side is the profit from it expanding to offer Y as well. Inequality 5.2 will hold if the increase in demand for Y is sufficiently small relative to the fixed cost FY. We further assume that entry by a new firm with product Y is profitable if the incumbent firm offers only X and is unprofitable if it offers both products:
Π(Y, X) − FY > 0 > Π(Y, XY) − FY.   (5.3)
Here, Π(Y, X) − FY is the profit to a new firm from offering Y, and Π(Y, XY) − FY is the profit to a new firm from offering Y if the incumbent firm offers both X and Y. The condition 0 > Π(Y, XY) − FY is natural if competition is sufficiently intense when two firms offer the same product. Note that inequalities 5.2 and 5.3 are consistent. Because products X and Y are substitutes, if product Y is offered, the demand for product X will fall. Although introduction of Y generates positive profit, it reduces profit from the sale of X. Of course, a new firm that offers only Y does not care about the reduced profits on X. In contrast, when the incumbent firm considers offering Y, it cares about total profits. Thus it values the introduction of Y by an amount equal to the profits earned from Y less the reduction in profits earned from X.

When considering how the incumbent firm will behave in response to the rise in demand for Y, let us assume that it could put Y on the market before a new firm could. This assumption is actually not important for the story we are telling, though it is a natural one. For example, the incumbent firm is apt to learn about the rise in demand for Y before anyone else. We know that if the incumbent firm does not offer Y, then entry will occur. In that case, the incumbent firm's profit is Π(X, Y). If instead it were to introduce Y, entry would be unprofitable [as Π(Y, XY) − FY < 0], and thus entry would not occur. The incumbent firm's profit from offering Y is then Π(XY, N) − FY. Thus the incumbent firm will find it optimal to preempt entry by introducing Y if and only if
Π(XY, N) − FY ≥ Π(X, Y).   (5.4)
Rearranging expression 5.4 yields
Π(XY, N) − Π(X, Y) ≥ FY.   (5.5)
Recall from inequality 5.3 that it was assumed that Π(Y, X) > FY. Thus, if the left-hand side of inequality 5.5 is at least as great as Π(Y, X) then, because Π(Y, X) > FY, it must exceed FY. It follows that inequality 5.5 holds if the following is true:
Π(XY, N) − Π(X, Y) ≥ Π(Y, X).   (5.6)
Rearranging expression 5.6 gives
Π(XY, N) ≥ Π(X, Y) + Π(Y, X).   (5.7)
Thus, if inequality 5.7 holds, then the incumbent firm will introduce product Y and thereby deter a new firm from doing so. The left-hand side of inequality 5.7 is profit earned by a single firm offering both products X and Y. The right-hand side is industry profit from one firm offering product X and a second firm offering product Y. Because there are two competing firms in the market, generally competition will drive prices down below the level that a two-product monopolist would set (unless the two firms were to perfectly collude). Assuming that no cost disadvantage occurs from offering both products, inequality 5.7 must then be true. One two-product firm earns more than two single-product firms competing against each other.

The point here is simple and quite intuitive. If a new firm enters and offers Y, then two firms will be competing in the market—one with product X and one with product Y. This competition results in prices being driven below that which would be set by a single firm selling both products. In contrast, the incumbent firm can coordinate the pricing of products X and Y if it offers Y. Hence the value of introducing Y to the incumbent firm is greater than that to a new firm, so the incumbent firm will introduce product Y before a new firm can.
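To make the comparison concrete, consider some purely illustrative numbers (none of which appear in the text): Π(X, N) = 10, Π(XY, N) = 12, Π(X, Y) = 4, Π(Y, X) = 5, Π(Y, XY) = 2, and FY = 3. Inequality 5.2 holds (10 > 12 − 3 = 9), so the incumbent firm would not add Y absent a threat of entry; inequality 5.3 holds (5 − 3 > 0 > 2 − 3), so a new firm would enter with Y only if the incumbent firm continued to offer X alone; and inequality 5.7 holds (12 ≥ 4 + 5), so the incumbent firm preempts, earning 12 − 3 = 9 by introducing Y and deterring entry rather than 4 by ceding Y to an entrant.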

Although we gave the incumbent firm first shot at offering product Y, this is not important to the story. Suppose that, instead of assuming that demand for Y suddenly increases, we assume it rises slowly over time in an anticipated manner. At some point in time, call it t*, entry by a new firm with product Y yields a normal return (that is, the sum of discounted profits equals zero). Entry before t* results in a below-normal return, while entry after t* results in an above-normal return (assuming no one else has introduced Y). By the reasoning given previously, the incumbent firm earns positive profit from offering Y at t* (more than a new firm would earn), because it can coordinate the pricing of X and Y. Hence the incumbent firm must earn positive profit by introducing product Y just before t*. In contrast, a new firm would not offer it at any time before t*, as it would be unprofitable. We then find that the incumbent firm will preempt entry by introducing product Y before a new firm would find it profitable to do so.

The strategy outlined here is referred to as brand proliferation.38 Incumbent firms offer new brands in order to fill niches in the market that would have provided room for profitable entry. Note that, in the absence of the threat of entry, product Y would not have been introduced (see inequality 5.2). Hence, its introduction is solely for the purpose of deterring entry.

Brand proliferation in ready-to-eat breakfast cereals

The leading manufacturers of ready-to-eat breakfast cereals were accused by the FTC of using an entry-deterring strategy of brand proliferation. Though the case was eventually dropped, it is useful to describe the argument made against General Foods, General Mills, and Kellogg.39 The three firms had a combined market share of 81 percent of the ready-to-eat cereals market, with Kellogg holding 45 percent. The FTC claimed that by introducing some 150 brands between 1950 and 1970, the three companies left "no room" for new entrants. The basis of this claim is that brands are located in a "product characteristics space," and each competes only with brands located nearby that have similar product characteristics. Fixed product entry costs (associated with production and marketing, for example) make it necessary to achieve a sufficiently large sales volume to be economically viable. So, if the "room" between brands is small (because of brand proliferation), new entrants will be deterred, because they cannot expect to attain large enough sales. Contrary to this argument, the FTC judge saw brand proliferation as "nothing more than the introduction of new brands, which is a legitimate means of competition.… There is no evidence of a conspiracy or intent to deter entry by means of new product introductions." After the judge dismissed the complaint, the FTC commissioners decided to let the dismissal stand rather than appeal the decision to a higher court.

Summary

In chapters 4 and 5 we have analyzed the feedback relationship between market structure and firm conduct. Initially, market structure was taken as exogenous, and we examined how concentration and entry conditions influence the way firms behave. Our analysis then turned to exploring how firm behavior can affect structure. In this process, referred to as dynamic competition, we found there to be many ways in which firms can impact their future market share and the number of firms in the market.
Examples include creating a cost advantage over one’s existing competitors by raising their costs and preempting future entry through one’s product decisions. Although for many years industrial organization economists treated entry conditions as being exogenous to firms, in fact entry conditions are partially determined by the behavior of established

firms through, for example, the strategic setting of price and investment. Furthermore, we presented evidence that these forces have been operative in a variety of markets, including airlines and casinos.

Questions and Problems

1. The HHI is a better measure of industrial concentration than the concentration ratio. Comment.

2. Suppose that an industry has ten firms with market shares of the following percentages: 25, 15, 12, 10, 10, 8, 7, 5, 5, and 3.
a. Derive the four-firm concentration ratio.
b. Derive the HHI.
c. Derive the effect of a merger between the fifth and sixth largest firms on the HHI.
d. Suppose the government decides that it wants this industry to have a four-firm concentration ratio of 40 percent or less. How might this goal be achieved?

3. In 1972 a U.S. senator proposed the Industrial Reorganization Act. Among other things, this act required that firms in an industry be broken up if (1) their after-tax return on stockholders' equity exceeded 15 percent for five consecutive years, or (2) the four-firm concentration ratio exceeded 50 percent. Discuss whether or not you think this piece of legislation should have been enacted. (By the way, it was not.)

4. How would you define a barrier to entry?

5. Sunk cost has been said to be a barrier to entry. Explain how sunk cost can make entry more difficult. State which definition of an entry barrier you are using.

6. Ace Washtub Company is currently the sole producer of washtubs. Its cost function is C(q) = 49 + 2q, and the market demand function is D(P) = 100 − P. There is a large pool of potential entrants, each of which has the same cost function as Ace. Assume the Bain-Sylos postulate. Let the incumbent firm's output be denoted qI.
a. Derive the residual demand function for a new firm.
b. Given that the incumbent firm is currently producing qI, if a potential entrant were to enter, how much would it produce?
c. Find the limit price. Hint: Find the output for Ace such that the slope of a new firm's average cost curve equals the slope of a new firm's residual demand curve.
d. Instead of assuming the Bain-Sylos postulate, assume that active firms expect to achieve a Cournot solution. Does entry depend on qI? Explain.
e. Making the same assumption as in part d, will there be entry?

7. Consider the Dixit capacity investment model when the inverse market demand curve is P(Q) = 100 − Q, w = 10, r = 30, and K = 156.25.
a. Derive a new firm's postentry best reply function.
b. Given initial capacity of x, derive the incumbent firm's postentry best reply function.
c. Given x, derive the postentry equilibrium profit of a new firm.
d. Derive the minimum capacity that makes entry unprofitable.
e. Derive the optimal capacity choice of the incumbent firm.

8. From 1979 to 1987, Kellogg introduced the following brands of cereal: Graham Crackos; Most; Honey & Nut Corn Flakes; Raisins, Rice & Rye; Banana Frosted Flakes; Apple Frosted Mini Wheats; Nutri-Grain; Fruity Marshmallow Krispies; Strawberry Krispies; Crispix; Cracklin' Oat Bran; C-3PO; Apple Raisin Crisp; Fruitful Bran; OJ's; Raisin Squares; Just Right/Nugget & Flake; All Bran with Extra Fiber; Apple Cinnamon Squares; All Bran Fruit & Almonds; Strawberry Squares; Pro Grain; Mueslix; Nutri-Grain Nuggets; and Nutrific.
a. Should the FTC be concerned?
b. Five other companies introduced a total of fifty-one new brands over that same period. Does this fact change your answer in part a?

9. The demand for product X is Px = 10 − 2X − Y, where Y is the quantity of a substitute product that currently is not being produced. The marginal cost of X is a constant equal to $1. Entry is completely barred and a monopolist, "Incumbent," produces X.
a. Find Incumbent's price, quantity, and profit.
b. Incumbent wishes to investigate the possibility of introducing Y, which is also protected from entry by other firms. The demand for Y is Py = 10 − 2Y − X and it also has a constant marginal cost of $1. However, there is a fixed cost of introducing Y of $4. Find the values of X, Y, Px, Py, and profit for Incumbent. Will Incumbent introduce Y?
c. Would it be in society's interests to have Incumbent introduce Y?
d. Now assume that entry is no longer barred. For simplicity, assume that if a new firm, "Entrant," decides to introduce Y, then Entrant and Incumbent will collude perfectly and settle at the joint profit maximum. Of course, given the demands and costs as assumed previously, the prices and quantities found in part b will apply. Will Entrant have an incentive to introduce Y?
e. Still assuming that entry is not barred, will Incumbent have an incentive to preempt the entry by Entrant and offer Y first? (It is assumed that if Incumbent offers Y, then a second seller of Y will have negative profits.)
f. If the fixed cost of introducing Y is now taken to be $6, answer parts d and e. Is it in society's interests to have Y introduced?
g. Society's calculation of benefits for comparison with the $6 introduction cost consists of three parts: the increase in consumer surplus due to Y, the increase in producer surplus from Y, and the loss in producer surplus from X. On the other hand, Entrant compares only one of these parts with the $6. Is it ever possible for these two "decision rules" to give the same answer? That is, is it possible for social and private decision rules to coincide such that private decisions are socially "correct"? Explain.

10. Assume the same demand and cost curves as in problem 9, except now take the fixed introduction cost of Y to depend on bidding by Incumbent and Entrant. That is, assume that a third party owns a patent on Y and has requested bids for the right to produce Y.
a. Referring to your answers in parts d and e in problem 9, what would be the maximum amounts each would be willing to pay for the patent?
b. Assume now that a Cournot solution holds if Incumbent sells X and Entrant sells Y. Find the equilibrium prices, quantities, and profits.
c. Under the Cournot solution scenario, find the maximum amounts that Incumbent and Entrant would pay for the patent so as to become the sole producer of Y.
d. Explain the intuition underlying your answer to part c and why it differs in a qualitative way from the answer to part a.

11. In 1994, the North American Free Trade Agreement (NAFTA) reduced tariff rates between Canada, Mexico, and the United States. For a firm to receive the lower tariff rates, it had to comply with complicated and costly NAFTA filing regulations. Compliance would have imposed more of a burden for a small domestic firm than a large domestic firm, and a firm might decide not to comply and just pay the higher tariff. Can you explain why large firms might have been in favor of NAFTA but not small firms? Hint: Use the argument of raising rivals' costs.40

Notes

1. U.S. Bureau of the Census, Concentration Ratios in Manufacturing Industry, 1963, Part I (Washington, DC: U.S. Government Printing Office, 1966), p. viii. 2. This information can be accessed at www.census.gov/eos/www/naics/index.html. 3. These guidelines are discussed in detail in chapter 6. 4. Keith Cowling and Michael Waterson, "Price-Cost Margins and Market Structure," Economica 43 (1976): 267–274. 5. For example, Ian Domowitz, R. Glenn Hubbard, and Bruce C. Petersen, "Business Cycles and the Relationship between

Concentration and Price-Cost Margins,” Rand Journal of Economics 17 (Spring 1986): 1–17. 6. These two hypotheses are discussed in Harold Demsetz, “Two Systems of Beliefs about Monopoly,” in Harvey Goldschmid, H. Michael Mann, and J. Fred Weston, eds., Industrial Concentration: The New Learning (Boston: Little Brown, 1974), pp. 164–184. The market power hypothesis was referred to as the collusion hypothesis, though it does not require firms to collude. The differential efficiency hypothesis is also known as the superior efficiency hypothesis. 7. See www.restaurantowner.com/public/How-Much-Does-it-Cost-to-Open-a-Restaurant.cfm (accessed August 9, 2016). 8. This theory ignores the difficult coordination problem faced by firms simultaneously making entry decisions. If all potential entrants are identical, how do they decide which n should enter? In fact, another free-entry equilibrium exists for this model, in which each potential entrant enters with some probability. In that case, there could be too little entry or too much entry. 9. N. Gregory Mankiw and Michael D. Whinston, “Free Entry and Social Inefficiency,” RAND Journal of Economics 17 (Spring 1986): 48–58. 10. Steven Berry and Joel Waldfogel, “Free Entry and Social Inefficiency in Radio Broadcasting,” RAND Journal of Economics 30 (Autumn 1999): 397–420. 11. Emek Basker, “The Causes and Consequences of Wal-Mart’s Growth,” Journal of Economic Perspectives 21 (Summer 2007): 177–198. 12. This second finding is also supported by Joe S. Bain, Barriers to New Competition (Cambridge, MA: Harvard University Press, 1956); and C. F. Pratten, Economies of Scale in Manufacturing Industry (London: Cambridge University Press, 1971). 13. F. M. Scherer, Industrial Market Structure and Economic Performance (Chicago: Rand-McNally, 1970), p. 233. 14. The unregulated entry of Uber and Lyft into this market shows how innovation can get around government-imposed entry barriers. 15. As with the taxicab industry, innovation can break down a patent entry barrier if it results in a product that is superior to the patented product. 16. Joe S. Bain, Industrial Organization, 2nd ed. (New York: John Wiley & Sons, 1968), p. 252. 17. Although one can remove government restrictions, how can one remove scale economies? Scale economies stem from existing technological know-how and, unless one lives in the world described by George Orwell in Nineteen Eighty-Four, knowledge cannot be removed. However, one can talk about breaking up firms, so that each firm is less able to take advantage of scale economies, though that seems like an unattractive proposition. 18. George J. Stigler, The Organization of Industry (Homewood, IL: Richard D. Irwin, 1968), p. 67. 19. For a discussion, see Harold Demsetz, “Barriers to Entry,” American Economic Review 72 (March 1982): 47–57. Demsetz criticizes the definitions of Bain and Stigler for focusing solely on the differential opportunities of existing firms and potential entrants. He argues that this approach ignores legal barriers to entry, such as the requirement that one must have a license to operate but where licenses are traded freely. In that case, the incumbent firm and the potential entrant have the same opportunity costs. It is a matter of interpretation whether Bain ignores such barriers. 20. Christian C. von Weizsäcker, Barriers to Entry (Berlin: Springer-Verlag, 1980), p. 13. 21. Richard A. Posner, “The Chicago School of Antitrust Analysis,” University of Pennsylvania Law Review 127 (April 1979): 929. 22. 
This conversation is not based on quoting any stated opinions of Bain and Stigler but on our interpretation of their writings. 23. William J. Baumol, John C. Panzar, and Robert D. Willig, Contestable Markets and the Theory of Industry Structure (San Diego: Harcourt Brace Jovanovich, 1982). 24. This last condition is probably the most problematic, as it is easy to imagine that the entry lag exceeds the price adjustment. When that condition does not hold, results can significantly change; see Marius Schwartz and Robert J. Reynolds, “Contestable Markets: An Uprising in the Theory of Industry Structure: Comment,” American Economic Review 73 (June 1983): 488–490. 25. The socially efficient solution referred to here is defined as the social welfare optimum, subject to the constraint that firms earn at least normal profits. This solution entails a single firm pricing at average cost and meeting all demand. This

analysis, however, does ignore the use of nonlinear pricing schemes like two-part tariffs. For details, see chapter 12. 26. Review articles concerning dynamic competition include Drew Fudenberg and Jean Tirole, Dynamic Models of Oligopoly (London: Harwood, 1986); Joseph E. Harrington Jr., "Strategic Behaviour and Market Structure," in John Eatwell, Murray Milgate, and Peter Newman, eds., The New Palgrave: Dictionary of Economics (London: Macmillan, 1987); Richard Gilbert, "Mobility Barriers and the Value of Incumbency," and Janusz A. Ordover and Garth Saloner, "Predation, Monopolization and Antitrust," both in Richard Schmalensee and Robert D. Willig, eds., Handbook of Industrial Organization (Amsterdam: North-Holland, 1989), pp. 475–535 and 537–596. 27. This model was developed independently by Bain, Barriers to New Competition; and Paolo Sylos-Labini, Oligopoly and Technological Progress (Cambridge, MA: Harvard University Press, 1962). 28. This point was originally made in James W. Friedman, "On Entry Preventing Behavior," in Steven J. Brams et al., eds., Applied Game Theory (Vienna: Physica-Verlag, 1979). 29. For an analysis of entry deterrence when there are adjustment costs, see M. Therese Flaherty, "Dynamic Limit Pricing, Barriers to Entry and Rational Firms," Journal of Economic Theory 23 (October 1980): 160–182. 30. For an analysis of strategic competition when a learning curve exists, see Drew Fudenberg and Jean Tirole, "Learning-by-Doing and Market Performance," Bell Journal of Economics 14 (Autumn 1983): 522–530. 31. See Richard Schmalensee, "Product Differentiation Advantages of Pioneering Brands," American Economic Review 72 (June 1982): 349–365; and Paul Klemperer, "Entry Deterrence in Markets with Consumer Switching Costs," Economic Journal 97 (March 1987): 99–117. 32. See Paul Milgrom and D. John Roberts, "Limit Pricing and Entry under Incomplete Information," Econometrica 50 (March 1982): 443–459; and Joseph E. Harrington Jr., "Limit Pricing When the Potential Entrant Is Uncertain of Its Cost Function," Econometrica 54 (March 1986): 429–437. 33. The ensuing analysis is based on Austan Goolsbee and Chad Syverson, "How Do Incumbents Respond to the Threat of Entry? Evidence from the Major Airlines," Quarterly Journal of Economics 123 (2008): 1611–1633, and Christopher Gedge, James W. Roberts, and Andrew Sweeting, "A Model of Dynamic Limit Pricing with an Application to the Airline Industry," NBER Working Paper 20293, National Bureau of Economic Research, Cambridge, MA, July 2014. 34. Avinash Dixit, "The Role of Investment in Entry Deterrence," Economic Journal 90 (1980): 95–106. 35. J. Anthony Cookson, "Anticipated Entry and Entry Deterrence: Evidence from the American Casino Industry," working paper, University of Colorado at Boulder, December 2015. 36. See Steven C. Salop and David T. Scheffman, "Raising Rivals' Costs," American Economic Review 73 (May 1983): 267–271. 37. Sven Heitzler and Christian Wey, "Raising Rivals' Fixed (Labor) Costs: The Deutsche Post Case," DIW Berlin Discussion Paper 1008, German Institute for Economic Research, Berlin, May 2010. 38. This analysis is based on the work of Richard Schmalensee, "Entry Deterrence in the Ready-to-Eat Breakfast Cereals Industry," Bell Journal of Economics 9 (Spring 1978): 378–393. Also see B. Curtis Eaton and Richard G. Lipsey, "The Theory of Market Preemption: The Persistence of Excess Capacity and Monopoly in Growing Spatial Markets," Economica 46 (1979): 149–158. 39. In re Kellogg Company, Docket No.
8883, F.T.C., 1981. 40. This exercise is based on Craig A. Depken, II and Jon M. Ford, “NAFTA as a Means of Raising Rivals’ Costs,” Review of Industrial Organization 15 (September 1999): 103–113.

6 Horizontal Mergers

In chapter 4 we examined price-fixing agreements among firms and antitrust law pertaining to such conspiracies. In this chapter, we continue our study of how cooperation among rivals can harm competition, now through the process of merger. A horizontal merger is when some firms in the same market (that is, competitors) combine to form one company. An example is the 2008 merger of two leading brewing companies, Anheuser-Busch (with brands such as Budweiser) and InBev (with brands such as Stella Artois). Of course, not all horizontal mergers harm competition, though the potential to harm is there when the number of competitors is reduced. Mergers, unlike price-fixing cartels, involve integration of the firms' facilities, which raises the possibility of socially beneficial efficiencies from combining operations. This difference explains why price fixing is a per se offense, whereas mergers are considered under the rule of reason.

A vertical merger is when two firms with potential or actual buyer-seller relationships combine to form one company. An example is the acquisition of NBC Universal by cable company Comcast, which, prior to the merger, was a purchaser of content from NBC Universal. A merger that is neither horizontal nor vertical is classified as a conglomerate merger, which is further subdivided into three classes by the Federal Trade Commission (FTC). A product extension merger is the combination of firms that sell noncompeting products but use related marketing channels or production processes. The Pepsico acquisition of Pizza Hut is an example. A market extension merger is the joining of two firms selling the same product but in separate geographic markets. An example is Vail Resorts, which, as an operator of ski resorts in Colorado, engaged in a series of market extension mergers by acquiring resorts in Michigan, Minnesota, Wisconsin, British Columbia, and Australia. Finally, there is the "pure" category of conglomerate mergers between firms with no obvious relationships of any kind. The merger between R. J. Reynolds (tobacco) and Burmah Oil and Gas is an example.

While the mechanism that permits horizontal mergers to potentially harm competition is clear, this is not so for conglomerate and vertical mergers. Perhaps the most obvious way for conglomerate mergers to harm competition is through the removal of potential competitors. This claim was made by the government in the Procter & Gamble–Clorox merger. Procter & Gamble (a leading detergent manufacturer) was alleged to have been eliminated as a potential entrant into the bleach market when it acquired Clorox. In this respect, horizontal and conglomerate mergers can be similar in their potential threats to competition, that is, the elimination of rivals (actual or potential). The threats to competition from vertical mergers are less obvious and can be viewed as unilateral actions that potentially inflict harm on rivals. For example, note that the merger of an iron-ore supplier and a steel manufacturer does not change the number of competitors in either market. One popular complaint by judges has been that such mergers "foreclose" markets to rivals. Simply put, rival iron ore suppliers are harmed by

no longer having the acquired steel manufacturer as a possible buyer, and this loss is thought to harm competition. Because vertical mergers are perhaps best viewed as an exclusionary activity, we postpone discussion of vertical mergers until chapter 7, where we consider other vertical relationships.

The next section briefly describes historical trends in merger activity and the development of relevant antitrust law and policy in the United States. The remaining sections of this chapter examine the reasons for and consequences of mergers and how mergers are evaluated by competition authorities.

Antitrust Laws and Merger Trends

There has been an interesting interdependence between antitrust law and the trend of mergers in the United States. To begin, the United States has experienced six major merger "waves," which are briefly described in figure 6.1. The first wave, occurring roughly from the mid-1890s to around 1904, has been described as "merger for monopoly."

Figure 6.1 Characteristics of the Six U.S. Merger Waves. Source: Virginia Bodolica and Martin Spraggon, "Merger and Acquisition Transactions and Executive Compensation: A Review of the Empirical Evidence," Academy of Management Annals 3 (2009): 109–181.

The conversion of approximately 71 important oligopolistic or near-competitive industries into near-monopolies by merger between 1890 and 1904 left an imprint on the structure of the American economy that fifty years have not erased.1

Perhaps the most famous of these mergers occurred in the steel industry. During the 1890s, more than 200 iron and steel makers were merged into twenty larger firms. In 1901, J. P. Morgan then engineered a merger among twelve of these larger firms to form United States Steel Corporation, which then had about 65 percent of the market. The result was a sharp rise in prices and a handsome $62.5 million share of these monopoly rewards for Mr. Morgan. Other well-known firms created in this period through mergers include General Electric, American Can, DuPont, Eastman Kodak, Pittsburgh Plate Glass, American Tobacco, and International Paper. As reported in figure 6.2, the acquisition volume as a fraction of the economy (as measured by gross domestic product) was substantial.

Figure 6.2 Value of Assets as a Percentage of Gross Domestic Product, 1895–1920 and 1968–2001 Source: The data for 1895–1920 are from Ralph L. Nelson, Merger Movement in American Industry, 1895–1956 (Princeton, NJ: Princeton University Press, 1959). The data for 1968–2001 have been provided by Steven Kaplan, for which we are grateful. Also see Bengt Holmstrom and Steven N. Kaplan, “Corporate Governance and Merger Activity in the United States: Making Sense of the 1980s and 1990s,” Journal of Economic Perspectives 15 (Spring 2001): 121–144.

The Sherman Antitrust Act of 1890 was intended to avoid the inappropriate creation of market power. It has been argued, however, that it could well have been the reason for the first merger wave.2 Recall from chapter 4 that several Supreme Court decisions—Trans-Missouri Freight (1897), Joint Traffic (1898), and Addyston Pipe (1899)—established that price fixing was unlawful. Now that firms could no longer coordinate their prices by colluding, they turned to mergers in order to create a single corporate entity that would control prices. This change is a stark reminder that firms will adapt to the antitrust and regulatory environment, and effective policy requires anticipating such adaptations.3 While the Sherman Act did not contain specific antimerger provisions, Section 1 concerns combinations in restraint of trade, and Section 2 deals with monopolization. The government relied on both of these sections in its Northern Securities decision in 1904, which thwarted the attempt to combine the Northern Pacific and Great Northern railroads. Also in 1911, two famous monopolies—Standard Oil and American Tobacco—were found guilty and subsequently broken up. Because the Sherman Act applied to mergers only when the merging firms were on the verge of attaining substantial monopoly power, the Clayton Act was passed (in part) in 1914 to remedy this limitation. Section 7 reads: That no corporation engaged in commerce shall acquire, directly or indirectly, the whole or any part of the stock or other share capital of another corporation engaged also in commerce where the effect of such acquisition may be to substantially lessen competition between [the two firms] or to restrain such commerce in any section or community or tend to create a monopoly of any line of commerce.

Unfortunately, the reference to stock acquisitions left a large loophole. By purchasing a competitor’s assets, mergers could escape the reach of the Clayton Act.

The second merger wave took place during 1916–1929. Though mergers to monopoly were now discouraged by the antitrust laws, “mergers to oligopoly” became fashionable. For example, Bethlehem Steel was formed to become the second largest steel manufacturer by combining several smaller companies. The Great Depression ended this second wave. Next came the “conglomerate merger” wave, which began after World War II and peaked in the 1960s. It, too, differed from its predecessors. The reason for this difference was the passage of the Celler-Kefauver Act of 1950 and the strict judicial interpretations of that legislation. As we describe later, horizontal mergers involving firms with relatively small market shares were found to be illegal. The Celler-Kefauver Act was passed in response to a rising concern by Congress about the beginnings of the third merger wave. An influential FTC report suggested that unless something was done, “the giant corporations will ultimately take over the country.”4 While the report was criticized by many economists, it and a government defeat in a steel merger case led to the Celler-Kefauver Act. This act amended Section 7 of the Clayton Act to read: That no corporation engaged in commerce shall acquire, directly or indirectly, the whole or any part of the stock or other share capital and no corporation subject to the jurisdiction of the Federal Trade Commission shall acquire the whole or any part of the assets of another corporation engaged also in commerce, where in any line of commerce in any section of the country, the effect of such acquisition may be substantially to lessen competition, or to tend to create a monopoly.

The principal change was to plug the asset acquisition loophole of the original law.

The 1980s witnessed a fourth merger wave, during which the annual value of acquisitions rose from around $50 billion in 1983 to over $200 billion by 1988. One of the largest acquisitions was the 1989 leveraged buyout (LBO) of RJR Nabisco by Kohlberg, Kravis, Roberts & Co. for $25 billion. Popular in the 1980s, an LBO would have an investor group, headed by either the company's top managers or buyout specialists (like Kohlberg, Kravis, Roberts & Co.), put up, say, 10 percent of the bid price in cash. They would then borrow against the company's assets and raise, say, 60 percent in secured bank loans and 30 percent from "junk bonds." (Junk bonds are bonds that must provide a high yield because of their high risk.) The investor group then buys all the outstanding stock of the company, taking the company private. After selling off parts of the company to reduce its debt and cutting costs to increase profitability, the investor group hopes to reap large profits by taking the streamlined company public again.

The fifth merger wave occurred in the 1990s. In terms of the volume of acquisitions as a fraction of the economy, figure 6.2 shows that this wave exceeded even the first merger wave. One of the most spectacular of these mergers was the combination of Time Warner, itself the product of a past megamerger, with America Online. At the time of its approval by the FTC, this merger of giants had a market capitalization in excess of $100 billion. This merger wave is different from the LBOs of the 1980s in that many mergers took place in industries in the midst of deregulation, including electric power, telecommunications, and banking and financial services. Because of structural changes and enhanced competition, some mergers served to gain entry to new markets. In pharmaceuticals, mergers between direct competitors may have achieved economies of scale and scope. Downsizing and consolidation are other factors that were particularly relevant in the defense and health care industries. Some of these mergers led to a noticeable increase in market concentration, such as the creation of MCI WorldCom, which combined the second and third largest providers of long-distance telephone service.

The sixth and most recent merger wave began in the early 2000s and came to an abrupt halt with the world financial crisis of 2008. While the preceding five merger waves had their origin in the United States, the sixth merger wave was international and occurred during a time of increased economic integration and

globalization and historically low interest rates. Some horizontal mergers during and after this merger wave involved some of the largest firms in already heavily concentrated markets, such as airlines, banking, beer, hospitals, petroleum, and tobacco.5

The Effects of Horizontal Mergers

While a merger may occur for a variety of reasons (such as empire building by a CEO), it is generally believed that mergers take place because they are profitable. That is, the profit of the merged firm is expected to exceed the combined profits of the two firms if they had not merged. Higher profit can come from two general sources: market power and efficiencies. We examine both sources of profitability and consider some estimates of their effect on prices. After exploring why firms merge, we then tackle the normative question: Should firms be allowed to merge? Addressing that question requires understanding the forces behind a proposed merger and then assessing what they imply about the expected impact on prices, products, costs, and ultimately consumer surplus and industry profit.

Why Firms Merge

In merger evaluation parlance, increased market power from a merger can manifest itself as a unilateral effect or a coordinated effect. A unilateral effect is when a merged firm, acting on its own, reduces output and raises price. It may also raise price in conjunction with rival firms by engaging in some form of collusion, which is referred to as a coordinated effect. Both unilateral and coordinated effects are designed to raise price in order to increase profit and thus are necessarily bad for consumers. A potentially win-win situation occurs when the merger enhances efficiency through, for example, lower cost, better products, and innovation. Let us examine how these market power and efficiency effects arise in the context of a merger.

Market power

For the purpose of describing the unilateral effect of a merger on price, consider the Cournot model presented in chapter 4. Recall that firms' equilibrium quantities result in a price-cost margin that is negatively related to the number of firms, n, and the (absolute value of the) demand elasticity, η, according to the following formula:
(P − MC)/P = 1/(nη).
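As a quick check of this formula (and of the claim that follows), consider a linear Cournot market with assumed numbers: inverse demand P = 100 − Q and marginal cost of 20. The sketch below computes the symmetric equilibrium for four firms and for three firms, the latter approximating a merger of two of the four.

# Symmetric Cournot with linear inverse demand P = a - Q and constant marginal cost.
# Verifies that the equilibrium margin equals 1/(n*eta) and rises when n falls.
a, mc = 100.0, 20.0                     # assumed, illustrative parameters

def cournot(n):
    Q = n * (a - mc) / (n + 1)          # equilibrium industry output
    P = a - Q
    eta = P / Q                         # |demand elasticity| at equilibrium (|dQ/dP| = 1)
    return P, (P - mc) / P, 1.0 / (n * eta)

for n in (4, 3):
    P, margin, formula = cournot(n)
    print(f"n = {n}: price = {P:.0f}, (P - MC)/P = {margin:.3f}, 1/(n*eta) = {formula:.3f}")
# n = 4: price = 36, margin = 0.444
# n = 3: price = 40, margin = 0.500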
A merger, by reducing the number of firms, then increases the price-cost margin (assuming the merger does not affect firms’ cost functions, which is considered later). To understand why fewer firms results in a higher price, let us review why, from an industry perspective, firms produce too much and how that can be the basis for a merger to be profitable through a unilateral effect on price. At the root of the matter is the presence of an externality under competition. Recall that an externality is when an agent’s action impacts the utility of another agent and the agent who took the action ignores that effect. In the current setting, the action is the output decision of a firm that affects a rival firm’s profit. When a firm considers producing more, it thinks about how that additional output will impact its own profit. Though more supply will serve to lower price—and thereby reduce the amount of profit earned on each unit

sold—it will allow the firm to sell more units, which can make that higher output profitable. Suppose Δqi > 0 is the amount of higher output for firm i and ΔP < 0 is the lower price. It earns additional profit of (P − MC)Δqi on those additional units (where P and MC are price and marginal cost, respectively) and loses profit of qiΔP on the original quantity qi that it was selling. As long as (P − MC)Δqi + qiΔP > 0, the rise in output is profitable. Where the externality comes in is that the other firms lose profit because of the lower price. The profit of firm j is lower by qjΔP, as it receives a lower price on each unit sold due to firm i’s expanded supply. Of course, firm i does not care about how its decision affects firm j’s profits. At an equilibrium, each firm is individually optimizing—producing at a rate that maximizes its profit given what other firms are supplying—but firms are not collectively optimizing because of this externality. Industry profit could be raised if all firms were to contract output a bit. Recall that this is the reason firms desire to collude. It also provides a basis for firms to merge. What a merger does is to partially internalize this externality. Prior to a merger, firm 1 would raise its output if it increased its profit even though it depressed firm 2’s profit; and similarly, firm 2 would raise its output if it increased its profit even though it depressed firm 1’s profit. However, if firms 1 and 2 merge (and let us now refer to them as divisions 1 and 2), then the manager of the merged firm will take into account the impact of raising the output of division 1 on the profit of division 2. It will be less inclined to produce at a high rate because, while that might maximize the profit of division 1, it will harm the profit of division 2, and as a result of the merger, it is interested in maximizing the total profit of both divisions. The merged firm will then produce less than the combined firms produced prior to the merger, because it has internalized this externality. Note that this externality is only partially internalized, because the merged firm still does not internalize the profit-depressing effect of its supply on the remaining rival firms. If the supply of the non-merged firms remained fixed in response to the merger then, by reducing supply, the merger is assured of being profitable; that is, the merged firm’s profit exceeds the profits of the two firms prior to the merger. However, those other firms will not produce the same amount. Now that the merged firm is producing less, the non-merged firms will optimally expand their supply, for recall that Cournot best reply functions are downward sloping (see figure 4.5). In particular, the profit-maximizing quantity is higher when rival firms produce less, because a firm’s residual demand is stronger. The higher supply of the non-merged firms will tend to reduce the profitability of the merger because it lowers the price in the market. Indeed, it is quite possible that it could make the merger unprofitable. If it was anticipated to do so, then the merger would not take place. The preceding analysis was based on the Cournot model, so that firms choose supply. If instead firms choose prices (with differentiated products), a similar story can be told, with one notable change. There is again an externality under competition in that a firm that lowers its price to increase demand and profit fails to take account of the negative profit effect on rival firms from the lower demand they have. 
A merger internalizes that effect. Now, division 1 (formerly, firm 1) will not lower its price, even if doing so would raise division 1's demand and profit, when much of that gain would come at the expense of division 2. Thus, the merged firm will set higher prices compared to the premerger prices. Furthermore, the response of non-merged firms is to raise prices in response to the merged firm's higher prices, because best reply functions are upward sloping (see figure 4.7). This response enhances the profitability of the merger, as a firm's demand and profit are higher when rival firms set higher prices. When firms compete in prices, a merger is always profitable.

We conclude this discussion of unilateral effects with some estimates of their magnitude for fourteen airline mergers that took place during the late 1980s.6 A market here is an airline route, and each merger affected hundreds of routes. The empirical strategy is to compare airfare changes on routes affected by a merger to airfare changes on unaffected routes. The unaffected routes are a control group to capture

industrywide factors, such as fluctuations in fuel prices and seasonal variations in demand that could influence airfares. For example, suppose that fares on a merger-affected route (say, Boston–Chicago) rose by 5 percent, but that fares on an unaffected route of comparable distance and density (say, Chicago–Philadelphia) also rose by 5 percent. One could conclude that the merger had no effect on fares and that the fare increase was due to some other factor, such as higher fuel costs. The authors recognized that any change in airfares of merging firms reflects the joint effect of possibly lower marginal cost, which would tend to lower fares, and increased market power, which would increase fares. To isolate the market power effect, they estimated the change in fares between the time of the announcement of the merger and its consummation. During that period, firms would have internalized the effect of price, but there would not have been any change in cost given that the firms had not yet integrated. Relative to routes not impacted by the merger, fares went up 5.54 percent and, if mergers involving a firm on the edge of bankruptcy are excluded (as they may not be representative of most firms), fares were higher by 11.32 percent. There was a clear unilateral effect from the merger. Furthermore, rival airlines responded as predicted by raising their fares as well. For firms not involved in the merger but which served the same routes as the merged firm, fares rose by 5.94 percent overall and by 12.64 percent when failing firms are excluded.

The preceding analysis presumed that the firms competed prior to and after the merger. Another possibility is that a merger causes firms to replace competition with tacit collusion or, worse yet, the formation of a price-fixing cartel. This is known as a coordinated effect of a merger. Recall from chapter 4 that successful collusion requires coordination (firms have to reach a "meeting of minds" that they are no longer competing) and compliance (each firm finds it optimal to collude, not compete). Our discussion here looks into how a merger can promote both coordination and compliance.

To examine the issue of compliance, let us return to the Cournot model. Collusion is intended to raise the price-cost margin above the competitive level. Recall that cartel stability requires that each firm find it more profitable to set the collusive price (and anticipate collusive profits in the current and future periods) than to undercut the collusive price (and reap higher profits in the current period but lower profits in the future). By reducing the number of firms, a merger influences that calculus and, in particular, reduces the incentive to cheat, which promotes compliance. With fewer firms, each firm's market share is higher at the collusive outcome, which means that less market share can be gained by cheating. While the punishment from cheating may be weaker with fewer firms (as competition is less intense), on net the reduction in the short-run gain from cheating exceeds the weakened punishment, so that a merger makes compliance easier. Coordination is generally recognized to be easier when fewer firms are involved.
With regard to tacit collusion (that is, when firms do not expressly communicate), experimental evidence has shown that coordinating on a collusive price is quite common when only two firms are involved, relatively rare for three firms, and basically nonexistent for four or more firms.7 While one must be careful when extrapolating from the laboratory to the market, the general point is that a merger makes tacit collusion more likely. A recent study provided evidence that a merger in the brewing industry resulted in coordinated effects.8 Though technically a joint venture, it was evaluated as a merger by the Antitrust Division of the U.S. Department of Justice (DOJ). Referred to as MillerCoors, the merger involved SABMiller and Molson Coors Brewing Company. As reported in table 6.1, the two companies were the second and third largest suppliers of beer in the United States and controlled 29 percent of the market after the approval of the merger in June 2008.

Table 6.1 Market Shares and HHI Based on Revenue, United States

Inflation-adjusted retail prices for beer had been steadily declining since 2001 (see figure 6.3). This trend "breaks dramatically and abruptly"9 after the formation of MillerCoors. Retail prices increased by 6 percent for brands supplied by MillerCoors and competitor ABI (which include Miller, Coors, and Budweiser), both in absolute terms and in comparison to distant substitutes (such as Heineken and Corona Extra). The evidence supports the hypothesis that the merger supplanted competition between SABMiller, Molson Coors, and ABI with tacit collusion between the newly formed MillerCoors and ABI.

Figure 6.3 Average Retail Prices of Flagship Brand Twelve-Packs Source: Nathan H. Miller and Matthew C. Weinberg, "Understanding the Price Effects of the MillerCoors Joint Venture,"

LeBow College of Business, Drexel University, Philadelphia, February 24, 2017. Note: The vertical axis is the natural log of the price in real 2010 dollars. The vertical line denotes when the merger between Coors and Miller was consummated.

Even when unilateral effects may not be a concern, there could still be the risk of coordinated effects. As an example, consider the Bertrand price game: Firms offer identical goods, have the same marginal cost, and choose prices. As long as two or more firms (with ample capacity) are involved, the Nash equilibrium has firms price at cost. Thus, a merger is predicted to have no unilateral effect as long as the postmerger structure has at least two firms. However, we also know that collusion is easier with fewer firms (see figure 4.14). Thus, a merger could have coordinated effects if the increased concentration made it more likely that collusion would emerge, even though unilateral effects are minimal.

Another scenario in which a merger would have no unilateral effects but possibly have coordinated effects is when it involves firms operating in multiple markets. Consider figure 6.4, which has firms 1 and 2 in market I, firm 3 in markets I and II, and firm 4 in market II. For example, these could be geographic markets, and the firms are retail chains. While a merger between firms 1 and 2 could have unilateral effects (in market I), a merger between firms 2 and 4 would have no unilateral effects (as they do not compete), but it could have coordinated effects. To see how, suppose collusion is difficult in market I but is easy in market II. It is possible that the merger could make collusion viable in market I. While firm 2 may have been unwilling to collude in market I prior to the merger, it may now be inclined to do so because of the threat that if it does not collude in market I, firm 3 will punish it in both markets I and II.
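To see the linkage effect with some purely illustrative numbers: suppose that in market I each colluding firm earns 10 per period, a firm that cheats earns 25 in the period it cheats, and punishment yields 0 thereafter, so that collusion in market I alone requires 10/(1 − δ) ≥ 25, or a discount factor δ ≥ 0.6. Suppose market II is easier to collude in, with a collusive profit of 10 and a deviation profit of only 15, so it requires only δ ≥ 1/3. If the merger links the markets and any deviation triggers punishment in both, the binding condition becomes 20/(1 − δ) ≥ 25 + 15, or δ ≥ 0.5. With, say, δ = 0.55, collusion in market I is not sustainable on a stand-alone basis but becomes sustainable once firm 3 can punish the merged firm in both markets.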

Figure 6.4 Merger among Multimarket Firms

Efficiencies

Thus far we have focused on the nefarious side to a merger—enhanced market power—but there is also a virtuous side, which is greater efficiencies as reflected in lower costs (as well as better products, more innovation, etc.). There are two broad categories of cost savings: pecuniary and real. Pecuniary efficiencies are monetary savings from buying goods or services more cheaply. For example, the larger firm size resulting from a merger may enhance bargaining strength when dealing with suppliers. However, those savings do not represent a gain in surplus but rather a shift of surplus from input suppliers to the merged

firm and thus should not count as a social benefit from the merger. In contrast, real efficiencies involve genuine resource savings through, for example, increased specialization, scale economies, and sharing capital in the joint production of some goods. As only real efficiencies are socially beneficial, we will focus on them. We first consider economies of scale (so the efficiency is from combining volume) and then economies of scope (so the efficiency is from combining products).10

An example of a real efficiency due to a horizontal merger involved three English antifriction bearing manufacturers.11 These firms were able to revamp production assignments among their plants so as to eliminate duplication and lengthen runs. The changes led to a 40 percent increase in output per employee within three years. In the market for funeral services, SCI acquired small, family-owned businesses to the point that it was operating more than 1,800 funeral homes and cemeteries in the United States and Canada.12 These acquisitions created efficiencies through the sharing of capital, as SCI funeral homes share limos and hearses, and it has regional centers for embalming and cremation. In a proposed merger that ultimately did not come to fruition because of the FTC's opposition, Heinz intended to purchase the parent company of Beech-Nut.13 The primary competitive concern was in the market for baby food, where Gerber was the market leader, and Beech-Nut and Heinz were the other two major players. A projected efficiency was that Heinz intended to shift production from Beech-Nut's old, high-cost, labor-intensive plant to the more automated plant of Heinz. Beech-Nut's plant required 320 workers to produce 10 million cases of baby food per year, while Heinz only needed 150 workers to yield 12 million cases. Also, Heinz was operating at only 40 percent of capacity and could handle all of Beech-Nut's production.
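In per-worker terms, the projected gain is large: the Beech-Nut plant was producing roughly 31,000 cases per worker per year (10 million cases with 320 workers), whereas the Heinz plant was producing about 80,000 cases per worker (12 million cases with 150 workers).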

Figure 6.5 Analysis of Cost Savings from a Merger

To keep our analysis of cost savings untainted by market power effects, suppose the merged firm was to keep total supply unchanged at Q′ = q1 + q2. To maximize profit, it will allocate Q′ across its two production units (previously, firms 1 and 2) so as to minimize total cost. The first point we want to make is that, as long as the cost-minimizing solution has both production units being active, it must result in their marginal costs being equalized. To prove this claim, we will suppose marginal costs are not equal and then show that supply can be reallocated so as to reduce total cost. Let production unit 1's output be q0, which implies production unit 2 produces Q′ − q0, and suppose their marginal costs are unequal and, furthermore, MC1(q0) > MC2(Q′ − q0). Now move a single unit of supply from production unit 1 to unit 2. This reduces the total cost incurred by unit 1 by MC1(q0) and raises the total cost of unit 2 by MC2(Q′ − q0); hence, total cost declines by MC1(q0) − MC2(Q′ − q0) > 0. Starting from the premerger quantities q1 and q2, the merged firm can continuously shift supply from unit 1 to unit 2, lowering cost as it does so, until the cost-minimizing quantities q1* and q2* are reached, where, as shown in figure 6.5, marginal costs are equalized. In this manner, there is an efficiency from the merger that represents a gain in profit and surplus.14
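To make the reallocation logic concrete, here is a minimal sketch that assumes linear marginal cost curves for the two production units; the functional forms and numbers are hypothetical, chosen only for illustration. It finds the split of a fixed total output at which marginal costs are equalized and compares total cost against an arbitrary equal split.

```python
# Sketch: allocate a fixed total output across two plants to minimize cost.
# Assumes linear marginal cost curves MC_i(q) = a_i + b_i * q (hypothetical
# parameters; figure 6.5 in the text treats the general case).

def allocate_output(Q, a1, b1, a2, b2):
    """Split total output Q between plants 1 and 2 so that MC1(q1) = MC2(q2),
    provided both plants stay active; otherwise assign everything to the
    cheaper plant (a corner solution)."""
    # Interior solution of a1 + b1*q1 = a2 + b2*(Q - q1)
    q1 = (a2 - a1 + b2 * Q) / (b1 + b2)
    q1 = min(max(q1, 0.0), Q)
    return q1, Q - q1

def total_cost(q1, q2, a1, b1, a2, b2):
    """Total variable cost = integral of each plant's marginal cost curve."""
    return (a1 * q1 + 0.5 * b1 * q1**2) + (a2 * q2 + 0.5 * b2 * q2**2)

# Plant 2 has the lower marginal cost curve, as in the text's example.
a1, b1 = 10.0, 0.50      # plant 1: MC1(q) = 10 + 0.5q
a2, b2 = 6.0, 0.40       # plant 2: MC2(q) = 6 + 0.4q
Q = 40.0                 # total supply held fixed to isolate the cost effect

naive = (20.0, 20.0)                      # an arbitrary equal split for comparison
best = allocate_output(Q, a1, b1, a2, b2) # marginal-cost-equalizing split
print("cost with equal split:     ", total_cost(*naive, a1, b1, a2, b2))
print("cost-minimizing allocation:", best)
print("cost at that allocation:   ", total_cost(*best, a1, b1, a2, b2))
```

With these assumed numbers, the cost-minimizing split shifts output toward the more efficient plant (about 13.3 versus 26.7 units) and lowers total cost, exactly the reallocation gain described above.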

In light of this analysis, it is relevant to ask whether a more efficient allocation of production can lower price even when the merged firm optimally adjusts its total quantity (which means that market power effects come into play). Unfortunately, it cannot. If the merger is profitable and the cost functions are unchanged (as presumed in the previous discussion), then the postmerger price must exceed the premerger price.15 The market power effect will trump the reduced marginal cost, so that the merged firm produces less (compared to the total premerger supply of the firms involved in the merger) and industry supply shrinks. If a profitable merger is to lower price and make consumers better off, it must result in a better technology and not simply a shifting of supply to more efficient plants. A merger can also have economies of scope by combining product lines or production processes, which reduces costs and enhances productivity. The mergers of pharmaceutical companies have benefited from this efficiency. Knowledge spillovers arise when discoveries across programs stimulate advances. For example, the search for cardiovascular drugs led to improvements in central nervous system therapies. A study of ten major pharmaceutical companies over twenty years found that economies of scope were indeed present but were exhausted once the firm had more than six to seven major research programs.16 That is the type of effect relevant to evaluating a merger. Our discussion has concerned only static efficiencies, in that they are realized on a short time scale and are not endogenous to the decisions of the merged firm. Dynamic efficiencies can arise when a merger alters the incentives to adopt new technologies or invest in reducing cost and developing new products. For some mergers, dynamic efficiencies could far exceed any of the efficiencies discussed above. A case in point is the merger of Boeing and McDonnell Douglas in 1997.17 The relevant market is medium-sized, wide-bodied (i.e., two passenger aisles) aircraft. In the mid-1990s, there were three manufacturers (Airbus, Boeing, and McDonnell Douglas) with four products on the market (A330, A340, B777, and MD-11). It is well documented that the production of aircraft is characterized by a strong learning curve (or learning-by-doing). It is a labor-intensive production process for which labor productivity increases with the amount of past production. The more planes that are built, the more improvements of the production process are discovered and implemented. On acquiring McDonnell Douglas, Boeing shut down the production of the MD-11. By itself, that is harmful to consumers because of reduced product variety. As it turned out, there was a large reduction in future marginal cost associated with increased learning-by-doing with regard to the B777. The B777 was introduced only two years prior to the merger, and by concentrating production in the B777—rather than having it split between the B777 and the MD-11—cost fell at a faster rate. It was estimated that if Boeing had not shut down the MD-11, the merger would have reduced consumer surplus by around $7.61 billion due to the market power (unilateral) effect. By closing the MD-11 plant and thereby enhancing the learning curve associated with the B777, consumers were better off by $5.14 billion from the lower prices (as a result of lower costs) and quality improvements.
Welfare Analysis
Having examined why firms merge, let us now turn to considering when society should allow them to do so. Addressing this question requires evaluating the effect of a merger on prices and quantities and ultimately on the welfare of consumers and the profits of firms. It also depends on the welfare criterion used to judge a merger. While our analysis here will presume a social welfare standard—so a merger is deemed desirable when the change in consumer surplus and industry profit is positive—it is important to remember that the standard used in the United States is consumer surplus.
That criterion will be assumed later when we consider the actual practice of evaluating horizontal mergers. Figure 6.6 provides a simple first cut at encompassing the market power and efficiency effects of a merger in what is referred to as the Williamson model.18 The horizontal line AC0 represents the level of average costs of two firms before they are combined, and AC1 shows the average costs after merger. Before merger, the degree of competition is sufficient to force price down to AC0. After merger, costs fall and

market power is created, leading to a price increase from P0 to P1.

Figure 6.6 Social Benefits (A2) and Costs (A1) of a Horizontal Merger—Perfect Competition in Premerger Stage

The merger results in a deadweight loss in consumers' surplus equal to the shaded triangle A1. However, there is a gain to society because of the cost savings, given by the shaded rectangle A2. It represents the cost savings in producing output q1 at the lower average cost. The main result of this analysis is that a relatively small percentage cost reduction will offset a relatively large price increase, thereby making society indifferent to the merger. For example, if a merger is expected to increase price by 20 percent, only a 2.4 percent cost reduction is required to equate areas A1 and A2 in figure 6.6.19 (These particular numbers also assume a unitary price elasticity of demand.) Table 6.2 presents the cost reductions required for alternative assumptions about price increases and demand elasticities.

Table 6.2 Percentage Cost Reduction Sufficient to Offset Percentage Price Increases for Selected Values of Elasticity of Demand

                        Percentage price increase
Elasticity of demand    5        10       20
η = 3                   0.43     2.00     10.37
η = 2                   0.28     1.21     5.76
η = 1                   0.13     0.55     2.40
η = 1/2                 0.06     0.26     1.10

Note: Entries are the percentage cost reduction sufficient to offset the indicated price increase. Computed by authors using formula in note 19. η, elasticity of demand.
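The entries in table 6.2 are consistent with a required proportionate cost reduction of (η/2)(Δp/p)²(1 + Δp/p)^η, where Δp/p is the proportionate price increase. Note 19, which states the authors' formula, is not reproduced here, so the short sketch below should be read as an inference from the published table rather than the authors' own computation.

```python
# Sketch: regenerate table 6.2 under the assumed formula
#   required cost reduction (%) = 100 * (eta/2) * t**2 * (1 + t)**eta,
# where t is the proportionate price increase and eta the elasticity of demand.
# The functional form is inferred from the table's entries, not taken from note 19.

def required_cost_reduction(t, eta):
    """Percentage cost reduction that offsets a proportionate price increase t."""
    return 100 * (eta / 2) * t**2 * (1 + t)**eta

price_increases = [0.05, 0.10, 0.20]
elasticities = [3, 2, 1, 0.5]

print("eta\t" + "\t".join(f"{100*t:.0f}%" for t in price_increases))
for eta in elasticities:
    row = "\t".join(f"{required_cost_reduction(t, eta):.2f}" for t in price_increases)
    print(f"{eta}\t{row}")
# Example: eta = 1 and a 20 percent price rise gives 2.40, matching the table.
```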

Let us show more generally that the welfare effect of cost reductions tends to swamp those of price rises. First, note that the area of triangle A1 in figure 6.6 is (1/2)(P1 − P0)(q0 − q1) and that of rectangle A2 is q1(AC0 − AC1). If the reduction in average cost is sufficiently large relative to the rise in price, then the merger raises welfare, since A2 is bigger than A1. Similarly, if the rise in price is sufficiently large relative to the reduction in average cost, then the merger lowers welfare. More interesting is to analyze the welfare gain when the change in cost is comparable in size to the change in price. To consider such a case, let us suppose that they are proportional to each other, so that

AC0 − AC1 = a(P1 − P0),

where a > 0 and we are assuming that it is not too different from 1. As the welfare change is the gain due to cost savings less the forgone surplus due to reduced supply, it equals

ΔW = q1(AC0 − AC1) − (1/2)(P1 − P0)(q0 − q1) = aq1(P1 − P0) − (1/2)(P1 − P0)(q0 − q1),

where we have substituted a(P1 − P0) for AC0 − AC1. Collecting terms, the change in welfare is

ΔW = (P1 − P0)[aq1 − (1/2)(q0 − q1)].

If the change in cost is comparable in size to the change in price and the rise in price is not too large (or demand is not too elastic), so that q1 is not too much smaller than q0, it follows that aq1 > (1/2)(q0 − q1), and therefore ΔW > 0.

Hence, a merger for which the decrease in cost is comparable in size to the increase in price will always raise welfare. There is a sound economic reason for why a cost reduction has more impact on welfare than a price hike. At the competitive output of q0, the marginal willingness to pay (MWP) for the last unit (which is measured by the height of the demand curve at q0) equals the marginal cost (MC) to society of producing that unit (which is measured by AC0 in figure 6.6). As the surplus created by a unit is MWP − MC, that last unit produced and consumed adds nothing to surplus. Thus, when a merger causes supply to decrease, very little loss in surplus results from those initial units no longer sold. In contrast, there is a cost saving of AC0 − AC1 on each and every unit. This result is weakened, however, when the industry is not competitive prior to the merger.20 Consider figure 6.7, which is identical to figure 6.6 except that the premerger price, still denoted P0, now exceeds the premerger cost. The loss from reduced supply is no longer a triangle but rather trapezoid B1, while the gain from cost savings remains a rectangle, now denoted by B2. First note that the forgone surplus from reduced supply of q0 − q1 is larger, because the premerger quantity was below the competitive level. Next note that a trapezoid is the sum of a rectangle and a triangle, and since the rectangle portion of the trapezoid cannot be

guaranteed to be smaller than B2 when the change in price and cost are small, the argument associated with figure 6.6 does not apply. Intuitively, the initial units that are no longer sold because of a merger-induced price rise now have positive surplus because, prior to the merger, MWP − MC > 0. In sum, when the premerger market is imperfectly competitive, then the reduction in cost necessary to offset a rise in price is not quite as small as suggested by table 6.2.
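To see the difference numerically, here is a minimal sketch with assumed (purely illustrative) prices, quantities, and costs. It compares the forgone-surplus trapezoid B1 of figure 6.7 with the cost-savings rectangle B2, first with a competitive premerger price and then with a premerger markup.

```python
# Sketch: the Williamson comparison when the premerger price already exceeds cost.
# Forgone surplus B1 is a trapezoid: (q0 - q1)*(P0 - AC0) + 0.5*(P1 - P0)*(q0 - q1).
# Cost saving B2 is a rectangle: q1*(AC0 - AC1). All numbers are assumed for illustration.

def welfare_change(P0, P1, q0, q1, AC0, AC1):
    B1 = (q0 - q1) * (P0 - AC0) + 0.5 * (P1 - P0) * (q0 - q1)  # forgone surplus
    B2 = q1 * (AC0 - AC1)                                       # cost savings
    return B2 - B1

# Competitive premerger benchmark (P0 = AC0): a 1 percent cost saving easily wins.
print(welfare_change(P0=100, P1=105, q0=100, q1=95, AC0=100, AC1=99))   # positive
# Same price rise and cost saving, but a sizable premerger markup (P0 > AC0):
print(welfare_change(P0=100, P1=105, q0=100, q1=95, AC0=80, AC1=79))    # negative
```

With a premerger markup, the units no longer sold were already generating surplus, so the same cost saving can fail to offset the lost surplus, which is the point made above.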

Figure 6.7 Social Benefits (B2) and Costs (B1) of a Horizontal Merger—Imperfect Competition in Premerger Stage

Let us conclude with two additional qualifications. First, the analysis ignored the effect of the merger on the decisions of rival firms. For example, if the merged firm prices higher, then other firms should be induced to also price higher, as the rise in a competitor's price serves to shift out a firm's demand function. The rivals' response to the merger will then magnify the welfare losses from enhanced market power. Second, if the criterion by which a merger is evaluated is consumer surplus (as it is in the United States), then cost will have to fall sufficiently to offset the market power effect, so that price is lower. We return to this point in the next section. Finally, can we infer anything about the welfare effect of a proposed merger if competitors are challenging it? Remaining with the Williamson model, all that a competitor cares about is whether the merged firm will produce more or less output compared to the combined premerger output of the firms engaged in the merger. If a merger causes the merged firm's output to fall (rise), then the competitors are better (worse) off, since there is more (less) demand left for them to supply. So suppose the merged firm

produces less (because cost savings are small or nonexistent). The stronger demand faced by competitors induces them to supply more, but, on net, total industry supply can be shown to fall, so that price is higher. In that case, competitors will not challenge the merger, because their profit is higher. However, consumer surplus is lower—as price is higher—and the merger is likely to reduce welfare. Analogously, if the merged firm produces more (due to sufficiently large cost savings), competitors will respond by contracting their supply and, on net, price will be lower. Competitors are clearly worse off and may challenge the merger, but the merger is welfare-enhancing, as price is lower and there are cost savings. Thus, if competitors challenge a merger, then the merger must improve welfare and should be approved! An exception to the above policy statement is when the merged firm engages in anticompetitive practices, such as those that raise competitors' costs. If competitors have higher costs, then their profits are likely to be lower, and it can also result in price being higher, so that consumers are worse off as well. Such an argument was made in connection with the 1985 acquisition of Pan American World Airways' Pacific Division by United Airlines. Northwest Airlines challenged the merger, and the purported device by which the merger would raise its cost was the proprietary reservation system of United used by travel agents.21
Merger Law and Enforcement
Now that we have some understanding as to why firms would want to merge and what might be the welfare implications of a merger, the next task is to consider the government's role in evaluating prospective mergers. After reviewing procedures and some facts about the merger evaluation process, the focus turns to issues regarding U.S. judicial practice and empirical methods used by competition authorities when assessing a merger's possible impact.
Merger Evaluation: Activity and Procedures
In principle, competition policy with regard to mergers could be conducted before or after a merger. The "before" strategy would prohibit a proposed merger that is conjectured to be anticompetitive, while an "after" strategy would penalize those mergers that prove to be anticompetitive (such as through fines or unwinding the merger). The virtue of evaluating proposed, rather than consummated, mergers is that it avoids unnecessary anticompetitive effects as well as the transaction costs associated with merging and demerging. However, this strategy suffers from having to make a decision based on forecasted rather than actual effects. Since the passage of the Hart-Scott-Rodino Act in 1976, the "before" strategy has been assisted by the legal requirement that a proposed merger for which the value of the transaction exceeds some magnitude must be reported to the DOJ and the FTC. The minimum value of a transaction for which premerger notification is required by law is $78.2 million as of 2016 (and is indexed to gross national product). Table 6.3 provides the number of prospective mergers reported during 2002–2015. Table 6.3 Transactions under the Hart-Scott-Rodino Premerger Notification Program, 2002–2015

On notification, the companies must wait thirty days before consummating the merger in order to give the government time to evaluate the case. If the government decides that further inquiry is required, it can issue a request for additional information and documentary material—known as a "Second Request"—which extends the waiting period for another thirty days. Second Requests are generally issued because of anticompetitive concerns and, as reported in table 6.3, are quite rare. Only 1–2.5 percent of all merger notifications result in a Second Request. The reviewing agency could approve the merger or, if it finds that it would substantially lessen competition, seek an injunction in federal district court to prohibit consummation of the transaction. For example, in 2015 the DOJ sued to block the proposed sale of GE's home appliance division to Electrolux, which would have combined two of the three largest suppliers of home appliances in the United States. In response to such an action, the companies can either drop the merger or pursue the matter in court. Outright prohibition of a merger is very rare. It is far more common for the agency to require a remedy intended to eliminate the anticompetitive effects. Remedies are often used when the concern lies in some subset of markets for which the merger would have substantial market power effects. An example is the 2013 merger of American Airlines and U.S. Airways. The DOJ sought to block the proposed merger because of possible unilateral and coordinated effects. In its complaint, the DOJ stated that the merger would cause unilateral effects on some routes including a monopoly on 63 percent of the nonstop routes from Reagan National Airport in Washington, DC. By reducing the number of major airlines from four to three, it was also felt that there might be coordinated effects. To alleviate the DOJ's concerns, the airlines agreed to sell a combined 104 takeoff and landing slots at Reagan National Airport and thirty-four slots at La Guardia Airport in New York to low-cost carriers, such as JetBlue and Southwest. This sale would effectively decrease the number of arrival and departure flights controlled by the merged entity. Time will tell whether this condition actually prevented higher prices.22 The European Commission (EC) has a broadly similar process with some distinctive elements. On being notified of a proposed merger, the EC conducts a phase I investigation, which can either conclude with the clearing of the merger (unconditionally or with remedies) or the opening of a phase II investigation because of anticompetitive concerns. More than 90 percent of cases are resolved in phase I. Phase II is a far more in-depth analysis of the merger's competitive impact and includes a close examination of claimed efficiencies. If the EC concludes that the merger would harm competition, it sends a Statement of Objections to the parties, who then have the right to respond. Following the phase II investigation, the EC may either unconditionally clear the merger, approve the merger with remedies, or prohibit the merger. As in the United States, all decisions of the EC are subject to judicial review.

Development of Merger Law and Policy
Era of structural presumption
As discussed earlier, Section 7 of the Clayton Act was amended in 1950 to plug the "asset loophole." The Supreme Court's first ruling under the new act came in the Brown Shoe case of 1962.23 This case involved the merger of Brown, the fourth largest manufacturer of shoes in the United States with about 4 percent of the market, and G. R. Kinney Company, the twelfth largest with a 0.5 percent share. Both companies were also engaged in shoe retailing. Brown had 2.1 percent of the national retail market, and Kinney had 1.6 percent. Though the case involved both horizontal and vertical dimensions, we shall deal only with the horizontal aspects here. The first step the Court took was to discuss the definition of the relevant retail shoe market.24 That is, should total retail shoes sold in the United States be the market, or should men's shoes and women's shoes in large cities be separate relevant markets? It clearly matters, since the two firms had a combined share of only about 4 percent of the national market, while they had 57 percent of the market for women's shoes in Dodge City, Kansas. According to the Court in Brown Shoe (1962): The "area of effective competition" must be determined by reference to a product market (the "line of commerce") and a geographic market (the "section of the country"). The outer boundaries of a product market are determined by the reasonable interchangeability of use or the cross-elasticity of demand between the product itself and substitutes for it. However, within this broad market, well-defined submarkets may exist which, in themselves, constitute product markets for antitrust purposes. The boundaries of such a submarket may be determined by examining such practical indicia as industry or public recognition of the submarket as a separate economic entity, the product's peculiar characteristics and uses, unique production facilities, distinct customers, distinct prices, sensitivity to price changes, and specialized vendors. Applying these considerations to the present case, we conclude that the record supports the District Court's findings that the relevant lines of commerce are men's, women's, and children's shoes. These product lines are recognized by the public; each line is manufactured in separate plants; each has characteristics peculiar to itself rendering it generally noncompetitive with the others; and each is, of course, directed toward a distinct class of customers.

Next, the Court turned to the geographic dimensions of the market: The criteria to be used in determining the appropriate geographic market are essentially similar to those used to determine the relevant product market. Congress prescribed a pragmatic, factual approach to the definition of the relevant market and not a formal, legalistic one. The geographic market selected must, therefore, both “correspond to the commercial realities” of the industry and be economically significant. Thus, although the geographic market in some instances may encompass the entire Nation, under other circumstances it may be as small as a single metropolitan area.

The Court concluded that the relevant geographic markets were “every city with a population exceeding 10,000 and its immediate contiguous surrounding territory in which both Brown and Kinney sold shoes at retail through stores they either owned or controlled.” By this definition, the Court found some markets with high combined market shares. For example, the maximum was the 57 percent in Dodge City noted earlier. However, this was atypical. The most important statistic seemed to be that in 118 separate cities, the combined share of one of the relevant product lines exceeded 5 percent. If a merger achieving 5 percent control were now approved, we might be required to approve future merger efforts by Brown’s competitors.… The oligopoly Congress sought to avoid would then be furthered and it would be difficult to dissolve the combinations previously approved.

It is useful to contrast the Court’s opinion in Brown Shoe (1962) with the Williamson tradeoff analysis

described earlier. The small market shares seem unlikely to give rise to price increases from greater market power. However, the Court’s view of Congressional intent was that a trend toward increased concentration should be stopped in its incipiency. What about cost savings? In Brown Shoe the Court recognized that integrated operations could create efficiencies; however, such efficiencies were not regarded as important as maintaining small competitors: The retail outlets of integrated companies, by eliminating wholesalers and by increasing the volume of purchases from the manufacturing division of the enterprise, can market their own brands at prices below those of competing independent retailers. Of course, some of the results of large integrated or chain operations are beneficial to consumers. But we cannot fail to recognize Congress’ desire to promote competition through the protection of viable, small, locally owned businesses. Congress appreciated that occasional higher costs and prices might result from the maintenance of fragmented industries and markets. It resolved these competing considerations in favor of decentralization. We must give effect to that decision.

An interpretation of the Brown Shoe decision is that antitrust has multiple objectives, with economic efficiency being just one of them. More specifically, the objective of maintaining many small retailers must be balanced (somehow) against the higher costs to consumers. One antitrust expert, Robert Bork, disputed the view that Congress intended multiple goals for antitrust: In Brown Shoe, in fact, the Supreme Court went so far as to attribute to Congress a decision to prefer the interests of small, locally owned businesses to the interests of consumers. But to put the matter bluntly, there simply was no such congressional decision either in the legislative history or in the text of the statute.… The Warren Court was enforcing its own social preferences, not Congress’.25

The Brown Shoe decision is representative of the structural presumption that was an underpinning of merger law and practice in the 1960s and 1970s. Rooted in the structure-conduct-performance paradigm discussed in chapter 3, the structural presumption is that a merger is inherently harmful, because it raises market concentration. This view was well articulated in Philadelphia National Bank (1963): A merger which produces a firm controlling an undue percentage share of the relevant market, and results in a significant increase in the concentration of firms in that market is so inherently likely to lessen competition substantially that it must be enjoined in the absence of evidence clearly showing that the merger is not likely to have such anticompetitive effects.26

One can take that view and still allow mergers among competitors with small market shares on the grounds that anticompetitive effects may be small and cost savings may be large due to scale economies. However, it was a feature of practice at that time that mergers involving quite small market shares were prohibited. A classic case is the Von's decision in 1966.27 Von's was the third largest grocery chain in the Los Angeles area in 1960 when it acquired the sixth-ranked chain, Shopping Bag Food Stores. The combined firm had only 7.5 percent of the market and was second to Safeway Stores. Despite the low share of 7.5 percent, the Court found the merger to be illegal. The emphasis in the decision was on the trend toward fewer owners of single grocery stores in Los Angeles, as they had declined from 5,365 in 1950 to 3,818 in 1961. Also, the number of chains with two or more grocery stores increased from 96 to 150 from 1953 to 1962. According to Justice Black, "The basic purpose of the 1950 Celler-Kefauver bill was to prevent economic concentration in the American economy by keeping a large number of small competitors in business."
Rise of the Chicago school of antitrust and the merger guidelines
What was so striking about the structural presumption era is not just that mergers involving relatively low market shares were prohibited but that other evidence suggesting the market would remain highly competitive could rarely rebut it. The Supreme Court's General Dynamics decision in 1974 was a notable

departure from that perspective.28 The case involved two coal producers. The approach of the DOJ was its standard one: Define the relevant geographic market and calculate the market shares of the two merging parties. With this approach, it was determined that the merged firm would have a 23.2 percent market share of coal sales in Illinois, and the four-firm concentration ratio would be 75 percent. On those grounds, it challenged the merger. Taking a different perspective, the Court found these market shares were not relevant to assessing the impact of the merger. It correctly argued that what was critical for a firm to compete were coal reserves, and one of the parties of the merger had already locked up almost all of its reserves in long-term contracts, so it could not compete for new contracts. As it would not be much of a competitor even without the merger, the Court decided that the merger would not impact competition. While the General Dynamics case was not a turning point, it was reflective of a growing perspective to not simply define a market and measure the impact on concentration, but rather to more broadly assess the impact of a merger on competition. Particularly influential in the move away from the structural presumption was Robert Bork's The Antitrust Paradox in 1978. This intellectual perspective (referred to as the Chicago School of Antitrust) was the foundation for subsequent judicial developments that focused on assessing the impact of a merger on consumers while taking account of all relevant factors, including the threat of entry and cost efficiencies. As noted by the District of Columbia Circuit Court: "The Supreme Court had adopted a totality-of-the-circumstances approach … weighing a variety of factors to determine the effects of particular transactions on competition."29 An important step in this approach was the introduction of the Merger Guidelines by the DOJ. The Guidelines describe the set of factors considered when evaluating a merger and offer potential "safe harbors," whereby a proposed merger would not be challenged. While the Guidelines noted that market concentration was relevant, it was emphasized not to be determinative regarding whether a merger would be allowed. The guidelines were first released in 1968 by the DOJ with revisions in 1982, 1984, 1992 (at which time they began being co-issued by the FTC), 1997, and 2010.30 The 1982 guidelines gave prominence to entry conditions, while a greater emphasis on the efficiency defense was part of a minor revision in 1997. The EC introduced its own merger guidelines in 1989, which were substantially revised in 2004.
Practices for Evaluating a Merger
Will a merger have significant unilateral effects? Are there efficiencies? If projected efficiencies are realized, will they offset any unilateral effects? Are there likely to be coordinated effects? How effective is the threat of entry? These are the types of questions that need to be addressed when a competition authority evaluates a prospective merger. When the standard is consumer welfare (as in the United States), a competition authority ultimately wants to know whether it will cause price to rise and, if so, whether any offsetting benefits accrue to consumers, such as better products (e.g., more innovation with a merger of pharmaceutical companies) or more product variety (e.g., more flights with a merger of airlines). In our discussion, those other possible benefits will be put aside, so that we can focus on assessing the price effects of a merger.
For unilateral effects, we first consider market definition and the so-called “small but significant and nontransitory increase in price” (SSNIP) test—which has been part of the Merger Guidelines since 1982—and then turn to a more recent method that does not require market definition (and is covered in the 2010 Guidelines). These methods do not, however, measure the size of the price effects, although they can be informative of possible unilateral effects. Some methods for estimating the magnitude of the unilateral effect are then discussed, and we

conclude our coverage with coordinated effects and the consideration of entry.
Market definition and the SSNIP test
A major contribution of the Guidelines is in defining the relevant antitrust market. As we noted in the discussion of the cases earlier, delimitation of the relevant market has been crucial for determining the legality of a merger. The DOJ defines the market conceptually as follows in the 1982 Merger Guidelines:

This approach has come to be known as the SSNIP test. Operationally, the test is taken to be a 5 percent increase in price lasting for one year, but it serves only as a benchmark. An example should be instructive. For simplicity, assume that the product, gravel, has no close substitutes. Hence, we can focus on the geographic dimension of the market. Furthermore, assume that a merger between two gravel suppliers located in Centerville is being examined by the DOJ. At the present time, the market in Centerville is competitive, and the price is $100. Furthermore, assume that the cost per unit by suppliers everywhere is $100, and that it costs 25 cents to haul a ton of gravel 1 mile. The issue is to determine the geographic limits of the market. Should it stop at the city limits of Centerville? Given that it is costly to haul gravel from suppliers outside the city limits, how many miles should the market extend beyond the city limits? In figure 6.8 the horizontal axis shows Centerville located along a highway that extends to the east and west, where the numbers indicate the number of miles from Centerville’s city limits.

Figure 6.8 Geographic Market Definition

The height of the vertical line at Centerville represents the $100 competitive price at that location. The lines sloping upward to the east and west have vertical heights equal to $100 plus the miles away from Centerville multiplied by 25 cents per mile. Hence, at a distance of 20 miles to the east, the height of the sloping line is $100 + (0.25) (20) = $105. This number can be interpreted as the delivered price that a supplier 20 miles to the east could sell gravel for in Centerville. The Guidelines provide the answer to the market definition problem. The market should include all suppliers who would need to be part of a hypothetical cartel such that the price in Centerville could be raised by, say, 5 percent, to $105, on a nontransitory basis. If it costs 25 cents per mile to transport a ton of gravel, then a supplier 20 miles away could sell in Centerville at $105. Hence, all suppliers within 20 miles of Centerville should be part of the market. Notice that if the price increase is taken to be 10 percent, then the market boundaries should extend out to 40 miles, implying a market with more suppliers. The example makes the important point that one must decide on the percentage price increase before the market boundaries can be determined. In short, market power is a continuous variable—there is no magical way to determine a market without incorporating some standard. In fact, there were arguments at the DOJ as to whether a 5 percent or a 10 percent increase should be specified. Although 5 percent was specified, the Guidelines point out that it is not “an inflexible standard that will be used regardless of the circumstances of a given case.” A higher price increase of course means a broader market and thereby permits more mergers to slip by unchallenged. (Two particular merging firms will have lower market shares in a 40-mile market than in a 20-mile market and, as we shall explain, the Guidelines are more likely to endorse mergers that involve lower market shares.) A lower price increase means possibly prohibiting relatively harmless mergers or those that may promote efficiency. When products are homogeneous, such as cement and chemicals, market definition is fairly straightforward, and there is likely to be a consensus as how to define the market. However, the exercise of defining the market is fraught with challenges and disagreements when the products that firms offer are differentiated. Is Burger King in the same market as McDonald’s? What about Subway? What about Pizza Hut? What about Chipotle? What about Applebee’s? What about Outback? What about Ruth’s Chris Steakhouse? We may agree that McDonald’s and Ruth’s Chris Steakhouse are not good substitutes and thus do not belong to the same market, but where does one draw the line? Exemplifying the dispositive and problematic role of market definition was the proposed merger of Whole Foods and Wild Oats that came before the FTC in 2007. The two grocery store chains specialized in natural and organic food. The FTC sought to block the merger on the grounds that the relevant market definition was “premium natural/organic supermarkets” (PNOS), and indeed the merger would have significantly increased concentration in that case. But the situation is not so simple. Whole Foods and Wild Oats documented that many of their customers “cross shop” at traditional supermarkets where they make their nonorganic purchases but can also buy some organic foods. 
The District Court ruled that “the FTC has not met its burden to prove that ‘premium natural and organic supermarkets’ is the relevant product market in this case for antitrust purposes.”31 The ultimate problem with market definition is that it defines a firm as “in” or “out” when the reality is more subtle. Traditional supermarkets are competitors to some degree to PNOS, which would become clear if the PNOS were to significantly raise their prices and we witnessed consumers switching from PNOS to traditional supermarkets. This dissatisfaction with merger evaluation methods based on market definition led

to the development of an alternative approach, known as upward pricing pressure, which is superior by virtue of getting closer to the core issue of a merger: Will the merger raise prices?
Upward pricing pressure
The objective is to assess whether the merged firm will be inclined to push prices up or down, as well as the strength of that tendency. The approach is based on the following simple but powerful idea, which we will illustrate for the toothpaste market.32 Procter & Gamble's Crest and Colgate-Palmolive's Colgate have competed in the market for toothpaste since 1953. There are many other toothpaste suppliers, including GlaxoSmithKline (with AquaFresh), Unilever (with Close-Up), and Tom's of Maine. Suppose Colgate-Palmolive has proposed to buy the Crest brand from Procter & Gamble. What do we think will happen to the prices of Colgate and Crest? To answer this question, let us first consider the companies' pricing incentives prior to the merger. If Colgate-Palmolive considers raising the price of Colgate, it will earn a higher margin on the units it sells but will sell fewer units. If the first effect is smaller, then profit will go down, in which case it would not raise price. Its optimal price is where those two effects are of the same magnitude (but opposite sign), so that profit cannot be increased by either raising or lowering price. Note that the higher price would benefit rival Procter & Gamble, because it would cause the demand for Crest to rise. But Colgate-Palmolive ignores that effect, because it does not care about the profits earned on another company's product. It is a very different situation once Colgate-Palmolive acquires the Crest brand, because then it will care about the profits earned on Crest and, therefore, will be more inclined to set a higher price for Colgate. (This process shows the internalization of the externality that was discussed earlier in the chapter.) If the profit from the Colgate brand declines with a higher price for Colgate, it will now be at least partly offset by higher profit on Crest. The merger then enhances the incentive to raise price, and how strong that incentive is depends on the extent to which a higher price for Colgate results in higher profit earned on Crest. If much of the loss in demand for Colgate goes to, say, GlaxoSmithKline's AquaFresh, then the incentive to raise price is weaker. In addition, it depends on how much profit is made on each unit of Crest sold. Even if much of the loss of demand for Colgate goes to Crest, if the price-cost margin on Crest is small, then the rise in profit from Crest will be small, which implies the shift in demand from Colgate to Crest will not add much profit. Therefore, the merged firm will not raise the price of Colgate very much. These forces are displayed in figure 6.9, where product 1 is Colgate and product 2 is Crest. If the price of Colgate rises by Δp1 from its premerger level, the profit earned on Colgate falls by the area A in figure 6.9, because of the lost profit from selling Δq1 fewer tubes of Colgate; but it also rises by the area B because of the higher price-cost margin on the tubes still sold. Suppose areas A and B are the same size, in which case the price hike would not change profit. After the acquisition of Crest, Colgate-Palmolive would also take into account the higher demand and profit on Crest. As shown in figure 6.9, the demand function for Crest, D2(p1, p2), shifts out because of the higher price for Colgate, which raises the demand for Crest by Δq2.
Area C is the profit earned on the additional units sold. The merger now makes a price increase of Δp1 on Colgate profitable for Colgate-Palmolive.

Figure 6.9 Merger Enhances the Incentive to Raise Price

In sum, the incentive to raise price on Colgate due to the merger depends on how many consumers would substitute away from Colgate to Crest (which is referred to as the diversion ratio) and how profitable those added sales from Crest are (which is determined by the price-cost margin on Crest). These two forces combine to make up what is called upward pricing pressure (UPP). Let us now formalize these concepts in order to derive an expression that measures a firm's inclination to raise price after a merger.33 Continuing with the above example, let p1* denote firm 1's premerger price, q1 its premerger quantity, and c1 its marginal cost. A change in firm 1's profit associated with a small change in price, denoted Δp1, is measured by

q1Δp1 + (p1* − c1)Δq1,

where Δq1 is the change in demand, which is positive (negative) when Δp1 < (>) 0.34 A small rise in price will yield higher revenue and profit of q1Δp1 on the units being sold at the original price, but lower profit of (p1* − c1)Δq1 from selling Δq1 fewer units. After the merger, the effect of a small change in the price of product 1 on the merged firm's profit is now

q1Δp1 + (p1* − (1 − e1)c1)Δq1 + (p2* − c2)Δq2,     (6.1)

where p2* is the premerger price of product 2, Δq2 is the change in product 2's demand from the change in the price of product 1 (which will be positive when the price of product 1 rises), and 0 < e1 ≤ 1 captures the possible reduction in the marginal cost of product 1 because of a merger-induced efficiency, so that the postmerger marginal cost of product 1 is (1 − e1)c1. (The merger may also affect the cost of product 2 but, for simplicity, we are ignoring that effect.) Let us rearrange expression (6.1):

q1Δp1 + (p1* − (1 − e1)c1)Δq1 + (p2* − c2)Δq2
   = [q1Δp1 + (p1* − c1)Δq1] + (p2* − c2)Δq2 + e1c1Δq1
   = (p2* − c2)Δq2 + e1c1Δq1.     (6.2)

For the second equality, note that the bracketed term disappears, because p1* is presumed to be firm 1's premerger profit-maximizing price, in which case q1Δp1 + (p1* − c1)Δq1 = 0 (otherwise, firm 1 could increase profit by changing price, but then p1* would not be profit maximizing). In analyzing the first term in the final line of expression (6.2), (p2* − c2)Δq2 is the gain in profit from more sales of product 2 due to a small increase in the price for product 1 (and corresponds to area C in figure 6.9). That effect will tend to cause the merged firm to raise price, because it now values the additional profits earned on product 2. Offsetting that force is the term e1c1Δq1, which is negative because Δq1 < 0. That term captures the lower cost of producing product 1 due to the merger and will be a force driving price lower. Which effect is greater in magnitude determines whether the unilateral effect (after adjusting for the efficiency) raises or lowers price. It is useful to rearrange expression (6.2) as follows:

(p2* − c2)Δq2 + e1c1Δq1 = |Δq1| × [DR12(p2* − c2) − e1c1].

Recall that this expression measures the change in profit from a slight increase in the price of product 1 above its premerger level. If it is positive (negative), then the merged firm will be inclined to raise (lower) price. Here DR12 is the diversion ratio, and it measures the fraction of lost sales of product 1 (Δq1 < 0) that are captured by product 2 (Δq2 > 0). The diversion ratio is the absolute value of Δq2/Δq1, so it is a positive number. When the diversion ratio is higher, there will be a stronger tendency to raise price, because more of the lost sales from product 1 are captured by product 2, which is also owned by the merged firm. Of course, the value of these recaptured sales depends on the markup on product 2, which is why DR12 is multiplied by p2* − c2. After netting out the cost efficiency e1c1, the bracketed term, DR12(p2* − c2) − e1c1, is the UPP. If UPP > 0, then the merged firm will raise product 1's price from the premerger level, because that will raise total profit. If UPP < 0, then it will lower price.35 Figure 6.10 graphically depicts UPP. The three curves show the relationship between profit and the price of product 1. The bottom curve is the profit hill prior to the merger, and note that p1* maximizes profit. The middle profit hill is after the merger, assuming there is no efficiency (e1 = 0). It is higher because it includes the profit from the acquired product 2. More relevant, however, is that profit is now increasing in p1 at p1*. Consistent with figure 6.9, the merged firm will want to raise price. The slope of the line at the point where it is tangent to the arrow is UPP, and it measures how much profit rises in response to a higher price. The steeper is the slope, the more profit will rise with a price increase and, therefore, the larger the unilateral effect from the merger will be. The top profit hill adds in an efficiency (e1 > 0), which makes for higher profit. Note that the UPP is smaller, as reflected in the flatter slope. With the merger lowering the marginal cost of product 1, the incentive to raise price, UPP, is lessened.

Figure 6.10 Upward Pricing Pressure

When the products are symmetric, so that the premerger prices and costs are the same (p1* = p2* = p and c1 = c2 = c), it can be shown that

UPP = c(DR × m − e),

where DR is the common diversion ratio, m = (p − c)/c is the premerger price-cost margin (measured relative to cost), and e is the common efficiency. For example, if the premerger price-cost margin is 0.25 and the diversion ratio is 0.40 (which means that for each ten units of product 1 no longer sold because of product 1's higher price, product 2 captures four of those units), then

UPP = c(0.40 × 0.25 − e) = c(0.10 − e).

If the anticipated reduction in cost is less than 10 percent (that is, e < 0.10), then UPP is positive, which means that the merged firm would raise the prices of both products. Thus the efficiency gain must exceed 10 percent if the unilateral effect is not to harm consumers. It is crucial to emphasize that calculating the UPP did not require defining the market or identifying the relevant set of competitors of the merged firm. For example, it was unnecessary to determine whether the toothpaste sold by Tom's of Maine is a close substitute for Colgate and Crest. However, if we wanted to forecast postmerger prices, then the set of competitors would need to be determined. UPP is part of the 2010 Merger Guidelines and has been used as an initial screen to assess whether there

might be significant unilateral effects. An example is a 2015 FTC decision on the proposed merger of Dollar Tree and Family Dollar. These companies operate in the discount general merchandise retail store segment, and around the time of the proposed merger, Dollar Tree had 5,157 stores and Family Dollar 8,184 stores in the United States. For each geographic market, the FTC measured UPP. Ultimately, the FTC approved the merger subject to the divestiture of 330 stores, and about half of them were identified through the use of UPP.36 In 2014, Electrolux proposed to acquire the home appliance division of General Electric (GE). The DOJ blocked the acquisition, which resulted in the case going to court, where UPP was used for the first time in a contested merger case.37 The DOJ identified two channels through which home appliances were sold. The contract channel largely involved selling to builders who would install appliances in newly constructed homes. The contract market for cooking appliances was controlled by Electrolux, GE, and Whirlpool (with a combined market share of 97.5 percent), and the proposed acquisition would have raised Electrolux’s market share to 64.4 percent. The retail channel comprised selling through such stores as Home Depot and Lowe’s. The market was not quite as concentrated—as LG and Samsung were suppliers—but the postmerger market share of Electrolux would still be high at 59.2 percent. The DOJ’s expert witness estimated the UPP and the amount of cost reduction (that is, the value of e from the above discussion) necessary for the UPP to be zero. In other words, what is the minimum percentage reduction in cost due to the merger that would cause price not to rise? If the gain in efficiency is believed to be less than that reduction, then there will be consumer harm and the merger should be blocked; otherwise, it should be allowed. The companies claimed there would be a cost reduction of 3.2 percent. For three categories of cooking appliances, table 6.4 reports the minimum required efficiency for price not to rise in response to the acquisition. For example, the price of Electrolux ranges is predicted not to rise if cost declines by at least 16 percent, and the price of GE ranges is predicted not to rise if cost declines by at least 15 percent. For all three categories, the required efficiency gains are large compared to the predicted cost reductions. These estimates support a positive and large UPP for realistic efficiencies; therefore, the acquisition would harm consumers. During the trial, GE chose to terminate the deal, and its home appliance division was soon acquired by a Chinese manufacturer. Table 6.4 Reduction in Marginal Cost for Upward Pricing Pressure to Equal Zero (%)

                Electrolux    General Electric
Ranges          16            15
Cooktops        47            13
Wall Ovens      33            17

Source: Trial Exhibit of Michael D. Whinston in U.S. v. AB Electrolux, Electrolux North America, Inc. and General Electric Co. Public version.
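To make the screen concrete, the following minimal sketch implements the UPP expression derived above, UPP1 = DR12(p2 − c2) − e1c1, together with the breakeven efficiency at which UPP equals zero. The function names and numerical inputs are hypothetical illustrations; they are not taken from the Electrolux case or the trial exhibit.

```python
# Sketch: upward pricing pressure on product 1 after acquiring product 2,
#   UPP1 = DR12 * (p2 - c2) - e1 * c1,
# and the proportionate cost reduction e1 that drives UPP1 to zero.
# All inputs below are hypothetical.

def upp(dr12, p2, c2, e1, c1):
    """Diversion-weighted margin recaptured on product 2, net of the
    merger-induced reduction in product 1's marginal cost."""
    return dr12 * (p2 - c2) - e1 * c1

def breakeven_efficiency(dr12, p2, c2, c1):
    """Smallest e1 for which UPP1 is not positive (price not pushed up)."""
    return dr12 * (p2 - c2) / c1

# Symmetric example matching the text: margin (p - c)/c = 0.25, diversion ratio 0.40.
dr12, c1, p2, c2 = 0.40, 8.0, 10.0, 8.0
print(upp(dr12, p2, c2, e1=0.03, c1=c1))       # positive: a 3% efficiency is not enough
print(breakeven_efficiency(dr12, p2, c2, c1))  # 0.10, the 10 percent threshold above
```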

Methods for estimating postmerger prices
UPP is a useful screen for identifying points of concern warranting a more extensive analysis. Such an analysis would consider other factors, such as the ease of entry, and may also include an estimation of postmerger prices. Here we describe two estimation methods. The market comparison method uses premerger markets to predict what the postmerger environment would look like. It compares premerger prices in markets for which both firms are present with prices in markets when only one firm is present, while controlling for other differences in the markets. The

presumption is that the premerger price in a market with only one of the firms is a good proxy for the postmerger price in a market that, prior to the merger, has both firms operating. To illustrate this method, consider the proposed merger of Staples and Office Depot in the late 1990s.38 The relevant market was judged to be office superstore (OSS) chains, which consisted of the two merger partners plus OfficeMax. The defendants contested this definition. They argued that non-OSS firms (such as Walmart, Kmart, and Target) should be included, because more than 80 percent of office supplies are purchased through them. However, the FTC maintained that OSS firms were different from non-OSS firms, because they carried a broad range of office supplies—up to 7,500 items compared to no more than 2,400 items carried by, say, Walmart. This large selection permits one-stop shopping, which may be very appealing to small business customers. The FTC also argued that the defendants' own documents defined OSS firms as "the competition." Next, the FTC produced documents obtained from the firms and econometric studies showing that prices were higher in markets with fewer superstores. Referring to table 6.5, markets that had both Staples and Office Depot charged 11.6 percent lower prices than markets with only Staples. Adding Office Depot to where Staples and OfficeMax were already present lowered prices by 4.9 percent. Furthermore, direct competition between Staples and Office Depot was increasing over time. Among those markets for which Staples had a store, it was in competition with Office Depot in 46 percent of them in 1995 and 76 percent by 2000. Hence, markets could well become more competitive in the future if the merger were blocked.

Table 6.5 Average Price Differential for Different Office Superstore Market Structures

Benchmark Market Structure      Comparison Market Structure              Price Reduction (%)
Staples only                    Staples + Office Depot                   11.6
Staples + OfficeMax             Staples + OfficeMax + Office Depot       4.9
Office Depot only               Office Depot + Staples                   8.6
Office Depot + OfficeMax        Office Depot + OfficeMax + Staples       2.5

Source: Serdar Dalkir and Frederick R. Warren-Boulton, “Prices, Market Definition, and the Effect of Merger: Staples-Office Depot (1997),” in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution: The Role of Economics, 4th ed. (New York: Oxford University Press, 2004), p. 62.
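In spirit, the differentials in table 6.5 come from comparing (or regressing) prices across markets with different superstore configurations, with controls for cost and demand conditions. The toy sketch below illustrates the raw comparison with invented data; it is not the parties' econometric model, and the numbers are hypothetical.

```python
# Sketch: the market-comparison idea behind table 6.5, using invented price data.
# Compare average prices in markets where only Staples is present with markets
# where both Staples and Office Depot are present. (A real analysis would add
# controls for cost and demand differences across markets.)

staples_only_prices    = [102.0, 99.5, 101.2, 100.8]   # hypothetical price indices
staples_plus_od_prices = [89.7, 90.4, 88.9, 91.0]

mean_only = sum(staples_only_prices) / len(staples_only_prices)
mean_both = sum(staples_plus_od_prices) / len(staples_plus_od_prices)

differential = 100 * (mean_only - mean_both) / mean_only
print(f"Estimated price reduction from Office Depot's presence: {differential:.1f}%")
```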

Sophisticated econometric studies by both sides were submitted that attempted to control for factors other than just the existence of the other superstores, for example, determinants of cost and demand. The results were that the FTC found a 7.3 percent price increase, while the defendants found a much smaller increase of 2.4 percent. The anticompetitive effect of a positive price increase was supported by both sides, although the magnitudes were different. The parties then sought to take account of efficiencies, and there the disagreement was vast. The FTC argued that 43 percent of the cost savings claimed by the defendants would be achieved without the merger and should therefore be excluded. After other adjustments, the FTC concluded that cost would decline by only 1.4 percent and only 15 percent of it would be passed on to consumers. The net price effect estimated by the FTC was then a 7.1 percent increase, which is the original 7.3 percent (without efficiencies) less 0.15 × 1.4 percent (coming from the pass through of lower cost). In comparison, the defendants found a net price decrease of 2.2 percent. The judge was persuaded by the FTC and issued an injunction to bar the merger.39 Merger simulation is a second approach to estimating the impact of a merger on prices. Using premerger data on prices and quantities and assuming a particular model of competition among firms, it involves first

estimating firm demand and cost functions. With those estimated functions, profit-maximizing prices can be calculated for the postmerger environment. Let us explain the method with a simple example.40 Consider three single-product firms with firm 1 offering product x, firm 2 offering product y, and firm 3 offering product z; and suppose firms 1 and 2 are proposing to merge. Let pi and ci denote the price and cost, respectively, of product i, and Di(px, py, pz) be the demand for product i, where i = x, y, z. In the premerger environment, it is assumed that firms choose their prices to maximize their profits:

Firm 1 chooses px to maximize (px − cx)Dx(px, py, pz),
Firm 2 chooses py to maximize (py − cy)Dy(px, py, pz),
Firm 3 chooses pz to maximize (pz − cz)Dz(px, py, pz).

Using premerger data on prices and quantities and other variables that impact cost and demand (and assuming a particular form for the demand functions), the demand functions and costs are estimated. Estimation of demand is based on measuring how the amount sold varies with price. Estimation of costs requires using the assumption of profit maximization: Given the observed prices and quantities and the estimated demand functions, cost is inferred to be that level for which the observed prices maximize profits. Let ĉi and D̂i denote the estimated cost and demand function, respectively, for product i. With this information, we can simulate the equilibrium if firms 1 and 2 were to merge. The merged firm would control products x and y, while firm 3 would still control product z. Assuming no efficiencies, the postmerger environment is described by:

Merged firm chooses px and py to maximize (px − ĉx)D̂x(px, py, pz) + (py − ĉy)D̂y(px, py, pz),
Firm 3 chooses pz to maximize (pz − ĉz)D̂z(px, py, pz).

Given the cost and demand estimates, the postmerger equilibrium prices can be calculated. These predicted postmerger prices are then compared to the actual premerger prices to arrive at a forecast of the price effects of the merger. While this approach does not take account of any merger-induced cost reduction, it can be used to determine how much cost must decline for the merger not to raise prices. An academic (not agency) study used the merger simulation method on a series of proposed mergers in the ready-to-eat cereals industry in the 1990s.41 The second-largest supplier, General Mills, proposed to buy Nabisco's cereal line, which was the sixth-largest producer. The deal was called off because of anticompetitive concerns, which seems justified, based on the estimates from the merger simulation approach. For example, it was projected that the merged firm would have raised the price of Nabisco's Shredded Wheat by 7.5 percent. With that merger blocked, Kraft, the owner of the third-largest supplier, Post, proposed to acquire Nabisco's cereal line. This merger was approved, and the merger simulation supports that decision as, for example, it predicted that the prices of Shredded Wheat and Grape Nuts, which were seen to be close substitutes, would only rise by 3.1 and 1.5 percent, respectively. But the "one that got away" was the approval of General Mills' purchase of the Chex cereal line from Ralston. The predicted price increase of Chex was a massive 12.2 percent. Marginal cost would have had to decrease by 22.1 percent to offset this predicted price rise, which seems unrealistically high.
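As a minimal illustration of the mechanics (with the estimation step assumed away), the sketch below posits linear demand functions with made-up parameters, backs out marginal costs from the premerger first-order conditions, and then recomputes equilibrium prices when products x and y are priced jointly. All parameter values and prices are hypothetical; a real exercise would estimate the demand system from data, as described above.

```python
# Sketch: merger simulation with linear demands D_i = a - b*p_i + d*(sum of rival prices).
# Step 1: take "observed" premerger prices/quantities and infer costs from the
#         single-product first-order condition q_i - b*(p_i - c_i) = 0.
# Step 2: recompute equilibrium prices when products x and y are priced jointly.
# Parameters are invented for illustration.

a, b, d = 100.0, 2.0, 0.5

def demand(p_own, p_other1, p_other2):
    return a - b * p_own + d * (p_other1 + p_other2)

# "Observed" premerger data (here, a symmetric premerger equilibrium).
p_pre = [35.0, 35.0, 35.0]
q_pre = [demand(p_pre[0], p_pre[1], p_pre[2])] * 3

# Step 1: infer marginal costs from the premerger first-order conditions.
c = [p_pre[i] - q_pre[i] / b for i in range(3)]

# Step 2: postmerger equilibrium (products 1 and 2 merged), by iterating best responses.
p = list(p_pre)
for _ in range(500):
    # Merged firm's first-order conditions internalize the cross effect on the sister product.
    p[0] = (a + d * (p[1] + p[2]) + b * c[0] + d * (p[1] - c[1])) / (2 * b)
    p[1] = (a + d * (p[0] + p[2]) + b * c[1] + d * (p[0] - c[0])) / (2 * b)
    # The outsider (product 3) best-responds as before the merger.
    p[2] = (a + d * (p[0] + p[1]) + b * c[2]) / (2 * b)

for i in range(3):
    print(f"product {i + 1}: premerger {p_pre[i]:.2f} -> postmerger {p[i]:.2f}")
```

With these assumed parameters, the merged products' prices rise and the outsider's price rises by less, which is the qualitative pattern the text describes for a merger without efficiencies.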

Coordinated effects
Coordinated effects can be relevant when a market has conditions conducive to collusion (see chapter 4 for some of those conditions) or has experienced or attempted collusion in the past.42 Consider the first case in which the EC objected to a merger on grounds of coordinated effects (or "collective dominance," as it is referred to there).43 Decided in 1992, it involved the French bottled water market and Nestlé's proposed purchase of Perrier. Given the premerger situation (see table 6.6), unilateral effects (and possibly abuse of dominance, which is an anticompetitive concern covered in chapter 8) is a possible concern in that the merged firm is projected to have a 53 percent market share (see table 6.7).

Table 6.6 Premerger Market for French Bottled Water (million liters)

Company     Sales     Capacity     Market Share (%)
Nestlé        897        1,800          17.1
Perrier     1,885      >13,700          35.9
BSN         1,208        1,800          23
Others      1,258            ?          24

Table 6.7 Postmerger Scenarios in the Market for French Bottled Water (million liters)

Anticipating this issue, Nestlé proposed to sell Volvic (which was a major source of still mineral water for Perrier) to BSN. As shown in table 6.7, this would result in a more balanced market in which each of the two leading firms would have a 38 percent market share. However, it is exactly this (approximately) symmetric duopoly that heightens apprehensions of coordinated effects. Examining merger cases during 1990–2004, a study found that the EC generally did not raise concerns about coordinated effects unless the postmerger structure would be close to a symmetric duopoly.44 The prospect of coordinated effects in this market was accentuated by evidence of weak competition: a high degree of parallelism in market behavior, high price-cost margins, and a large differential between the prices of national mineral waters and local spring waters. With an eye for coordinated effects, let us compare two scenarios: merger with and without the resale of Volvic. It can be argued that merger without the resale is less conducive to collusion. With BSN having limited capacity, it would be constrained in its ability to punish Nestlé for deviating from the collusive price. That would make it less likely that collusion could be sustained. In contrast, the merger with the resale of Volvic to BSN is more conducive to collusion, because now BSN (with the additional capacity of Volvic) could effectively punish Nestlé for deviating from the collusive price, and that threatened punishment would make for stable collusion. Making capacity distribution more symmetric may then promote coordinated effects. The EC approved the merger under the condition that Perrier sell Volvic to BSN (as proposed) as well as several well-known brands and 3 billion liters of water capacity to a third party so as to create a sizable third firm. The merger was consummated under those conditions.
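The arithmetic behind the 53 percent and 38/38 figures can be laid out in a few lines. The sketch below uses the market shares from table 6.6; Volvic's share is not reported in the excerpt, so it is backed out from the 38/38 split described in the text rather than taken from the decision.

```python
# Market shares (%) from table 6.6; Volvic's share is inferred, not reported.
shares = {"Nestle": 17.1, "Perrier": 35.9, "BSN": 23.0, "Others": 24.0}

merged_no_resale = shares["Nestle"] + shares["Perrier"]   # about 53 percent
volvic_implied = merged_no_resale - 38.0                  # about 15 percent
merged_with_resale = merged_no_resale - volvic_implied    # 38 percent
bsn_with_resale = shares["BSN"] + volvic_implied          # 38 percent

print(f"Merged firm without resale: {merged_no_resale:.1f}%")
print(f"Implied Volvic share:       {volvic_implied:.1f}%")
print(f"With Volvic resold to BSN:  merged {merged_with_resale:.1f}%, BSN {bsn_with_resale:.1f}%")
```

The resale thus converts a dominant-firm structure into a nearly symmetric duopoly, which is exactly the configuration that heightened the EC's coordinated-effects concern.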

Entry conditions
Even when there is the prospect of unilateral or coordinated effects, a merger could be approved because it is believed that the threat of entry will either deter price increases or, if prices do rise, induce entry that drives price back down. For example, many airline mergers have been approved because of the possible constraining influence of low-cost carriers that could enter a route market in response to a substantive rise in fares. At the same time, there is a risk of placing excessive reliance on the disciplining role of entry. It is not enough that entry might occur in response to the exercise of a merger-created rise in market power; it is critical that entry does occur, and that depends on the cost of entry and the presence of potential entrants.

This issue arose in the Syufy Enterprises (1990) case.45 The DOJ challenged a merger to monopoly in the Las Vegas movie theater market. In rejecting the DOJ's argument, the Ninth Circuit Court appealed to the district court's statement that entry is easy and would prevent the exercise of monopoly power. The Court wrote: "We cannot and should not speculate as to the details of a potential competitor's performance; we need only determine whether there were barriers to the entry of new faces into the market." With that view, "some" entry in the recent past was sufficient. In making this decision, one wonders whether the structural presumption has been replaced with, let us call it, the "entry presumption": Entry will constrain firms unless there are substantial, well-documented entry barriers. Whether there is an entry presumption in current merger practice and, if so, whether there is empirical justification for it are both questions open to debate.

Entry considerations can also provide a rationale for prohibiting a merger between firms that are not competitors, on the grounds that the merger would eliminate a potential competitor and thereby make the market less competitive. An example is the Procter & Gamble case in 1967.46 Procter & Gamble was the dominant producer of soaps and detergents in the United States and had acquired Clorox, which was the leading manufacturer of household liquid bleach with 49 percent of the national market. Prior to the merger, Procter & Gamble did not sell bleach and thus was not a competitor to Clorox. The Supreme Court held that the merger violated Section 7 of the Clayton Act for several reasons. Here we confine the discussion to the Court's opinion regarding potential competition.

The FTC also found that the acquisition of Clorox by Procter eliminated Procter as a potential competitor. The evidence clearly shows that Procter was the most likely entrant. Procter was engaged in a vigorous program of diversifying into product lines closely related to its basic products. Liquid bleach was a natural avenue of diversification, since it is complementary to Procter's products, is sold to the same customers through the same channels, and is advertised and merchandised in the same manner. Procter had considered the possibility of independently entering but decided against it, because the acquisition of Clorox would enable Procter to capture a more commanding share of the market. It is clear that the existence of Procter at the edge of the industry exerted considerable influence on the market. First, the market behavior of the liquid bleach industry was influenced by each firm's predictions of the market behavior of its competitors, actual and potential.
Second, the barriers to entry by a firm of Procter’s size and with its advantages were not significant. There is no indication that the barriers were so high that the price Procter would have to charge would be above the price that would maximize the profits of the existing firms. Third, the number of potential entrants was not so large that the elimination of one would be insignificant. Few firms would have the temerity to challenge a firm as solidly entrenched as Clorox. By acquiring Clorox, Procter removed itself as a potential competitor. Because Procter was the most likely entrant and perhaps was unique in its capability to enter, the Court viewed this merger as removing an

important constraint on the exercise of market power in the bleach market.

International Issues
There are many competition authorities that review and control mergers. One source of differences among them is the standard used to judge whether a merger ought to be approved or blocked. In the United States, mergers are judged according to consumer surplus; if consumers will be harmed, then the merger should be blocked. As mentioned in chapter 3, the standard in Canada is total surplus, which means that a merger that harms consumers by, say, raising price can be approved if the cost savings exceed the loss in consumer welfare. In China, the Anti-Monopoly Law requires the Anti-Monopoly Bureau of China's Ministry of Commerce (MOFCOM) to consider, along with competitive effects, "the effect of the concentration on national economic development." Industrial policy—such as the promotion of domestic industries—can then intrude on competition policy.

From the perspective of global companies, the plethora of competition laws means having to gain the approval of multiple competition authorities when seeking to merge. Typically, the key jurisdictions are the United States, the European Union, and China. An example is the joint venture proposed by three global shipping companies in 2013. Known as the P3, the venture was described as intending to share capacity to reduce costs while leaving pricing, sales, and marketing to the individual companies. The United States and the European Union approved it, but China, which evaluated it as a merger, blocked it and thereby prevented the joint venture from being consummated. It has been debated whether MOFCOM's decision was based on preventing anticompetitive effects or on protecting domestic shipping companies.

A case exemplifying both jurisdictional differences and the challenges faced by companies was the proposed purchase of McDonnell Douglas (MD) by Boeing in 1996.47 The FTC approved the transaction by a vote of four to one, while the EC initially opposed the deal and approved it only after receiving some concessions. Table 6.8 shows the timeline for the case.

Table 6.8 Chronology of the Boeing–McDonnell Douglas Merger

December 14, 1996    Boeing and McDonnell Douglas sign agreements to merge.
January 29, 1997     Boeing and McDonnell Douglas file Hart-Scott-Rodino notice with FTC.
February 18, 1997    EC receives notification of Boeing–McDonnell Douglas merger.
February 28, 1997    FTC issues Hart-Scott-Rodino Second Request.
March 19, 1997       EC initiates further inquiry into the merger.
May 21, 1997         EC issues Statement of Objections to the merger.
June 12–13, 1997     EC holds hearings on the merger.
June 26, 1997        EC notifies FTC of its initial conclusions and asks FTC to account for EC's concerns about competition in the market for large commercial aircraft.
July 1, 1997         FTC announces that it will not oppose the merger.
July 13, 1997        DOJ and DOD inform EC of concerns that would arise if EC banned the merger.
July 30, 1997        EC issues its Final Decision permitting the merger with conditions.

Source: William E. Kovacic, "Transatlantic Turbulence: The Boeing McDonnell Douglas Merger and International Competition Policy," Antitrust Law Journal 68 (2001): 805–873.
Note: DOD, Department of Defense; DOJ, Department of Justice; EC, European Commission; FTC, Federal Trade Commission.

The market is commercial aircraft. Boeing had 64 percent of sales and MD had 6 percent, with the

remaining 30 percent controlled by Airbus. Boeing and MD were American companies, and Airbus was a French (and, therefore, European Union) company. One concern was unilateral effects, as the merger would make a highly concentrated market even more concentrated. A second concern was that the merger would enhance Boeing's dominant position, which could harm Airbus. This is a monopolization (or abuse of dominance) effect (which will be covered in chapter 8). Of particular note, Boeing had signed contracts with three airlines—American, Continental, and Delta—to be their exclusive supplier for the next twenty years. The acquisition of MD might deny Airbus access to customers that are critical for it to have sufficient scale to be competitive.

In explaining its decision, the FTC ignored the monopolization effect. While recognizing the possible unilateral effect, the FTC concluded that MD was no longer "a meaningful competitive force" and thus felt there would not be any adverse price effects. In contrast, the EC emphasized the monopolization effect and approved the merger only once Boeing agreed to retract the exclusivity clause from its contracts with the three airlines and not to enter into any exclusive contracts for twenty years. Was this a legitimate difference in the economic analysis? Or did the United States and the European Union value the harm to Airbus differently?48

Summary
A merger between competitors is a direct and immediate expansion of market power. This chapter has focused on the incentives to merge, how such mergers may impact social welfare, and the policy challenges faced when analyzing them. Using the Williamson model, the simplest analysis involves weighing any cost savings from a merger against any price increases emanating from increased market power. This enhanced market power may come from each firm's quantity decision having a bigger impact on price—which leads firms to want to constrain supply more—or from the reduction in the number of firms causing competition to be replaced with some form of collusion. Cost savings are a bit harder to identify, and their source varies across cases. However, one common source of cost savings is when the premerger firms have production units with different costs, in which case a merged firm can allocate supply more efficiently than the market.

Antitrust law and policy have evolved considerably. The Sherman Act proved inadequate in providing an effective role for the federal government to intervene in merger activity. Indeed, it was under the watch of the Sherman Act at the turn of the twentieth century that the first merger wave led to the creation of monopolies or near-monopolies in many important industries. While the Clayton Act of 1914 sought to provide the instruments to constrain merger activity, an important loophole permitted many mergers, so that new oligopolies were created through mergers and acquisitions in the 1920s. With the amendment of the Clayton Act in the 1950s, along with a sentiment in policy circles that was highly suspicious of firms with large market shares, merger policy was very aggressive in the 1960s and 1970s, as mergers were challenged that involved market shares below 5 percent. There is a consensus among economists that such a merger policy was far too interventionist and that concerns about market power were misplaced. A more tempered merger policy emerged in the 1980s and has been guided by the Horizontal Merger Guidelines jointly issued by the Antitrust Division of the DOJ and the FTC.
Along with merger policy, the economic methods for measuring the unilateral effects of a merger have evolved. The traditional and still most common method is to define the relevant market impacted by the merger and then assess how much prices are likely to rise and to what extent any price increase is offset by efficiencies or some other benefits to consumers. In response to the difficulties and controversies often

surrounding the task of market definition, a new method has been introduced. Upward pricing pressure can inform us about the direction and magnitude of price changes in response to a merger by using information on diversion ratios (when a firm raises price, how much of the reduced demand is captured by a rival firm) and price-cost margins for the firms involved in the merger. While UPP is still new to merger evaluation, it shows great promise as a practical tool.

Questions and Problems

1. Assume the following facts concerning the horizontal merger model developed by Williamson and shown in figure 6.6. Demand is q = 100 − P; average cost premerger, AC0 = $50; average cost postmerger, AC1 = $44; and premerger price, p0 = $50. Assume that the postmerger price, p1 = $70, results from the market power created from the merger.
a. Calculate the value of area A1.
b. Calculate the value of area A2.
c. Should the merger be allowed? What qualifications should be considered?

2. Assume all of the facts given in problem 1, except now take the premerger price, p0, to be $52. How does this affect your answers to problem 1?

3. Assume a homogeneous good market for cellphones. Two firms, 1 and 2, have a combined demand of q = 40 − 0.4P, and all manufacturers of cellphones have constant marginal and average costs of $50. Initially, the price is $50.
a. Firms 1 and 2 have decided to merge. They can lower their cost curve from $50 to $48 because of economies of combined operations. They expect that as the market leader, they can lead the industry to a new price of $60. Ignore industrywide effects (i.e., use the above demand curve) and compute social costs and benefits of the merger. On this basis, should the merger be approved?
b. Now recognize that the two firms above were initially part of a five-firm industry in which each firm acts as if it had a "share-of-the-market" demand curve of 20 percent of the market demand. The market demand is Q = 100 − P. (Note that the combined demand curve referred to in part a is in fact 40 percent of the market demand.) Would firm 3 favor or oppose the merger, assuming that the phone price rises to $60 and it operates on its "share-of-the-market" demand curve, q = 20 − 0.2P?
c. If social benefits and costs are now computed on an industrywide basis, should the merger be approved?
d. Now assume that greater cost savings are expected by firms 1 and 2. Their cost curve will shift down to $45 rather than to $48. It is now a real possibility that the new combined firm will decide to cut price to $50 (or just a bit below) and take over the entire market. Find the new firm's profits under the price increase strategy (of $60) and under the monopolization strategy. Given that the new firm will follow the most profitable strategy, will firm 3 favor or oppose the merger now?
e. How might information about rival firms' attitudes toward a merger (or their stock prices) be useful to antitrust enforcement agencies?

4. According to some economists, horizontal mergers may not always be profitable even though they reduce the number of suppliers. For example, assume a three-firm industry in which the firms behave according to the Cournot model. Let market demand be Q = 20 − P. Each firm has a constant average cost of $4. Now assume that a merger reduces the number of firms to two. Calculate the combined profits of the two firms premerger, and then calculate the profit of the combined firm in the postmerger situation—a Cournot duopoly. Is this a reasonable way of modeling the profitability of horizontal mergers?49

5. Identify two situations in which a merger between firms that currently do not compete can still have unilateral or coordinated effects.

6. A maverick firm is defined as a firm that is sufficiently aggressive that its presence could be an obstacle to effective collusion. An example involves Northwest Airlines and Southwest Airlines as mavericks in the U.S. airlines industry. In February 2000, Continental Airlines raised prices by $20–40. American, Delta, TWA, United, and US Airways matched the price increase. Northwest, America West, and Southwest did not match the price increase. As a result, the other airlines rolled back the increases. Three weeks later, Continental raised price again, in response to which the six other carriers matched, but again Northwest and Southwest did not. The price increases were rescinded. How should the presence of a maverick firm in a market affect the evaluation of a proposed merger? What if the maverick firm is not one of the merger parties? What if it is one of the parties?

7. Describe market conditions and a prospective merger such that there is a serious risk of unilateral effects but little risk of coordinated effects. Now describe market conditions and a prospective merger such that there is a serious risk of coordinated effects but little risk of unilateral effects.

8. How does the extent of merger-related efficiencies impact whether the merger will have unilateral effects? What about coordinated effects?

9. Firms 1 and 2 are proposing to merge. They offer symmetrically differentiated products and have identical costs and, therefore, identical premerger prices. (Note that "symmetrically differentiated products" means that if they charge the same price, then they have the same demand.) The common premerger price for firms 1 and 2 is $90, and the common marginal cost is $60. If firm 1 were to raise its price to $100, we know that its demand would drop by 20 units and firm 2's demand would rise by 5 units.
a. Assume the merger would reduce marginal cost by 10 percent. Using UPP, is there reason to be concerned with the merger?
b. Suppose the prospective merger partners want to convince the DOJ that the merger will not raise price. Using UPP, how large must they argue the efficiency is?
c. Suppose there are improved estimates of firms' demand functions, and now we know that if firm 1 were to raise its price to $100, its demand would (still) drop by 20 units, but firm 2's demand would rise by 10 units. Are unilateral effects stronger or weaker with this new demand estimate? Is the merger more or less likely to be approved?
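As a companion to the summary's description of upward pricing pressure, here is a minimal sketch of the first-order UPP test in the Farrell-Shapiro formulation cited in note 33: diverted sales valued at the rival's margin, offset by an efficiency credit on the firm's own marginal cost. The numbers are hypothetical and deliberately differ from those in problem 9.

```python
def upp(diversion_ratio, rival_price, rival_cost, own_cost, efficiency_credit):
    """First-order upward pricing pressure for one merging firm:
    diverted sales earn the rival's margin, offset by a marginal-cost
    efficiency credit on the firm's own product."""
    return diversion_ratio * (rival_price - rival_cost) - efficiency_credit * own_cost

# Hypothetical symmetric merging firms: price 100, marginal cost 70,
# diversion ratio 0.2, and a claimed 5 percent marginal-cost saving.
value = upp(diversion_ratio=0.2, rival_price=100, rival_cost=70,
            own_cost=70, efficiency_credit=0.05)
print(value, "-> upward pressure on price" if value > 0 else "-> no upward pressure")
```

A positive value signals that the merged firm has an incentive to raise the product's price; a larger efficiency credit or a smaller diversion ratio pushes the test toward no upward pressure.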

Notes 1. Jesse W. Markham, “Survey of the Evidence and Findings on Mergers,” in National Bureau of Economic Research, Business Concentration and Price Policy (Princeton, NJ: Princeton University Press, 1955), p. 180. 2. George Bittlingmayer, “Did Antitrust Policy Cause the Great Merger Wave?” Journal of Law and Economics 28 (April 1985): 77–118. 3. A related effect has been documented in the European Union. Based on European Commission decisions during 1990– 2012, a cartel conviction was typically followed with increased merger activity in that market. See Steve Davies, Peter L. Ormosi, and Martin Graffenberger, “Mergers after Cartels: How Markets React to Cartel Breakdown,” Journal of Law and Economics 58 (August 2015): 561–583. 4. U.S. Federal Trade Commission, The Merger Movement: A Summary Report (Washington, DC: Federal Trade Commission, 1948), p. 68. 5. For a study summarizing analyses of some of these mergers, see Orley Ashenfelter, Daniel Hosken, and Matthew Weinberg, “Did Robert Bork Understate the Competitive Impact of Mergers? Evidence from Consummated Mergers,” Journal of Law and Economics 57 (August 2014): S67–S100. 6. E. Han Kim and Vijay Singal, “Mergers and Market Power: Evidence from the Airline Industry,” American Economic Review 83 (June 1993): 549–569. 7. Experimental studies involve creating a market setting in the laboratory and incentivizing subjects through monetary rewards to act like firms. For results on tacit collusion, see Steffen Huck, Hans-Theo Normann, and Jörg Oechssler, “Two Are Few and Four Are Many: Number Effects in Experimental Oligopolies,” Journal of Economic Behavior and Organization 53 (April 2004): 435–446; and Christoph Engel, “How Much Collusion? A Meta-Analysis of Oligopoly Experiments,” Journal of Competition Law and Economics 3 (2007): 491–549. 8. Nathan H. Miller and Matthew C. Weinberg, “Mergers Facilitate Tacit Collusion: Empirical Evidence from the U.S. Brewing Industry,” LeBow College of Business, Drexel University, Philadelphia, July 25, 2016. 9. Ibid, p. 2. 10. For a coverage of various types of efficiencies along with some estimates, see Lars-Hendrik Röller, Johan Stennek, and

Frank Verboven, “Efficiency Gains from Mergers,” WZB Discussion Paper FS IV 00-09, Wissenschaftszentrum Berlin für Sozialforschung, Berlin, August 2000. 11. F. M. Scherer and D. Ross, Industrial Market Structure and Economic Performance, 3rd ed. (Chicago: Rand-McNally, 1990). 12. Paul M. Barrett, “Is Funeral Home Chain SCI’s Growth Coming at the Expense of Mourners?” Bloomberg News, October 24, 2013. www.bloomberg.com/news/articles/2013-10-24/is-funeral-home-chain-scis-growth-coming-at-theexpense-of-mourners (accessed on August 20, 2016). 13. The ensuing discussion is based on Jonathan B. Baker, “Efficiencies and High Concentration: Heinz Proposes to Acquire Beech-Nut (2001),” in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution: Economics, Competition, and Policy, 5th ed. (New York: Oxford University Press, 2009), pp. 157–177. 14. This argument for a cost-minimizing solution to have equalization of marginal costs presumes that both quantities are positive, because if one unit’s quantity is zero and its marginal cost is higher, it is not possible to further shift supply away from it. In that case, the cost-minimizing solution is to shut down the more costly plant. 15. This result is proven in Joseph Farrell and Carl Shapiro, “Horizontal Mergers: An Equilibrium Analysis,” American Economic Review 80 (March 1990): 107–126. 16. Rebecca Henderson and Iain Cockburn, “Scale, Scope, and Spillovers: The Determinants of Research Productivity in Drug Discovery,” RAND Journal of Economics 27 (Spring 1996): 32–59. 17. The ensuing discussion is based on Yonghong An and Wei Zhao, “Dynamic Merger Efficiencies of the 1997 Boeing– McDonnell Douglas Merger,” Texas A&M University, College Station, January 2016. 18. The ensuing analysis is based on Oliver E. Williamson, “Economies as an Antitrust Defense: The Welfare Tradeoffs,” American Economic Review 58 (March 1968): 18–36. 19. The calculations use

A1 = (1/2)(∆p)(∆q) and A2 = (∆AC)q1. Substituting for ∆q from the definition of price elasticity, η, we get A1 = (1/2)η q0(∆p)^2/p0. Equating A1 and A2 yields (∆AC)q1 = (1/2)η q0(∆p)^2/p0. Because AC0 = p0, divide the left side by (AC0)q1 and the right side by p0q1. The result is ∆AC/AC0 = (1/2)η(∆p/p0)^2(q0/q1). Finally, assuming a constant elasticity demand curve, we have q0/q1 = (1 + ∆p/p0)^η. So for η = 1, ∆p/p0 = 0.2, we get q0/q1 = 1.2, and therefore ∆AC/AC0 = 0.024.
20. For an excellent discussion of this issue, see Michael D. Whinston, Lectures on Antitrust Economics (Cambridge, MA: MIT Press, 2006), pp. 60–62.
21. See Franklin M. Fisher, "Pan American to United: The Pacific Division Transfer Case," RAND Journal of Economics 18 (Winter 1987): 492–508.

22. A recent survey of studies of the price effects of many consummated mergers provides evidence that remedies may not be as effective as is believed by the DOJ and FTC; see John E. Kwoka Jr., “Does Merger Control Work: A Retrospective on U.S. Enforcement Actions and Merger Outcomes,” Antitrust Law Journal 78 (2013): 619–650. Prices increased on average for all mergers (covered in the study), and no evidence indicated that the price increases were lower for mergers with remedies. 23. Brown Shoe Company v. United States, 370 U.S. 294 (1962). 24. The horizontal dimensions of the shoe manufacturing market were not at issue before the Supreme Court. The district court found that the merger of Brown’s and Kinney’s manufacturing facilities was economically too insignificant to be illegal, and the government did not appeal the lower court’s decision. 25. Robert H. Bork, The Antitrust Paradox (New York: Basic Books, 1978), p. 65. 26. United States v. Philadelphia National Bank, 374 U.S. 321, 363 (1963). 27. United States v. Von’s Grocery Co. et al., 384 U.S. 270 (1966). 28. United States v. General Dynamics Corp. et al., 415 U.S. 486 (1974). 29. United States v. Baker Hughes, Inc. 908 F. 2d 981, 984 (1990). 30. The 2010 DOJ-FTC Horizontal Merger Guidelines can be found at www.justice.gov/atr/file/810276/download. 31. Federal Trade Commission v. Whole Foods Mkt., Inc., 502 F. Supp. 2d 1 (D.D.C. 2007). 32. The idea of examining toothpastes comes from Sonia Jaffee and E. Glen Weyl, “Price Theory and Merger Guidelines,” CPI Antitrust Chronicle, March 2011 (1). 33. The key reference for UPP is Joseph Farrell and Carl Shapiro, “Antitrust Evaluation of Horizontal Mergers: An Economic Alternative to Market Definition,” B.E. Journal of Theoretical Economics 10 (2010): issue 1, article 9. As chief economists at the FTC and DOJ, respectively, they were responsible for UPP appearing in the 2010 Horizontal Merger Guidelines. Without implicating them in the exposition, I thank Joe and Carl for their expository suggestions. 34. The analysis presumes that the price change is small though, for purposes of easy visualization, the price increase shown in figure 6.9 is not small. 35. UPP measures the direction in which the merged firm wants to move price, holding the prices of non-merged firms constant. Of course, they will adjust their prices. However, under fairly general assumptions, if UPP > ( MCk by assumption, the actual isocost line facing the shoe industry premerger has a steeper slope, such as PP. Hence the shoe industry picks input mix E, which minimizes its expenditures on inputs. Because the industry’s payments to Xie’s include a monopoly profit, the expenditures on inputs that it minimizes are not equivalent to true resource costs. The true resource costs at E are higher than at F by the vertical distance MN (measured in units of L). In other words, setting a monopoly price on K causes inefficient production in shoe manufacturing; the costs of production are too high, because the input mix is incorrect (from society’s viewpoint). If Xie’s Shoe Machinery monopolized forward into shoe manufacturing, the production of shoes would

shift from E to F, because the integrated monopoly would minimize costs using the true opportunity costs of K and L. The cost saving MN would then constitute a profit incentive for the vertical acquisition. Thus far it would appear that such a merger should be permitted; the costs of production would be lowered. However, there is a further step in our analysis: What price will the integrated monopolist charge for shoes, given the lower real cost of production? Unfortunately, analysis shows that the price can either rise or fall;8 and if price rises, we are back facing a tradeoff where the benefits are cost savings and the costs are deadweight losses due to monopoly pricing.

In summary, we have shown that in the case of variable-proportions production, vertical monopolization will be profitable. The welfare effects can be either positive or negative, depending on the particular parameters (elasticity of demand, elasticity of substitution in production, and so on). However, we should not lose sight of the fact that the real problem is the horizontal monopoly that was assumed to exist in shoe machinery. Only if antitrust authorities could do nothing about this horizontal problem does the analysis here become relevant.

Commitment and the restoration of market power
Returning to the case of fixed-proportions production, let us describe a situation in which the upstream monopolist is unable to extract full monopoly profit from downstream firms.9 In that case, a vertical merger may raise the merging firms' profit but reduce welfare. Suppose the upstream firm produces jet engines and there are two downstream firms that produce commercial jets. To simplify matters, assume that there are five airlines, and each airline's demand for jets is at most one unit. Their valuations are shown in table 7.1. For example, airline A values a jet at 140, airline B at 100, and so forth.

Table 7.1 Consumer Valuations and Profit of an Integrated Monopolist

Airline    Valuation    Total Revenue    Total Cost    Total Profit
A              140            140             30            110
B              100            200             60            140
C               70            210             90            120
D               55            220            120            100
E               40            200            150             50

The cost of producing a jet is 10 for the engine and 20 for all other inputs. If an integrated monopolist produced one jet, its profit is 110, as it receives a price of 140 from airline A and its total cost is 30 (see table 7.1). If it produced two jets, it can sell them for a price of 100 (selling to A and B), so total revenue is 200, and given that its total cost is 60, it earns profit of 140. As one can see from table 7.1, the production of two jets is the integrated monopolist’s profit-maximizing solution, because it yields the maximum profit of 140. Now consider the situation faced by an upstream manufacturer of jet engines selling to the two downstream manufacturers of jets, which we will denote X and Y. One proposal is that it approaches each jet manufacturer and offers to sell it one engine at a price of 80. Note that if firm X expects firm Y to buy one engine and thereby produce one jet, firm X is indeed willing to pay 80 for an engine. Firm X expects there to be two jets on the market, so they will sell for 100 each. The profit of firm X is then 100 minus the cost of the engine, which is 80, minus the cost of other inputs, which is 20; its profit is zero, so it is willing

to pay 80 (but no more). The engine manufacturer would then make a profit of 140, as it sells two engines at a per unit profit of 70 (the price of 80 less the cost of 10).

But let us look more closely at exactly how these deals are consummated. Suppose the engine manufacturer first makes a deal with firm X to buy one engine at a price of 80. It then approaches firm Y, and the engine manufacturer shows firm Y the contract it signed with firm X to deliver one engine. While the upstream firm could earn profit of 70 from selling one engine to firm Y at a price of 80, consider it offering two engines each at a price of 50. This offer will yield additional profit of 80, which would give it higher profit of 150, as it earns 70 from firm X and 80 from firm Y. Furthermore, firm Y is willing to pay 50 and buy two engines, as it knows it could sell each of them for a price of 70 (as jets are sold to airlines A, B, and C) and its cost per unit is 70 (50 for the engine and 20 for other inputs). So, where is the problem? Note that three jets will be on the market and each will sell for 70. But firm X paid 80 for an engine, which means it is incurring a loss of 30. Firm X is then no longer willing to pay 80 for an engine. In other words, if firm X anticipates that the engine manufacturer will sign such a deal with firm Y, firm X will not sign the original deal to buy one engine at a price of 80. Hence, this cannot be a market equilibrium, as firm X is not maximizing its profit.

A market equilibrium is a pair of deals between the upstream firm and the two downstream firms such that all parties correctly anticipate what happens and each is acting to maximize its profit. Let us show that the equilibrium has the engine manufacturer sell a total of four engines—two to each jet producer—at a price of 35. Suppose it offers such a deal to firm X and firm X accepts. The engine manufacturer then goes to firm Y. Firm Y will be willing to pay at most 50 for one engine as, in that case, there will be three jets produced (two by firm X and one by firm Y), and each will fetch a price of 70. This yields profit of 40 for the engine manufacturer from firm Y. If firm Y buys two engines and produces two jets, each jet will have a price of 55, so that the most Y is willing to pay for an engine is 35. This yields a profit of 50 for the engine manufacturer from selling to firm Y (as it earns per unit profit of 25 on each of two engines). One can show that it earns profit of only 30 from selling three engines to firm Y. Thus, given that it sells two engines to firm X, the engine manufacturer maximizes the profit it generates from Y by selling two engines at a price of 35 each. Given the symmetry of the situation, if it is optimal for firm Y to buy two engines at a price of 35 given that firm X buys two engines at a price of 35, then firm X's purchase is optimal, given that it correctly anticipates what Y will do. This outcome describes an equilibrium. Unfortunately, the upstream monopolist ends up with profit of only 100, as it sells four engines at a price of 35. This amount is lower than the profit received by an integrated monopolist.

What is the source of the problem, and what is its solution? The problem is the inability of the upstream monopolist to credibly commit to the deals it makes with the downstream firms. It would like to commit to selling only one engine to Y so that X is willing to pay 80 for one engine. But X knows that, after locking X into buying one engine, the upstream firm can earn more profit by selling two engines to firm Y. The usual reason for restricting output—that it lowers the price received on other units—is partially stifled, because X is locked into paying a price of 80, and this is true whether Y buys one or two units. It is the lack of commitment that prevents the upstream monopolist from extracting full monopoly profit.
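The equilibrium reasoning above can be checked numerically. The sketch below is illustrative only (the function and variable names are not from the text): it computes the jet price as a function of total output, a jet maker's willingness to pay per engine, and the engine manufacturer's profit from selling to firm Y, taking firm X's purchases as given.

```python
# Valuations from table 7.1 (airlines A-E); engine cost 10, other inputs 20.
valuations = [140, 100, 70, 55, 40]
ENGINE_COST, OTHER_COST = 10, 20

def jet_price(total_jets):
    """With unit demands, the price that sells exactly total_jets is the
    marginal buyer's valuation."""
    return valuations[total_jets - 1]

def willingness_to_pay(own_jets, rival_jets):
    """Most a jet maker will pay per engine, given expected total output."""
    return jet_price(own_jets + rival_jets) - OTHER_COST

def upstream_profit_from_Y(engines_to_Y, engines_to_X):
    wtp = willingness_to_pay(engines_to_Y, engines_to_X)
    return engines_to_Y * (wtp - ENGINE_COST)

# Integrated monopolist: two jets at a price of 100 maximize profit (140).
print(max((q * jet_price(q) - q * (ENGINE_COST + OTHER_COST), q) for q in range(1, 6)))

# Given X has contracted for one engine at 80, the manufacturer prefers to
# sell two engines to Y (profit 80 > 70), pushing the jet price to 70 and
# leaving X with a loss -- so the original deal unravels.
print([upstream_profit_from_Y(n, 1) for n in (1, 2, 3)])

# In equilibrium X and Y each buy two engines at 35: given X buys two,
# selling two to Y (profit 50) beats one (40) or three (30).
print([upstream_profit_from_Y(n, 2) for n in (1, 2, 3)])
```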
A solution is to vertically integrate so that a firm produces both engines and jets. In that case, it will want to produce two jets, each at a cost of 30, and sell them for 100 each. It does not want to sell any engines to the remaining jet producer, for that would only serve to lower total profit. For example, if it sold one engine, then the other jet producer is willing to pay 50, because the price it will receive for a jet is 70 and its other costs are 20. Now, the integrated firm earns a total profit of 120: 40 from selling an engine (the price of 50 less the engine cost of 10) and 80 from

producing and selling two jets at a per unit profit of 40. This profit is lower than not selling to the other jet manufacturer. In this situation, vertical integration does not really extend monopoly power but rather restores it. Monopoly power is lost because of the lack of commitment on the part of the upstream monopolist. A vertical merger achieves that commitment. Regardless, the merger is anticompetitive, because it results in a higher final product price. Another device that restores commitment is an exclusive dealing contract whereby the upstream firm agrees to sell to only one downstream firm, say, firm X. Then firm X would be willing to buy two engines at a price of 80, as it knows that the engine manufacturer is prohibited from selling any engines to firm Y. We will explore exclusive dealing later in this chapter.

Raising rivals' costs
The preceding analysis identified some anticompetitive effects of vertical mergers. Though shown in the context of an upstream monopoly, they are relevant whenever markets are imperfectly competitive. In this section, we consider an anticompetitive effect that does not arise in the case of a monopoly. When both upstream and downstream markets are oligopolistic, vertical integration can be profitable and raise the final product price by causing downstream competitors to have higher cost. This is an example of an anticompetitive effect known as raising rivals' costs.10 In the context of vertical mergers (and also vertical restraints), two types of raising rivals' costs have been identified. Input foreclosure occurs when the upstream division of an integrated firm excludes downstream firms from purchasing its input, which results in those firms having higher costs because of having to use inferior inputs or facing higher input prices. Customer foreclosure occurs when upstream suppliers are denied access to selling to the downstream division of an integrated firm. By preventing them from having an adequate customer base, input suppliers may experience higher cost or fail to achieve enough variable profit to cover their fixed costs. The latter can result in exit or, in the case of a prospective firm, deter entry. We explore input foreclosure here and then examine customer foreclosure when we investigate exclusive dealing.

Consider an industry with two upstream firms, denoted U1 and U2, and two downstream firms, denoted D1 and D2. As depicted in figure 7.4(a), both upstream firms can supply both downstream firms. The upstream firms offer a homogeneous commodity and compete by setting prices. They have a common cost of production of 10; let the price of Ui be denoted wi, where i = 1 or 2. The downstream firms offer differentiated products and require one unit of the upstream commodity to produce one unit of the downstream good. A downstream firm's unit cost is the sum of its input price, which is wi if it buys from firm Ui, plus the cost of transforming that input into the final product, which is specified to be 15. Let us assume the linear demand curves for the two downstream firms specified in equations 4.11 and 4.12 of chapter 4.

Figure 7.4 (a) Pre-Vertical Integration; (b) Post-Vertical Integration

The downstream firms' profit functions are then
πD1 = (p1 − w1 − 15)D1(p1, p2) and πD2 = (p2 − w2 − 15)D2(p1, p2),

where wi is the price paid for the input by Di. For example, if D1 buys from U2, then πD1 = (p1 − w2 − 15)D1(p1, p2).

If downstream firms compete in terms of price, the Nash equilibrium prices can be derived as functions of the two input prices, w1 and w2 (equations 7.5 and 7.6).11

Recall the concept of Nash equilibrium from chapter 4: a pair of prices is a Nash equilibrium if, given firm D2's price, firm D1's price maximizes firm D1's profit and, given firm D1's price, firm D2's price maximizes firm D2's profit. As shown in equation 7.5, D1's equilibrium price is increasing in the input price it pays, because its profit-maximizing price is higher when its marginal cost is higher. But the reason that it is increasing in the input price paid by firm D2 is that D2's price is increasing in its marginal cost, and D1's optimal price is higher when its rival prices higher. This effect is due to products being substitutes, so that a firm's demand is stronger when the competitor sets a higher price and, under normal assumptions, stronger demand implies a higher profit-maximizing price.

Let us now examine upstream competition in the absence of a vertical merger. Since the upstream firms offer homogeneous products, they will compete vigorously for the demand of the downstream firms. In fact, competition drives price all the way down to their marginal cost of 10. As upstream goods are identical, a downstream firm will buy all of the input from the firm with the lowest price. If both upstream firms were to price above cost, say at 15, and equally share demand, one of them could undercut the price a little bit, say to 14.99, and experience a doubling of its demand. Such a move is clearly profitable. Equilibrium occurs when price is 10, because then no firm wants to lower price (as doing so results in losses), and there is no reason to raise price (as profit remains zero since demand falls to zero). The equilibrium has upstream prices of 10, w1 = 10 = w2, and downstream prices of 83.34, using equations 7.5 and 7.6.

Now consider the prospect of firms U1 and D1 merging. As shown in figure 7.4(b), firms U1 and D1 form a single firm, and the lone supplier of D2 is U2. The integrated firm will produce the input internally at cost 10, so its cost is the same as without the merger. At first glance, the merger has not benefited these firms, but let us examine the situation faced by the unintegrated firms—upstream firm U2 and downstream firm D2. Here we make a key assumption that will be discussed later: The integrated firm does not sell its input to D2. What price will firm U2 set for its input? Prior to the merger, it was in vigorous competition with the other input supplier, but it is now in a monopoly position. It will then charge a price above 10, knowing that, while firm D2 will demand less than if the input price were 10, it will still demand a positive amount. Indeed, one can show that the profit-maximizing price is w2 = 72.45.12 D2 ends up with a (much!) higher marginal cost.

Now that D2's cost is higher, we know by equations 7.5 and 7.6 that both firms' prices are higher.13 The unintegrated downstream firm prices higher because it faces a higher price for the input. The integrated firm prices higher downstream because its rival, firm D2, prices higher. Furthermore, vertical integration has raised the profit of the two merger partners because it has induced a rival to price higher. This is the raising rivals' costs effect and is an anticompetitive implication of the vertical merger. Finally, since firms' prices are higher and the total cost of production remains the same, social welfare is lower due to the merger.
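To see the mechanics, here is a minimal numerical sketch of the input foreclosure story. The linear demand system below, qi = 100 − pi + 0.5pj, is an assumption made for illustration; it is not necessarily the specification of equations 4.11 and 4.12, so the computed figures approximate rather than reproduce the 83.34 and 72.45 reported in the text.

```python
# Assumed (illustrative) differentiated-products demand: q_i = 100 - p_i + 0.5*p_j.
# Downstream marginal cost = input price + 15; upstream production cost = 10.
A, B, D = 100.0, 1.0, 0.5
TRANSFORM, UPSTREAM_COST = 15.0, 10.0

def downstream_equilibrium(c1, c2):
    """Nash equilibrium prices when each firm's best response is
    p_i = (A + B*c_i + D*p_j) / (2B), from its first-order condition."""
    p1 = p2 = 50.0
    for _ in range(200):  # best-response iteration converges for these parameters
        p1, p2 = (A + B*c1 + D*p2) / (2*B), (A + B*c2 + D*p1) / (2*B)
    return p1, p2

# Premerger: upstream Bertrand competition drives both input prices to 10.
pre = downstream_equilibrium(10 + TRANSFORM, 10 + TRANSFORM)
print("premerger downstream prices:", [round(p, 2) for p in pre])

# Postmerger with foreclosure: U2 is D2's only supplier and picks w2 to
# maximize (w2 - 10) * q2 at the induced downstream equilibrium.
def u2_profit(w2):
    p1, p2 = downstream_equilibrium(10 + TRANSFORM, w2 + TRANSFORM)
    q2 = A - B*p2 + D*p1
    return (w2 - UPSTREAM_COST) * q2

w2_star = max((w / 100 for w in range(1000, 10000)), key=u2_profit)
post = downstream_equilibrium(10 + TRANSFORM, w2_star + TRANSFORM)
print("profit-maximizing w2:", round(w2_star, 2))
print("postmerger downstream prices:", [round(p, 2) for p in post])
```

Under these assumptions the foreclosed rival's input price jumps far above cost and both downstream prices rise, which is the raising rivals' costs pattern described in the text.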

Before jumping to any general conclusions, consider several important caveats. First, our simple model did not have double marginalization because, prior to the merger, the input is priced at marginal cost. This property is special to the assumption that upstream firms have homogeneous products and thereby price at cost. If products are not perfectly identical, upstream firms will price above cost. In that case, double marginalization is present in the premerger scenario, and furthermore, vertical integration will reduce it. That implication is a welfare-enhancing feature of the merger. More generally, one can expect two counteracting forces with a vertical merger: the raising rivals' costs effect decreases welfare, while reduced double marginalization increases welfare. How they net out depends on the particular situation and thus must be evaluated on a case-by-case basis.

Second, contrary to the assumption made, the integrated firm may find it optimal to supply the input to the unintegrated downstream firm. In our example, if firm U2 prices at 72.45, then the integrated firm would prefer to undercut at a price of 72.44. Doing so will only marginally lower the price of firm D2, and the integrated firm will make positive profit on the input sales. However, there are other models of competition for which a raising rivals' costs effect is present and the integrated firm does not want to sell to the unintegrated downstream firms. In those models, the input price and the final product prices can both be higher. So while our model is subject to this criticism, other (more sophisticated) models are not.14 Continuing with this scenario, let us show that the downstream firm will have a strategic incentive to buy from the integrated firm, because doing so will make the integrated firm less aggressive in the downstream market.15 Note that if the integrated firm lowers its downstream price, it will sell more units in the downstream market, and some of that rise in sales will come from the unintegrated downstream firm. But if the unintegrated downstream firm is selling less, it will then be buying less input from the integrated firm. The prospect of that lost profit from lower input sales serves to soften the integrated firm's competitiveness in the downstream market. That effect is beneficial to the unintegrated downstream firm and is present only when it buys input from the integrated firm. Hence, the unintegrated downstream firm is willing to buy from the integrated firm even if its price is higher than that of other (unintegrated) upstream firms (as long as it is not too much higher).

The third caveat is that the unintegrated firms might respond by integrating themselves. Indeed, such a response is optimal in our model. Once again, in a more complex model, one can find circumstances in which it is not optimal to respond with vertical integration, so that our conclusions persist.16 However, the assumptions required for that to be true are rather strong. Finally, it is worth mentioning an analysis showing that a dominant downstream firm that competes with a collection of competitive firms can profitably backward integrate into a competitive upstream industry with potentially negative welfare effects.17 A motivating example is the steel industry at the start of the twentieth century. U.S.
Steel was a dominant firm with a market share exceeding 60 percent of the steel market. It acquired an essential input, iron ore, to the point that it owned 40 percent of reserves.18 Though there are efficiency benefits from the integration—as the dominant firm is assumed to have lower cost, so that shifting some of the downstream supply to it reduces total industry cost—it also serves to raise the input price faced by its downstream competitors because of the raising rivals' costs effect. Though the final product price is always higher with integration, and thus consumers are worse off, the welfare effect can be positive or negative.

In summary, we have suggested that harmful effects from vertical integration are unlikely to occur unless there is preexisting market power at one or both levels. While this seems to indicate that the real problem is

horizontal market power that should be attacked directly, perhaps such an approach is not always possible. Hence we shall pursue the analysis from a different angle. Suppose we recognize that market power at one level can be extended to another level. Does the creation of a "second" monopoly have any harmful consequences for economic efficiency? If it does not, then the rationale for the merger may lie elsewhere—for example, socially beneficial transaction cost savings. In this case, economic efficiency might be better served by permitting such mergers.

Antitrust Law and Policy
Historical development
With regard to vertical mergers, the focus of antitrust enforcement until the late 1970s was preventing perceived market foreclosure in its incipiency. Exemplifying the aggressive stance is the Brown Shoe case in 1962, which had vertical dimensions in addition to the horizontal dimensions discussed in chapter 6. The Court held that the relevant market was the entire United States and noted with concern a trend toward increasing vertical integration:

Since the diminution of the vigor of competition, which may stem from a vertical arrangement, results primarily from a foreclosure of a share of the market otherwise open to competitors, an important consideration in determining whether the effect of a vertical arrangement [is illegal] is the size of the share of the market foreclosed.19

In Brown Shoe, the size of the market foreclosed was around 1 percent! That is, Brown (primarily a shoe manufacturer) could be expected to force Kinney (primarily a shoe retailer) to take only a small volume of Brown shoes, to the exclusion of other shoe manufacturers. However, as in the horizontal part of the case, the Court gave great weight to halting a perceived trend of rising foreclosure.

Beginning in the late 1970s, some lower courts became less receptive to the authorities' challenges of vertical mergers. This change in judicial review led to a more lenient policy, which peaked in the 1980s. Under the administrations of Ronald Reagan and George H. W. Bush, the DOJ blocked only one vertical merger, which was the proposed merger of the cable programming services Showtime and The Movie Channel. This policy change was also reflected in the DOJ merger guidelines. While the 1968 guidelines stated that vertical mergers between a supplier with a 10 percent share and a buyer with a 6 percent share might be challenged, the 1982 guidelines had no similar statements about foreclosure percentages. In recent decades, the DOJ and FTC have scrutinized vertical mergers more closely, in a manner broadly consistent with the more modern economic theories of harm reviewed earlier. This increased (though still restrained) level of activity is reflected in figure 7.5, which reports the number of vertical mergers challenged by the DOJ and FTC from 1994 to 2015. Of the forty-eight mergers that were challenged, the prospect of foreclosure was relevant in thirty-six of them, collusive information exchange was raised as a concern in eleven, and eight of them might have eliminated a potential competitor.

Figure 7.5 Enforcement Actions in Vertical Mergers by Presidential Administration, 1994–2015 Source: Steven C. Salop and Daniel P. Culley, “Revising the U.S. Vertical Merger Guidelines: Policy Issues and an Interim Guide for Practitioners,” Journal of Antitrust Enforcement 4 (April 2016): 1–41.

The issue regarding collusive information exchange is as follows. If an upstream firm acquires a downstream firm that deals with multiple upstream firms, that downstream firm could be a conduit of information among upstream competitors for the purpose of collusion. The FTC expressed this concern with the 1998 proposed merger of Merck and Medco because Medco, as a manager of pharmacy benefits, dealt with many pharmaceutical companies. Now that it was owned by an upstream firm, Medco could have an incentive to use its contacts with rival pharmaceutical companies to coordinate their behavior.

Let us next turn to explaining why a vertical merger might raise anticompetitive concerns because of the elimination of a potential competitor. A firm that is upstream to a market could be a natural potential entrant because of its knowledge of that market and its production of an essential input. Similarly, a firm that is downstream from a market could be a viable source of entry into the upstream market. It follows that a vertical merger might then eliminate a potential entrant and that reduced potential competition could lead to higher prices. This issue arose in the DOJ's evaluation of the 2010 proposed merger of Live Nation and Ticketmaster (though the primary concern was unilateral horizontal effects).20 Ticketmaster had long been the dominant firm in ticketing of concerts and other live performances. Live Nation was the largest concert promoter with more than one-third of all major concerts, and it had recently vertically integrated by self-ticketing its own concerts. The possible vertical harm from the merger came from the elimination of Ticketmaster as a possible entrant into Live Nation's primary market. Given its position in ticketing, Ticketmaster could have entered into promotions and venues and competed with Live Nation, just as Live Nation had moved into ticketing. The vertical merger would then eliminate Ticketmaster as a source of entry into the concert promotion market, which could have adverse effects on the competitiveness of that market.

Cases
As we consider some cases, keep in mind that consumer welfare is the standard by which a merger in the United States should be evaluated. Though anticompetitive concerns will initially be posed in terms of harm

to rival firms, that harm is relevant only to the extent that it manifests itself in higher prices, less variety, or some other source of harm to consumers. Competition policy is intended to protect competition—for the benefit of consumers—not to protect competitors.

Time Warner and Turner Broadcasting
Let us examine the FTC's response to a proposed merger involving Time Warner, Turner Broadcasting System, and TCI. At the time, TCI was the largest cable service provider, with about 27 percent of all U.S. cable television households, and Time Warner came in second at around 17 percent. As shown in table 7.2, Time Warner also owned cable networks such as HBO and Cinemax. Turner was solely focused on programming and offered such channels as CNN and TNT. The vertical dimension to this case is that cable programming (the upstream industry) is an input into cable service (the downstream industry).

Table 7.2 Cable Program Service Ownership Interests, 1996

Time Warner: Cinemax (100), Comedy Central (50), Court TV (33.3), E! (50), HBO (100)

TCI: BET (17.5), Court TV (33.3), Discovery (49), Encore (90), E! (10), Faith & Values (48), The Family Channel (20), Fit TV (20), HSN (80.4), International Channel (50), Intro TV (100), NewSport (33), Prime Network (33), QVC (43), Q2 (43), Request TV (47), Starz! (90), TLC (48), Viewer's Choice (10)

Turner Broadcasting System: Cartoon (100), CNN (100), Headline News (100), TBS (100), TCM (100), TNT (100)

Source: Stanley M. Besen, E. Jane Murdoch, Daniel P. O'Brien, Steven C. Salop, and John Woodbury, "Vertical and Horizontal Ownership in Cable TV: Time Warner-Turner (1996)," in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution, 3rd ed. (New York: Oxford University Press, 1999).
Note: Percentage ownership share is given in parentheses for each entry.

In September 1995, Time Warner agreed to purchase Turner and, due to its shareholding in Turner, TCI would come to own 7.5 percent of Time Warner. The proposed merger would result in the new Time Warner controlling more than 40 percent of programming assets, while Time Warner and TCI would serve about 44 percent of cable subscribers. The FTC was concerned with both input and customer foreclosure. The complaint alleged that after the acquisition, Time Warner and TCI would have the power to (1) foreclose unaffiliated programming from their cable systems to protect their programming assets, and (2) disadvantage competing cable distribution systems by denying programming or by providing programming only at discriminatory (i.e., disadvantageous) prices.21 In response to these anticompetitive concerns, the FTC imposed several conditions before approving the merger. First, Time Warner could not package the most desirable channels, like HBO, with lesser channels,

for the reason that doing so would limit cable capacity for other programming companies. Second, certain reporting requirements were imposed so as to make it easier for the FTC to learn about possible exclusionary activities. Third, the FTC eliminated a long-term agreement requiring TCI to carry Turner programming at preferential prices. The fear was that it would foreclose non-Time Warner programming on TCI cable systems.

A later analysis showed that the FTC was justified in having been concerned about foreclosure.22 The study found that cable systems that own premium programming provided, on average, fewer programs—one fewer premium channel and one to two fewer basic channels. By way of example, consider the shopping networks, QVC (which was owned by TCI and another cable operator, Comcast) and Home Shopping Network (HSN). Only 6 percent of TCI and Comcast systems carried HSN, compared to 28 percent of all cable systems. And while both networks were carried by 9 percent of all cable systems, this was true of only 5 percent when it came to TCI and Comcast. Estimates further revealed that, compared to other cable systems, TCI and Comcast were 25 percent less likely to carry HSN and 4 percent less likely to carry both QVC and HSN. Similar results were found for premium movie services. Though this constitutes evidence of foreclosure, we also know that efficiency benefits can come from vertical integration, such as reduced double marginalization. A second study confirmed evidence of foreclosure and found, consistent with some efficiency gains, a decline in prices for consumers.23 However, there is no evidence that subscribership rose and, therefore, no evidence that consumers saw the postmerger package of channels and prices as more attractive. On net, it is not clear that consumers benefited from the merger.

General Electric and Honeywell
In chapter 6 on horizontal mergers, we discussed the challenges of consummating a merger among global companies because of the need to gain approval from multiple competition authorities. That challenge arose when General Electric (GE) and Honeywell proposed to merge in 2001. While approved by the DOJ, the European Commission (EC) blocked it.24 The merger would impact markets in jet engines, aircraft systems, and engine controls. Possible anticompetitive concerns arose from both a horizontal and a vertical perspective, but our discussion will focus on the vertical dimension associated with jet engines. GE was one of the leading producers of jet engines, and Honeywell was the leading upstream producer of engine starters (which are used in the manufacturing of a jet engine). The EC argued that the merger created a risk of input foreclosure, because "the merged entity would have an incentive to delay or disrupt the supply of Honeywell engine starters to competing engine manufacturers [such as Rolls Royce]."25 While foreclosure is surely an available option if the firms were to merge, the relevant question is whether it is the most profitable option or whether the merged firm would do better to continue to supply engine starters to other engine manufacturers. The attractiveness of foreclosure would depend on whether competing engine manufacturers would be unable to find an alternative supply of engine starters at a comparable price. The EC viewed the other engine starter producers as already tied to some engine manufacturers and thought that Rolls Royce would be left with no source of supply in the event of foreclosure.
Even if foreclosure was pursued, it is not immediately clear that the merger would be harmful, because of the benefit of reduced double marginalization. A combined GE-Honeywell might sell its jet engines at a lower price, because it would no longer be charged a price exceeding cost on engine starters. However, the EC failed to assess the ultimate impact on the downstream price.

In response to the EC blocking the merger, GE and Honeywell appealed the decision to the European Union’s Court of First Instance. The Court upheld the decision on horizontal grounds, though it found the analysis of vertical effects to be seriously flawed. Comcast and NBC universal A merger similar to Time Warner and Turner arose when Comcast proposed purchasing NBC Universal (NBCU) in 2009.26 Comcast was the largest cable provider with 23.8 million subscribers in 39 states. NBCU owned a collection of popular cable networks, including USA, SyFy, Bravo, and MSNBC, as well as other content such as NBC Television Network and Telemundo Television Network. The merger was evaluated by the DOJ and also by the Federal Communications Commission (FCC), because their permission was required to transfer various licenses. One of the central anticompetitive concerns was that Comcast might find it profitable to withhold some of NBCU’s programming from competing systems. There were two sources of competition to Comcast. The first source was multichannel programming distributors (MVPDs) which included competing cable companies, direct broadcast satellite companies (such as DirectTV), and local wire-line telephone companies (such as AT&T). A second more nascent but growing source of competition was online video programming distributors (such as Hulu, Netflix, Amazon, and Apple). If Comcast withholds some of NBCU’s content from rivals, it foregoes the profit it would have earned from selling that content. It can benefit by causing some consumers to switch to Comcast, because the content offered by those rivals is now less attractive. To get at this effect, the FCC estimated the fraction of an MVPD’s customer base that would discontinue service with that MVPD if NBCU’s content were removed (known as the departure rate) and, of those customers, the fraction that would adopt Comcast as their new provider (known as the diversion rate). The higher is the departure rate and the higher is the diversion rate, the more attractive it will be for Comcast to withhold service, as it will then pick up more demand. Based on national averages, for every hundred customers lost by DirecTV, DISH, Verizon, or AT&T, Comcast would pick up about twenty-six of them. In some areas, such as Philadelphia, Comcast would capture between sixty-three (coming from AT&T) and seventy-one (coming from DirecTV). If the departure rate was high enough, there could then be a strong incentive to withhold content from rival distributors. Even if Comcast did not withhold service, it might sell the content of NBCU at a significantly higher price. To estimate that price increase, the FCC used a bargaining model to represent the negotiations that would occur between Comcast—as owner of NBCU’s content—and a rival MVPD, such as DISH. In bargaining, Comcast would want to take account of the opportunity cost associated with selling content, as it would then lose some customers to that rival MVPD. The predicted price increase on NBCU’s content was argued to be increasing in the departure rate and diversion rate. After assessing the harm and evaluating claimed efficiencies, both the FCC and DOJ concluded that the merger would, on net, make consumers worse off. Rather than prohibit the merger, the FCC proposed a conduct remedy intended to alleviate possible foreclosure by Comcast. It required that MVPDs have the right to go to arbitration if they felt the contractual terms offered by Comcast for programming were unfair. 
The procedure specified that each side would make an offer to a third-party arbitrator, who would then select the offer based on an assessment of market value.
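The FCC's departure-rate and diversion-rate logic can be made concrete with a small numerical sketch. The sketch below is only illustrative: the function name, subscriber count, margin, and license fee are hypothetical assumptions, and only the 26 percent diversion figure is taken from the national average cited above.

# A minimal sketch of the withholding calculus behind the FCC's departure-rate and
# diversion-rate analysis. All numbers are hypothetical except the 26 percent
# diversion rate quoted in the text; none are the FCC's actual estimates.

def withholding_payoff(rival_subs, departure_rate, diversion_rate,
                       comcast_margin, license_fee):
    """Monthly change in Comcast's profit from denying NBCU content to one rival MVPD.

    rival_subs     -- subscribers currently served by the rival distributor
    departure_rate -- share of those subscribers who drop the rival without the content
    diversion_rate -- share of departing subscribers who switch to Comcast
    comcast_margin -- Comcast's monthly margin per video subscriber
    license_fee    -- monthly per-subscriber fee the rival would have paid for the content
    """
    switchers = rival_subs * departure_rate * diversion_rate
    gain_downstream = switchers * comcast_margin   # profit on newly captured subscribers
    lost_fees = rival_subs * license_fee           # forgone programming revenue
    return gain_downstream - lost_fees

# Hypothetical example: 1 million rival subscribers, a 10 percent departure rate,
# the 26 percent national diversion rate, a $40 monthly margin, and a $1 license fee.
print(withholding_payoff(1_000_000, 0.10, 0.26, 40.0, 1.0))

Raising either the departure rate or the diversion rate increases the gain term, which is why the FCC's predicted price increase, and the temptation to withhold content outright, rises with both.

Vertical Restraints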

We turn now to a set of business practices that often accomplish some of the same objectives as vertical integration, but through contractual means rather than a merging of firms. The various practices that we shall examine throughout this chapter are exclusive dealing, tying, resale price maintenance, and territorial restraints. We begin with some concrete examples of each practice. Exclusive dealing is illustrated by an agreement between Exxon and an independent service station that the service station would buy all its gasoline and motor oil supplies from Exxon. Exclusive dealing can be viewed as a way of accomplishing vertical integration by contract. In its relations with game developers, Nintendo used exclusionary contracts in the 1980s, when it had the dominant video game platform. In producing a game for Nintendo, a game developer had to agree not to offer a version of that game for any other video game system for a period of two years. Tying refers to the practice of a supplier agreeing to sell its customer one product (the tying good) only if the customer agrees to purchase all of its requirements for another product (the tied good) from the supplier. A well-known example was IBM’s practice in the 1930s of leasing its tabulating machines only on the condition that the customer purchase all of its needs for tabulating cards from IBM. A more recent example in the computer industry is when Microsoft tied its Windows operating systems and its browser Internet Explorer, so that customers could only buy the former if they also bought the latter. That business decision led to a historic antitrust case, which will be examined in chapter 9. The remaining two practices are most common to manufacturer-retailer relations. Resale price maintenance (or RPM) means that the supplier constrains the dealer in terms of the price at which it can resell its product. Usually, RPM is either a minimum resale price or a maximum resale price. An example of minimum RPM would be if a video game developer (such as Electronic Arts) required retailers to sell its games for no less than $60. An example of a maximum resale price would be if the New York Times required its home delivery distributors to sell the newspaper for no more than $10 per week. Note that a vertically integrated firm would set its own resale price to final customers, so RPM is a partial substitute for vertical integration. A territorial restraint is an agreement between a supplier and a dealer that the former will not allow other dealers to locate within a specified area—thereby preserving an exclusive marketing territory for that dealer. Such agreements are widespread in the automobile industry. An example would be if Ford Motor Company agreed to allow only one Ford dealership in a city. Again, if Ford Motor were completely integrated into retailing its cars, it would choose where it would locate its retail outlets. Historically, these practices were generally judged under either Section 1 of the Sherman Act or Section 3 of the Clayton Act. Section 3 of the Clayton Act specifically mentions both exclusive dealing and tying, and holds them to be illegal “where the effect … may be to substantially lessen competition or tend to create a monopoly.” However, jurisprudence has changed in the past few decades: courts have rarely ruled against exclusive dealing and tying as a violation of Section 1, though they have done so with regard to Section 2. 
As Section 2 prohibits attempts at monopolization, it is used when a firm that already is dominant seeks to extend that dominant position. This move away from applying Section 1 coincides with a shift from the use of a per se rule to a rule of reason. With Supreme Court decisions in 1997 and 2007, resale price maintenance went from per se illegality to being judged under the rule of reason.

Exclusive Dealing

Exclusive dealing is a contract between a supplier and a dealer stating that the dealer will buy all of its supplies from that supplier. In effect, exclusive dealing is an alternative way of accomplishing vertical

integration; it is "contractual" integration rather than the more permanent ownership integration discussed earlier in this chapter. And just as vertical mergers worry the courts because of possible foreclosure of rivals, exclusive dealing can have similar anticompetitive effects.27

Economic analysis Vertical integration is often the efficient organizational form, because it reduces transactions costs, and the same can be said in favor of exclusive dealing. Benefits may include lower selling expenses by the supplier and lower search costs by the dealer. Also, the supplier may find it worthwhile to invest in developing the skills of its dealers only if it knows that the dealers will be devoting all their efforts to selling the supplier's products. Another factor is that the supplier may find it worthwhile to promote the products nationally if it knows that the dealers will not substitute a lower-priced nonadvertised brand when consumers flock to their stores.

Chicago school theory An exclusive contract between a supplier and a buyer means that all other sellers are foreclosed from selling to that buyer. Such a contract raises an anticompetitive concern of foreclosure, and buyers will be harmed by the elimination of other sellers. The Chicago school claims, however, that a buyer would not sign a contract that commits it to buying from one seller unless doing so made it better off than not signing the contract. And, furthermore, an acceptable contract to the buyer would actually be unprofitable to the seller. To see how this argument works, consider a situation in which there is currently one monopoly supplier with marginal cost c′. Given the demand curve D(P) in figure 7.6, it will set a price and quantity to equate marginal revenue, MR, and marginal cost. This results in a price of Pm. Now suppose a more efficient entrant comes along with marginal cost c″ which is less than c′. If no exclusive contract was signed, then the two firms compete, and, assuming their products are homogeneous and they compete by setting prices, the equilibrium will be that the new firm prices just below c′ and meets all demand of Q0. The incumbent seller does not want to match or undercut this price, since it will be pricing below cost. Furthermore, if it prices at c′ (which it is content to do), the new firm will not want to set a price higher than c′ as then the original firm will get all of the demand.

Figure 7.6 Effect of Exclusive Dealing on Profit and Welfare

Now consider the incumbent seller offering a contract to a buyer that excludes her from buying from any other firm. If she signs the contract, she knows that the seller will take advantage of its monopoly position and charge a price of Pm. In that case, the buyer gets surplus equal to triangle A. If the buyer does not sign the contract, then she will face a price just below c′ from the new firm and receive surplus equal to the sum of A, B, and C. A buyer must then be paid an amount equal to B + C in order to sign the contract. However, the amount that the original seller must pay the buyer to accept exclusivity exceeds monopoly profit, which is measured by rectangle B. Thus, the seller is worse off by an amount equal to the deadweight welfare loss C from pursuing exclusive dealing. If exclusive dealing is used, the Chicago school argues, it must then be for efficiency reasons that serve to expand the surplus in some manner. As is typical, the Chicago school analysis is correct but not universally applicable. More recent analysis has found rationales for exclusive dealing predicated not on efficiency but on harming rivals and consumers. A common feature to these models is that the presence of an externality creates the opportunity for some agents (those who are party to the contract) to extract surplus from those excluded from the contract. In the first model, a properly designed exclusionary contract between the incumbent seller and buyers can extract surplus from a more efficient entrant. This contract can be anticompetitive, because it reduces the likelihood of a more efficient firm entering. In the second model, entry is once again deterred, but now it is because the incumbent seller signs a contract with only some of the buyers and this transfers surplus from the remaining buyers. Once again, efficient entry is prevented.
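A minimal numerical sketch, assuming a linear demand curve and arbitrary cost levels (none of which appear in the text), makes the Chicago school comparison concrete: the payment needed to sign the buyer, B + C, necessarily exceeds the monopoly profit B that exclusivity protects.

# A minimal sketch of the figure 7.6 regions under a hypothetical linear demand
# curve D(P) = a - P. Region A is consumer surplus at the monopoly price, B is
# monopoly profit, and C is the deadweight loss relative to pricing at the
# incumbent's cost c1 (c' in the text).

a = 100.0    # demand intercept (hypothetical)
c1 = 40.0    # incumbent's marginal cost, c' (hypothetical)

p_m = (a + c1) / 2.0      # monopoly price with linear demand and constant cost
q_m = a - p_m             # quantity at the monopoly price
q0 = a - c1               # Q0, quantity when price is (just below) c1

A = 0.5 * (a - p_m) * q_m          # consumer surplus at the monopoly price
B = (p_m - c1) * q_m               # monopoly profit
C = 0.5 * (p_m - c1) * (q0 - q_m)  # deadweight loss triangle

print(f"A = {A:.0f}, B = {B:.0f}, C = {C:.0f}")
print(f"Payment needed for exclusivity (B + C): {B + C:.0f}")
print(f"Monopoly profit it would protect (B):   {B:.0f}")
# Since C > 0 whenever demand slopes down, B + C > B: pure exclusivity costs the
# incumbent more than it is worth, which is the Chicago school conclusion.

Note that the entrant's cost c″ never enters the calculation; the comparison turns only on the surplus the buyer gives up by facing the monopoly price rather than a price of c′.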

Exclusionary contracts that extract surplus from a more efficient entrant The first step in this analysis is to expand the set of contracts between the incumbent seller and the buyer.28 An exclusionary contract is now described by a pair (p, x), where p is the price that a buyer pays for each unit it purchases from the incumbent seller, and x is a penalty it must pay to that seller for each unit the buyer buys from other sellers. The innovation in the contract is the penalty. A pure exclusive dealing contract can be thought of as one in which x is infinity, so that once signed, a buyer would never buy from any other firm. As we will see, lowering the penalty can induce the buyer to accept the contract, and this can reduce welfare. Suppose the buyer and the incumbent seller agree to a contract (c′, x) so it is supplied at cost. With this contract in place, the buyer effectively faces a price of pE + x from the entrant when the entrant charges a price of pE, since the buyer must pay x to the incumbent seller for each unit she buys from the entrant. Thus, the buyer will only buy from the entrant if pE + x < c′. Given it enters, the equilibrium would then have the entrant pricing just below c′ − x (as long as x is sufficiently small so that c″ < c′ − x) and selling Q0 units. What have the buyer and seller gained from such a contract? Without the contract, the prospective firm enters and sells just below c′, in which case the incumbent seller earns zero profit and consumers get surplus of A + B + C. With the contract, the consumer gets the same surplus (as they pay a price of c′ − x but also a penalty of x), but the incumbent seller receives profit of xQ0 from the penalty clause. In sum, they are better off, and, to break the buyer’s indifference about signing the contract, the incumbent seller could provide an initial payment to the buyer for signing the contract and thereby share the anticipated gain of xQ0. What the incumbent seller and buyer have done is to extract some of the surplus created by the entry of a more efficient firm. Without the contract, the entrant earns profit of (c′ − c″)Q0, but now it earns only (c′ − c ″ − x)Q0. In fact, the buyer and incumbent seller could extract all of the entrant’s surplus by setting x = c′ − c ″. As shown in figure 7.6, the incumbent seller and buyer would then add rectangle E to their original surplus of A + B + C. We have shown how an exclusive dealing contract that allows the buyer an escape clause with a penalty can both enhance the incumbent seller’s profit and be attractive to the buyer, something the Chicago school thought was not possible. But what we have not shown is that it is anticompetitive; entry still occurs, and price falls. Thus far, social welfare—the sum of profit and consumer surplus—has remained the same; it is just that we have shifted it around among the various parties. Let us now enrich the model by first assuming a fixed cost to entry, k > 0. The net profit to an entrant is then (c′ − x − c″)Q0 − k. In that case, if x = c′ − c″, then the entrant would not enter, since its variable profit is zero so it could not cover its cost of entry. But the incumbent seller and the buyer do not want to prevent entry, as then the additional surplus is never created and thus there is nothing to extract. The next modeling change is to suppose that k is random, unknown to the incumbent seller and buyer but known to the potential entrant when it makes its entry decision. The potential entrant will enter when k is sufficiently low. 
For example, when x = 0, the entrant will enter when k ≤ (c′ − c″)Q0, as it will charge a price (just below) c′. In that case, the probability of entry is the probability that k is less than (c′ − c″)Q0. The point is that it can be optimal for the incumbent seller and buyer to set x > 0 in order to extract some surplus, in spite of it having the undesirable implication of lowering the probability of entry. Now exclusive dealing results in a welfare loss, because it is less likely that a more efficient firm will enter. In those instances in which there would have been entry without exclusive dealing but there is not now, price is higher. The incumbent seller and buyer are willing to forgo that surplus in exchange for getting a higher fraction of surplus when there is entry.
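The trade-off can be illustrated with a short sketch. Everything here is a hypothetical assumption layered on the text's setup: a uniform distribution for the entry cost k, specific values for c′, c″, and Q0, and a simple grid search over the penalty x.

# A minimal sketch of the penalty-contract trade-off: a larger penalty x extracts
# more per unit when entry occurs but makes entry less likely. Assumes k is
# uniformly distributed on [0, k_max]; all numbers are hypothetical.

c1, c2 = 40.0, 20.0   # incumbent cost c' and entrant cost c''
q0 = 60.0             # Q0, quantity demanded at a price of c'
k_max = 2000.0        # upper bound of the uniform entry-cost distribution

def expected_penalty_revenue(x):
    """Expected penalty revenue x * Q0 * Pr[entry], the only part of the
    incumbent-buyer coalition's joint surplus that depends on x."""
    margin = c1 - x - c2                       # entrant's margin per unit if it enters
    if margin <= 0:
        return 0.0                             # entry never occurs
    prob_entry = min(margin * q0 / k_max, 1.0)
    return prob_entry * x * q0

candidate_penalties = [i * (c1 - c2) / 100.0 for i in range(101)]
best_x = max(candidate_penalties, key=expected_penalty_revenue)

print(f"Coalition's preferred penalty: x = {best_x:.1f}")
print(f"Entry probability at that x:   {(c1 - best_x - c2) * q0 / k_max:.2f}")
print(f"Entry probability at x = 0:    {(c1 - c2) * q0 / k_max:.2f}")
# The preferred x is strictly positive, so entry by the more efficient firm
# becomes less likely -- the welfare loss described in the text.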

Exclusionary contracts that extract surplus from some customers Let us now turn to a second way in which an exclusionary contract is profitable to the incumbent seller, acceptable to buyers, and welfare-reducing.29 Suppose the situation depicted in figure 7.6 is for a single buyer and that there are three identical buyers, whom we will call Manny, Moe, and Jack. Each buyer has demand curve D(P), so that total demand is 3D(P). In the absence of exclusionary contracts, the net profit from entry is (c′ − c″)3Q0 − k. Let us make the following assumption:

2(c′ − c″)Q0 > k > (c′ − c″)Q0.

Alternatively stated (and referring back to figure 7.6), 2E exceeds k, but k exceeds E. Thus, if a new firm is able to sell to two or three buyers, variable profit is sufficient to cover the cost of entry. However, if it only gets one buyer, then entry is unprofitable. Consider the incumbent seller entering into an exclusive dealing arrangement with Manny and Moe whereby they are prohibited from buying from any other firm. In that case, entry is unprofitable, as the entrant has only one buyer to whom to sell. Now let us turn to the situation faced by Manny. By not signing the contract, the most he can receive in surplus is A + B + C, which he gets when entry occurs. The least surplus he can get from signing the contract is A. Thus, Manny will surely sign the contract if the incumbent seller provides compensation of B + C plus a little bit more. The same is true for Moe. Therefore, the cost to the incumbent seller from locking in Manny and Moe and thereby deterring entry is no more than 2(B + C). The value of deterring entry is 3B as, regardless of whether a buyer has signed an exclusive dealing contract, the incumbent seller charges the monopoly price. The incumbent seller can then offer an exclusive dealing contract that Manny and Moe will accept as long as 3B exceeds 2B + 2C. It is not difficult to find examples where that does indeed occur. We conclude that the incumbent seller can successfully deter entry by a more efficient firm by signing some (but not all) of the buyers to an exclusive dealing contract. The surplus extraction in this case is not from the entrant, because the entrant does not enter, and thus there is no surplus to extract. The surplus comes instead from Jack; that is, the buyer left without a contract. Jack ends up with surplus of only A, while Manny and Moe each receive A + B + C. And price remains at Pm, with an inefficiently high cost of c′. Exclusive dealing lowers surplus by an amount of 3(C + E) − k; there is the deadweight welfare loss of 3C and the increase in total cost of 3E, but then the entry cost is avoided. This is strictly positive since, by assumption, 3E exceeds k. If the incumbent seller is clever, it may be able to play the buyers off against each other so that it gets them to agree to exclusive dealing for almost nothing! Suppose it offers an exclusive dealing contract to Manny and Moe with a small payment. And it tells each of them that if he refuses the deal, it will be offered to Jack. If Manny and Moe anticipate the other one signing the deal and anticipate Jack signing the deal if one of them refuses, each will believe that two buyers will end up locked in to buying from the incumbent seller. Thus, regardless of whether one of them signs the deal, they expect entry to be deterred, which means a surplus of A. If they sign it, they get a surplus of A plus the small payment that the incumbent seller is offering. In that case, Manny and Moe should sign the contract. Exclusive dealing occurs, entry is deterred, and the incumbent seller ends up with surplus of almost 3B.30 In sum, the Chicago school argues that for an incumbent seller to induce all buyers to sign an exclusive dealing agreement, it requires payment exceeding monopoly profit. From this argument they concluded that exclusive dealing does not work as a strategy to deter efficient entry. What our analysis shows is that it does not require signing all buyers to an exclusionary contract to deter entry. And if that is true, then the amount

of required payment can be less than monopoly profit, in which case exclusive dealing can be profitable for the incumbent seller. Because an entrant has to earn enough variable profit to cover its fixed cost (that is, there are scale economies), the trick is for the incumbent firm to sign enough buyers—to have enough customer foreclosure—so that a new firm lacks that critical mass to make entry profitable. This point was clearly at play in Dentsply International (2005).31 Dentsply was the dominant manufacturer of artificial teeth in the United States with a market share of 75–80 percent. The case concerned its practice of requiring dealers not to carry certain lines of artificial teeth supplied by rival manufacturers. The Court ruled that this requirement caused the sales of the other manufacturers to be "below the critical level necessary for any rival to pose a threat to Dentsply's market share." The court ordered Dentsply to stop requiring that a dealer make Dentsply its exclusive source of artificial teeth.

Contracts that reference rivals An exclusive dealing contract is a special case of a contract that references rivals (CRR), which, while it may have a legitimate basis, can be exclusionary.32 The defining feature of a CRR is that the terms of a contract between a buyer and a seller depend on the buyer's transactions with a rival to that seller. Exclusivity is one such feature in that it requires a buyer to not purchase from any other sellers. More generally, a CRR could make it more costly to buy from another seller but not prohibit it outright. A common class of CRRs are share contracts, which specify that the terms of the agreement—such as price—depend on the share of a buyer's purchases with that seller. A share contract could take the form of loyalty discounts that depend on the fraction of purchases with other sellers (with larger discounts when a buyer purchases a smaller fraction from other sellers). Concord Boat (2000) was a case involving the use of a share contract between Brunswick—the dominant producer of boat engines—and its customers, which were boat producers such as the plaintiff, Concord Boat.33 If a customer had 60 percent of its purchases of engines from Brunswick, it would receive a 1 percent discount. The discount increased to 2 percent when a customer had 70 percent of its purchases from Brunswick, and to a 3 percent discount at 80 percent. Such discounts are to be distinguished from volume discounts, whereby a buyer pays a lower price when the amount purchased from a seller is greater. In the case of a share contract with loyalty discounts, the price is lower when the fraction of a buyer's purchases from that seller is greater. Implicitly, a buyer pays a tax—in terms of lost discounts—when buying from another seller. In this manner, rival firms are disadvantaged, and share contracts can have some of the exclusionary effects associated with exclusive dealing.34

Share contracts can go beyond loyalty discounts. Consider their use in the market for heavy-duty class 8 truck transmissions. These transmissions are purchased by the manufacturers of eighteen-wheeler trucks that do long-distance hauling, along with other trucks, such as cement mixers and garbage trucks. Eaton was the dominant producer of transmissions and, after a significant decline in demand in 1999–2000, it began to enter into long-term CRRs with customers.
These contracts were at least five years in length and generally had the feature of providing a fixed payment if some minimum share was reached and then rebates in excess of that minimum. For example, International Truck and Engine received an up-front payment of $2.5 million and would lose that payment if it purchased less than 87 percent from Eaton. The threshold varied among customers; for example, Freightliner had a threshold of 92 percent and PACCAR had one of 90–95 percent. These contracts locked up more than 85 percent of the market for at least five years.35
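The implicit tax created by a share-based loyalty discount can be seen with a small calculation. The sketch below is only loosely patterned on the Concord Boat schedule described above: the 60/70/80 percent tiers and 1/2/3 percent discounts come from the text, while the list price, the rival's price, and the buyer's total requirement are hypothetical.

# A minimal sketch of how a share-based loyalty discount taxes purchases from a
# rival. The tier structure follows the text; prices and quantities are hypothetical.

LIST_PRICE = 10_000.0   # dominant seller's list price per engine (hypothetical)
RIVAL_PRICE = 9_500.0   # rival's price per engine (hypothetical)
TOTAL_NEED = 100        # engines the buyer needs this year (hypothetical)

def discount(share):
    """Discount off list price given the share bought from the dominant seller."""
    if share >= 0.80:
        return 0.03
    if share >= 0.70:
        return 0.02
    if share >= 0.60:
        return 0.01
    return 0.0

def total_cost(units_from_rival):
    units_from_dominant = TOTAL_NEED - units_from_rival
    share = units_from_dominant / TOTAL_NEED
    dominant_bill = units_from_dominant * LIST_PRICE * (1.0 - discount(share))
    return dominant_bill + units_from_rival * RIVAL_PRICE

all_dominant = total_cost(0)     # full loyalty: the 3 percent discount tier
partial_rival = total_cost(25)   # shifting 25 engines drops the buyer to the 2 percent tier

print(f"All engines from the dominant seller: ${all_dominant:,.0f}")
print(f"25 engines from the cheaper rival:    ${partial_rival:,.0f}")
print(f"Implicit tax on using the rival:      ${partial_rival - all_dominant:,.0f}")
# The saving on the 25 rival engines is more than offset by the higher price paid
# on the 75 engines still bought from the dominant seller -- the lost-discount tax
# the text describes.

Antitrust law and policy
Historical Development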

Originally, the courts treated exclusive dealing harshly. Exemplifying this position is a 1922 case, Standard Fashion Company v. Magrane-Houston Company. The Supreme Court found an exclusive dealing arrangement between a manufacturer of dress patterns and a retail store to be illegal on the grounds that rival pattern manufacturers were foreclosed from the market.36 The Supreme Court approved an evaluation of the problem given by the Circuit Court of Appeals:

The restriction of each merchant to one pattern manufacturer must in hundreds, perhaps in thousands, of small communities amount to giving such single pattern manufacturer a monopoly of the business in such community.

The Circuit Court went on to observe that this could lead to ever higher concentration in the pattern business nationally “so that the plaintiff … will shortly have almost, if not quite, all the pattern business.” This analysis is not sound, however, since it ignores the issue of what a pattern manufacturer must give up to obtain an exclusive dealing arrangement in the first place. That is, the retail stores can benefit by tough bargaining with potential pattern suppliers before signing an exclusive arrangement with one of them. In a case decided in 1961, the Supreme Court refused to strike down an exclusive dealing arrangement between Tampa Electric Company and Nashville Coal Company.37 The reason was that the arrangement involved only about 0.77 percent of total coal production, which was insufficient to qualify as a substantial lessening of competition in the relevant market. However, in a 1949 case involving an exclusive dealing arrangement between Standard Oil Company of California and about 6,000 retail service stations, the Court held that 6.7 percent of the market was sufficient for illegality.38 Hence, whether exclusive dealing is likely to be illegal seems to have depended on the market shares involved. Those cases reflect a judicial perspective whereby competitive harm is presumed when “enough” of a market is foreclosed to rivals. However, no economic argument exists for such a presumption, and the courts have since adopted a more sophisticated view. The courts now consider how an exclusive dealing arrangement, or any form of exclusionary conduct, impacts competition and whether it leads to consumer harm. A plaintiff in a case must establish that the seller has significant market power (often the court must be convinced it is dominant), show that exclusive dealing extends the seller’s market power, and that exclusivity is not simply the outcome of a competitive process. We will see some of these issues at play in our review of the Visa–Mastercard and Intel cases. Visa–MasterCard With transactions surpassing $1 trillion annually, the general-purpose charge and credit card market is one of the most important financial markets in the United States. The four most significant cards are well known to all readers—Visa, MasterCard, American Express, and Discover. A judicial decision in this market affirmed the importance of exclusive dealing as an antitrust issue.39 Visa and MasterCard are networks owned by thousands of member banking institutions. Member banks, such as Citibank, may either be an “issuer” (that is, they issue Visa, MasterCard, or both) or an “acquirer,” or both. When an individual seeks to make payment using, say, Visa, the merchant conveys the transaction to an acquiring bank (which is contracted with the merchant), which then relays it to the issuing bank (which issued the card to the individual). If the transaction is approved, the issuer retains a fee equal to 1.4 percent of the transaction amount, and the acquirer retains 0.6 percent. Thus, a merchant with a $100 transaction actually receives $98. By comparison, American Express and Discover work quite differently by performing both issuing and acquiring roles. They are similarly compensated by merchants, with American Express’s transaction fee typically being 2.73 percent and Discover’s about 1.5 percent.
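As a quick illustration of the fee arithmetic just described, the sketch below computes what a merchant keeps under each network's fee structure; the percentages are the ones quoted in the text, and the $100 sale and function name are hypothetical.

# A minimal sketch of the merchant's net proceeds under the card fees quoted in
# the text, applied to a hypothetical $100 transaction.

def merchant_receives(amount, fee_rates):
    """Amount kept by the merchant after deducting each listed fee percentage."""
    return amount * (1.0 - sum(fee_rates))

sale = 100.0
visa_mc = merchant_receives(sale, [0.014, 0.006])  # 1.4% issuer fee + 0.6% acquirer fee
amex = merchant_receives(sale, [0.0273])           # ~2.73% American Express fee
discover = merchant_receives(sale, [0.015])        # ~1.5% Discover fee

print(f"Visa/MasterCard:  merchant keeps ${visa_mc:.2f}")
print(f"American Express: merchant keeps ${amex:.2f}")
print(f"Discover:         merchant keeps ${discover:.2f}")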

This case dealt with two issues. The first was the “dual governance” feature to Visa and MasterCard in that many banks were member-owners of both networks. The courts did not find that to be a violation of the Sherman Act. The second issue is the one that concerns us: the exclusionary nature of the agreement made between the Visa–MasterCard networks and the member banks. Visa’s by-law 2.10(3) and MasterCard’s Competitive Programs Policy prohibited member banks from issuing the cards of certain other competitors, including American Express and Discover. The claim of the plaintiffs was that this was a violation of Section 1 of the Sherman Act. In applying the rule of reason, the U.S. Court of Appeals nicely summarized the procedure:40 As an initial matter, the government must demonstrate that the defendant conspirators have “market power” in a particular market for goods or services. Next the government must demonstrate that within the relevant market, the defendants’ actions have had substantial adverse effects on competition, such as increases in price, or decreases in output or quality. Once that initial burden is met, the burden of production shifts to the defendants, who must provide a procompetitive justification for the challenged restraint. If the defendants do so, the government must prove either that the challenged restraint is not reasonably necessary to achieve the defendants’ procompetitive justifications, or that those objectives may be achieved in a manner less restrictive of free competition.

To establish market power, it was first noted that this industry is highly concentrated. In 2001, Visa and MasterCard had 73 percent of all transactions (46 and 27 percent, respectively), with American Express (20 percent) and Discover (6 percent) having most of what remained. Of course, high concentration is typically necessary but not sufficient for there to be market power. Evidence was put forth that in spite of its attempts to get banks to issue its card, American Express had not convinced a single bank to do so because the exclusive dealing agreement meant that the bank would lose its membership in the Visa and MasterCard consortiums. By comparison, American Express had been successful in getting banks to issue its card outside the United States, in countries where this exclusionary provision was absent. (In 1996, the EC forced Visa and MasterCard to eliminate the exclusionary provision in Europe.) It was felt that the exclusion of these other cards lessened competition. Though the defendants sought to provide an efficiency rationale for their exclusionary contracts, it proved to be unconvincing. In 1999, U.S. District Judge Barbara Jones found this to be an unreasonable restraint of trade and in violation of the Sherman Act. As part of the judgment, Visa and MasterCard were required to repeal the exclusionary features of their contracts with banks; in particular, banks are allowed to issue the cards of rivals to Visa and Mastercard without losing their relationship with Visa and Mastercard. This decision was affirmed by the U.S. Court of Appeals for the Second Circuit in 2003. Intel Intel has been the dominant manufacturer of central processing units (CPUs) since IBM adopted Intel’s CPU for the first IBM personal computer. Intel’s primary customers are original equipment manufacturers (OEMs), such as IBM, Dell, and Hewlett-Packard. Not long after Intel introduced the x86 design in 1985, its much smaller rival Advanced Micro Devices (AMD) began to design its own version. Though initially AMD’s product was generally considered to be inferior, by the late 1990s it had achieved design parity with Intel in some submarkets. Suddenly, Intel was faced with a serious threat to its dominant position.41 In response, Intel was soon signing OEMs to contracts that either required exclusive dealing with Intel or were some form of share contracts. For example, HP’s contract had it receiving rebates only if it purchased at least 95 percent of business desktop CPUs from Intel. There were other exclusionary conditions, such as HP could not sell AMD-based desktops to large businesses and not through direct distribution and retailers. In response to these contracts as well as evidence of other exclusionary actions, Intel found itself under

investigation by both the FTC and the EC. Those agencies claimed that Intel had used its dominant position in the market for x86 microprocessors in a way that violated Section 5 of the FTC Act (which prohibits "unfair methods of competition") and Article 82 of the EC Treaty (which prohibits abuse of a dominant position in the market). Among its claims, the FTC argued that Intel's market dominance was extended because Intel used market share discounts to tax OEM purchases of non-Intel CPUs. Critical to arguing this case was first establishing that Intel had significant market power. As reported in Table 7.3, it is clear that Intel had a very large market share and only one competitor. Furthermore, this was (and still is) a market for which entry is extremely costly, and as a result, potential competition is not a serious threat. The cost of developing a CPU design and building a manufacturing plant was extremely high. A state-of-the-art manufacturing plant could cost as much as $3 billion. Once having the product, there is the additional cost associated with developing brand recognition (which could involve large marketing expenditures) and a reputation among OEMs for reliability (which could take years). Given its reputation, its technology, and its market position, Intel was a dominant firm. With dominance established, the agencies then argued that Intel had abused that dominance for the purpose of monopolizing the market.

Table 7.3 Worldwide Intel and AMD Market Shares in x86 PCs, 2002–2006 (%)

              Unit Share              Revenue Share
Year          Intel       AMD         Intel       AMD
2002          83          17          87          13
2003          85          15          88          12
2004          86          14          89          11
2005          84          16          88          12
2006          79          21          84          16

Source: Patrick DeGraba and John Simpson, "Loyalty Discounts and Theories of Harm in the Intel Investigations," Journal of Antitrust Enforcement 2 (October 2014): 170–202.

Intel lost the EC case in 2009 and the FTC case in 2010,42 and was subject to large financial penalties and conduct remedies. According to the FTC's Order, Intel was prohibited from providing benefits to OEMs in exchange for them buying exclusively from Intel or refusing to buy from other CPU manufacturers. Intel was also required not to retaliate against OEMs if they did business with competitors of Intel. Fines levied by the EC and damages collected by AMD through private litigation totaled more than $2 billion.

Tying

Tying is the practice of a seller conditioning the purchase of one product on the purchase of another product. For example, the FTC pursued a case in which a drug maker required patients to purchase its blood-monitoring services along with its medicine to treat schizophrenia. While the drug maker had a patent on the medicine, it was one of many companies offering blood-monitoring services to those using the drug. Earlier in the chapter we used the example of IBM requiring its tabulating machine customers to buy its tabulating cards from IBM. Many similar examples have arisen in antitrust cases, including the tie-in of salt to salt dispensers, ink to duplicating machines, cans to can-closing machines, and staples to stapling machines. These examples have the characteristic that the customer buys or leases a "machine" and then must purchase the inputs that are used with the machine from the same supplier. The inputs used with the machine will vary with the intensity of use that various customers make of the machine. This variable

proportions case is one type of tying arrangement. Tying in terms of fixed proportion also occurs. For example, a movie distributor may require a theater owner to take movie B if it wants movie A. This arrangement is generally referred to as “block booking.” Another example is Microsoft requiring that its Windows operating system be purchased with Internet Explorer. Historically, the courts have viewed tying as a device for extending monopoly over one product, such as duplicating machines, to the tied product, ink. This view is known as the “leverage theory” of tying. In a 1912 case, Chief Justice Edward D. White offered the following observations on the danger of tying (the judge was distressed that the majority opinion declared the tie to be legal): Take a patentee selling a patented engine. He will now have the right by contract to bring under the patent laws all contracts for coal or electrical energy used to afford power to work the machine or even the lubricants employed in its operation. Take a patented carpenter’s plane. The power now exists in the patentee by contract to validly confine a carpenter purchasing one of the planes to the use of lumber sawed from trees grown on the land of a particular person.… My mind cannot shake off the dread of the vast extension of such practices which must come from the decision of the court now rendered. Who, I submit, can put a limit upon the extent of monopoly and wrongful restriction which will arise.43

An empirical problem with this argument is that it does not fit the facts of many cases against tying: Can it sensibly be accepted that G.S. Suppiger Co. tied salt to its salt-dispensing machinery as part of a scheme to monopolize the American salt market? Did Morgan Envelope tie its toilet paper to its dispenser as part of a grand scheme to monopolize the American bathroom tissue market? Why do we see again and again … cases involving the tying of rivets, staples, windshield wipers, repair parts, varnish, etc., when the tying monopolist’s share of the market for the tied product remains minuscule?44

We now know the logic of the original leveraging argument is flawed for the same reasons that the argument for vertical integration being an extension of monopoly is invalid (at least for fixed-proportions production). Suppose a consumer is interested in purchasing a system, such as a smartphone and headphones, that requires components A and B. A consumer is willing to pay up to $500 for this system. Product A is produced by a monopolist at a cost of 175, while the market for product B is competitive with unit cost of 100. In the absence of tying, product B would be priced at 100, and the monopolist would price product A at 400, so that the total expenditure to the consumer would be exactly at the consumer’s maximum willingness to pay. The monopolist’s profit would be 225. Now consider the monopolist engaging in tying by producing both components and selling systems to consumers. The most the monopolist can charge is 500, and it costs the monopolist 275 to produce it, so the monopolist makes 225; the same as without tying. The point is that there is one monopoly profit, and the monopolist can, under certain conditions, extract it without tying. In fact, tying can actually reduce profit. Suppose the version of component B produced by other firms, let us call it B*, is superior to that produced by the monopolist, so that consumers are willing to pay up to $550 for a system with components A and B*. With competition, the price of B* is at its cost of 100, in which case the monopolist can charge 450 for product A and earn profit of 275. If it engages in tying, then it can only charge a price of 500 for the system, as it offers the inferior component B. Its profit declines to 225. Key here is that its market power for component A allows the monopolist to extract the additional surplus created by the superior complements offered by competitors. That additional surplus vanishes with tying, and with it vanishes some of the monopolist’s profit. Of course, tying has been used by firms, so if it is not to extend monopoly power, then what is its purpose? To begin, clearly efficiency reasons exist for some physical ties; that is, the products are physically (and not contractually) linked. There are many examples, such as the automobile. It is possible to imagine a

car sold as a group of separate products: chassis, tires, radio, battery, and so forth. Since consumers are interested in the “package,” transactions costs are reduced by the tie-in. Another efficiency rationale that has been used by defendants in tying cases is one of quality control—the tied good is necessary for the satisfactory performance of the tying good. IBM argued that it had to tie its cards to its tabulating machines because inferior cards would cause the machines to malfunction, resulting in a loss of goodwill from its customers. Of course, if this claim is correct, such tying is socially beneficial. The courts have generally not agreed with this argument, however, and have observed that the manufacturer of the tying good could simply state the specifications necessary for the tied goods. It would be in the interests of the customers to use only the “proper” tied goods. In a 1971 case, restaurant chain Chicken Delight used such a quality control defense unsuccessfully.45 Chicken Delight licensed several hundred franchisees to operate its stores. It did not charge its franchisees a franchise fee or royalties; instead it allowed its franchisees to use its trademark and follow its business methods in exchange for purchasing their cooking equipment and certain supplies from Chicken Delight. The prices for these purchases were higher than the prices charged by other suppliers. The court held that Chicken Delight could have achieved the necessary quality control by specification of the appropriate cooking equipment and supplies. It was therefore unnecessary, in the court’s view, for Chicken Delight to require purchases of these items by franchisees. Several nagging questions remain about the view that simply stating the required specifications is a perfect substitute for the tie-in. One point is that it may be costly to convince buyers of the need for the specifications stated when cheaper alternatives exist. Another point is that Chicken Delight might have a free-rider problem. The reputation of Chicken Delight could be damaged if a few franchisees decided to use cheap, low-quality equipment and supplies, knowing that customers in general would identify with the regional reputation of Chicken Delight for good quality. That is, the few franchisees using inferior supplies would continue to have customers who relied on the overall quality of all Chicken Delight stores (even though their loyal repeat business might be small). Hence, these franchisees could free-ride off the high quality of the rest, and tying the supplies might be a way of combating this problem. A successful defense using the quality control argument was made in a 1960 case involving a cable television system supplier, Jerrold Electronics.46 Jerrold sold only on a complete systems basis, which included installation, equipment, and maintenance. However, the legality of Jerrold’s tying was restricted to the “early years” of the industry, when the technology was in its infancy. After the technology had been in existence for some years, ensuring the availability of competent independent suppliers and service personnel, such tying was no longer legal. A third argument for tying is that it is a way to evade price regulation. For example, when gasoline was under maximum price controls in the 1970s, the excess demand was great. Cars lined up for blocks in certain cities to buy gasoline. 
Because the price could not be increased to clear the market, a gasoline station might tie its gasoline to other products or services to avoid the price ceiling. For example, one station was alleged to have offered gasoline at the controlled price to anyone who purchased a rabbit’s foot for $5! Perhaps the most convincing efficiency rationale for contractual tying is that it is a form of price discrimination. Because the argument is a bit subtle, we take our time developing it later. Price discrimination is not necessarily anticompetitive and, in fact, generally raises social welfare though perhaps benefiting firms at the cost of consumers. Are there any (valid) anticompetitive theories for tying? Starting in the 1990s, some have been developed and we will review them as well. Price discrimination

Figure 7.7 depicts the usual profit-maximizing monopolist equilibrium where the monopolist is permitted to select a single price. The solution is determined by the usual marginal revenue (MR) equals marginal cost (MC) condition, and the price equals P*. The monopolist’s profit is the area ABCP*. As explained in chapter 3, the area under the demand curve equals the total willingness-to-pay by consumers, and the area under the marginal cost curve equals total variable cost. Thus total potential profit is larger than the actual profit by the amount of the two shaded triangles in the figure. (Total potential profit is equal to the area of triangle RSC; it would be the profit under perfect price discrimination.) In other words, it is in the monopolist’s interest to try to extract a larger profit by price discrimination.
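A minimal sketch, assuming a linear demand curve and a constant marginal cost (neither taken from the text), shows how much surplus a single monopoly price leaves uncaptured relative to the potential profit described for figure 7.7.

# A minimal sketch comparing profit at the single monopoly price with total
# potential profit (the area between demand and marginal cost). Demand and cost
# are hypothetical: D(P) = a - P with constant marginal cost mc.

a = 120.0   # demand intercept (hypothetical)
mc = 20.0   # constant marginal cost (hypothetical)

p_star = (a + mc) / 2.0   # single profit-maximizing price, where MR = MC
q_star = a - p_star       # quantity sold at that price

actual_profit = (p_star - mc) * q_star            # the rectangle ABCP* in figure 7.7
potential_profit = 0.5 * (a - mc) * (a - mc)      # the triangle RSC: all surplus between demand and MC
left_on_table = potential_profit - actual_profit  # the two shaded triangles

print(f"Profit at the single price P*:                 {actual_profit:.0f}")
print(f"Potential profit with perfect discrimination:  {potential_profit:.0f}")
print(f"Surplus a discriminating scheme could capture: {left_on_table:.0f}")
# With linear demand and constant cost, a single price captures only half of the
# potential profit, which is the monopolist's motive for schemes such as tying.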

Figure 7.7 Potential Profit Not Captured through Single Monopoly Price

In many of the tying cases, the firm practicing tying had either a patent monopoly or some market power over the tying product. Hence it is useful to think in terms of tying as a pricing scheme designed to extract more of the consumers’ surplus or, in other words, appropriate some of the shaded triangular areas in figure 7.7. A simple block-booking example illustrates the point.47 Assume that the maximum values to theater owners for two movies are as follows: Maximum Value to Theater Owners

                Movie A     Movie B
Fox Theater     $100        $70
York Theater    $60         $80

To obtain the maximum revenue, the movie distributor has several possibilities, although some may be ruled out because of illegality or infeasibility. First, perfect price discrimination would entail charging separately the maximum value for each movie to each individual: Perfect Price Discrimination: Revenue = $100 + $70 + $60 + $80 = $310.

Thus the maximum potential revenue is $310. Charging separate prices may not be possible, though. Assume then that the distributor can charge only one price for each movie. An examination of the values in the table indicates that the best it could do would be to charge $60 for movie A and $70 for movie B. This “normal” pricing outcome would yield: Normal Pricing Case: Revenue = $60 + $60 + $70 + $70 = $260. There is one further possibility—block booking. Suppose that the distributor offers a bundle of movies A and B for a single price. The bundled price for movies A and B to the Fox Theater could be $170, but this would cause the York Theater to decline the bundle and generate a total revenue of only $170. Hence the best bundled price would be $140, inasmuch as this would keep both theaters as customers: Block Booking Case: Revenue = $140 + $140 = $280. The point is that block booking yields higher revenue than normal pricing. This approach does not always work, however. In this case, Fox is willing to pay more for A than is York, and York is willing to pay more for B than is Fox. If, for example, Fox will pay more for both movies, block booking gives results identical to normal pricing. We now turn to an illustration of tying of the variable-proportions type.48 The example is of a monopolist of copying machines who has two potential customers with different preferences for copying services. This difference in preferences is an essential part of the rationale for tying. The general idea is that tying gives the monopolist the ability to tailor its prices to fit its customers better than if it could charge everyone only a single price. The monopolist has constant costs of producing copying machines of $1,000 per unit. The customers derive no utility from the machines but only from the copying services that they produce in combination with paper. The number of packages of paper can be assumed to measure the quantity of services consumed by the customers. Assume that the two consumers have the demand curves for copying services (paper) as shown in figure 7.8:

Figure 7.8 Demand for Copying Services with Consumer Surpluses for Zero-Price Case

Demand by customer 1: q1 = 100 − p1
Demand by customer 2: q2 = 200 − 2p2.

For convenience, we assume that paper is supplied competitively and at a price of zero (the zero price makes the calculations easier). Consider the monopolist's problem when confronting the two demand curves in figure 7.8. Ignoring income effects, the areas under the demand curves and above the horizontal axes represent the consumer surpluses. That is, with the price of paper equal to zero, the areas give the surpluses of the consumers from copying services. These are shown as $5,000 and $10,000. So the monopolist could charge $5,000 per machine and sell to both customers or charge $10,000 and sell to only customer 2. That is, customer 1 would not pay any more than $5,000 for a copying machine, since $5,000 extracts his total surplus. (We assume implicitly that the monopolist cannot charge separate prices of $5,000 to customer 1 and $10,000 to customer 2.) The two cases give the following profits:

Profit at machine price of $5,000 = 2($5,000 − $1,000) = $8,000
Profit at machine price of $10,000 = $10,000 − $1,000 = $9,000.

Hence the monopolist would do better by selling at a price of $10,000 and not selling to customer 1. Now, assume that the monopolist decides to practice tying. It can buy the paper on the market (at the zero price) and mark it up to sell to its copying machine customers. That is, the monopolist simply says that it will now charge a fixed price F for the machine and a price per unit for paper p. All paper must be purchased from the monopolist, even though it is cheaper in the competitive market. This requirement may present

enforcement problems for the monopolist, but we will ignore them here. (It is also necessary to ensure that the two customers do not get together and share one machine.) Figure 7.9 shows the profit-maximizing solution.49 The monopolist should charge a machine price of $2,812.50 and a paper price of $25. As shown in figure 7.9, the first customer will buy 75 packages of paper at the $25 price. The consumer surplus is then $2,812.50, which is extracted completely through the purchase of the machine. The second customer buys 150 packages and also pays the $2,812.50 price for the machine. Hence, total profit under tying is

Total profit (tying) = 2($2,812.50 − $1,000) + $25 × (75 + 150) = $3,625 + $5,625 = $9,250.

Figure 7.9 Tying Solution: Maximizing Profit

The first term is the profit from the machine sales, and the second term is the profit on paper. Note that tying permits the monopolist to extract a higher overall profit. Tying gives the monopolist more flexibility: It can lower the machine price, thereby attracting customer 1 into the market, and make up for lowering the machine price by making profits on paper sales. Notice also that the monopolist is no longer limited to obtaining equal revenues from both customers. Under the solution shown in figure 7.9, the monopolist gets $4,687.50 from customer 1 and $6,562.50 from customer 2. They pay equal machine prices, but customer 2 buys more paper because of its higher demand for copying services. Hence the paper plays the role of metering the demand for copying services, where the customer with the higher demand pays more. In principle, the tying of paper would be irrelevant if an actual meter could be used to record usage and a fee could be charged based on usage. This means that tying here is a form of two-part pricing, where a fixed fee is paid plus a price per unit. Turning now to the public policy concern of whether tying is socially harmful, we can calculate total

economic surplus with and without tying. First, consider the no-tying policy. The monopolist would choose to charge $10,000 for the machine, as already explained. Hence there is consumer surplus of $10,000 (captured by the monopolist from customer 2, and customer 1 does not buy), and the monopolist incurs costs of $1,000. Welfare is

Total surplus (no tying) = $10,000 − $1,000 = $9,000.

Allowing tying, the total surplus can be seen easily by referring to figure 7.9. It equals the two consumer surplus triangles ($2,812.50 for customer 1 and $5,625 for customer 2) plus the two areas representing payments for paper ($1,875 for customer 1 and $3,750 for customer 2) less the costs for two machines ($2,000). Welfare is then

Total surplus (tying) = $12,062.50.

For this particular example, tying leads to a higher total surplus. However, this finding is not general. As is often true, price discrimination can either increase welfare or decrease it, depending on the particular situation. For the interested student, an example that leads to the opposite result is to change customer 2's demand to q2 = 130 − p. In this new situation, total surplus is $11,450 from no tying and only $11,225 for tying. This conclusion is true even though tying is more profitable for the monopolist.50 A key difference between the two situations is that in the latter case, the consumers have more similar demand curves, and the no-tying solution keeps both customers in the market. Note that if no tying has both consumers in the market, tying must cause total surplus to fall, because consumers go from a paper price equal to marginal cost to a price above marginal cost.

Modern theories of leveraging With the use of game theory, economists have developed new theories—some motivated by the Microsoft case—which show how tying can be anticompetitive. One argument supposes the tied product, in addition to being valued as a complement to the monopolized product, is intrinsically valued by some consumers. For example, consider the iPod, which initially had a monopoly on portable digital media players. Headphones are used with an iPod, but a consumer could buy headphones to use with other electronic equipment. There are then two markets: the systems market, whereby a consumer buys an iPod and headphones together, and the standalone market, in which a consumer buys only the headphones. A second way in which tying can be anticompetitive is when a firm supplying the tied good may also enter the monopolist's primary market. In these cases, tying may be profitable either because it allows a monopolist to extend its market power to the tied good or because tying serves to protect its monopoly market from competition.51

Tying That Extends a Firm's Monopoly Power Consider the following simple model with three products: A, B1, and B2.52 Product A is of value only in conjunction with B1 or B2. Thus, A and B1 are complements as are A and B2. We refer to the market in which consumers purchase A as the systems market, since consumers are interested in purchasing a system of A and B1 (denoted A/B1) or A and B2 (denoted A/B2). In the systems market, there is one type of consumer who has preferences over systems A/B1 and A/B2. In the standalone market, a consumer has preferences over B1 and B2. A consumer in that market is interested in B1 or B2 but not both and does not care about A. The maximum willingness to pay (MWP) for those consumers is

Systems market: MWP for A/B1 is v and for A/B2 is v + d (where d ≥ 0)

Stand-alone market: MWP for B1 is w and for B2 is w + e (where e ≥ 0)

Since d ≥ 0 and e ≥ 0, B2 is superior to B1 as it is more desired by consumers either in isolation or as part of a system. Two firms are in these markets. Firm 1 produces A and B1, while firm 2 produces B2. Firm 1 then has monopoly power in the systems market, while the two firms compete in the standalone market. Assume there are m consumers in the systems market and n consumers in the standalone market. The cost to produce A is cA and to produce B1 or B2 is cB. First consider an equilibrium when there is no tying, which means that firm 1 offers A and B1 (as separate products) and firm 2 offers B2. A consumer in the systems market buys either A/B1 or A/B2. Letting pi denote the price charged for product i, we will show that one equilibrium has prices of pB1 = cB, pB2 = cB + e, and pA = v + d − cB − e, and all consumers in both markets buy B2. Note that firm 1 charges cost for B1, firm 2 charges a premium for B2 by an amount equal to (or just a little below) the additional surplus it provides to consumers over B1, and firm 1 charges a price for A so that the price of A/B2 just equals the MWP of v + d. The profit of firm 1 is m(v + d − cB − e − cA) and of firm 2 is e(n + m) − k, where k is a fixed cost incurred by firm 2.53 To show that this is an equilibrium, let us first derive conditions whereby consumer behavior is optimal. In the standalone market, a consumer can buy B1 and receive surplus of w − cB or buy B2 and receive surplus of w + e − (cB + e) = w − cB. Being indifferent between these two alternatives, a consumer is content to buy B2. (As mentioned above, we can also have firm 2 pricing a little below cB + e, so that consumers strictly prefer to buy B2.) In the systems market, consumers are presumed to buy A/B2, and this yields surplus of v + d − (v + d − cB − e) − (cB + e) = 0. They could instead buy A/B1, which has surplus of v − (v + d − cB − e) − cB = e − d; in exchange for forgoing the additional surplus of d from the superior complement to A, consumers pay e less. Let us assume d > e, so that consumers prefer to purchase A/B2. This assumption says that the incremental value of B2 (compared to B1) is higher in the systems market than in the standalone market. We next need to show that each firm's price is optimal given the other firm's price and that both firms earn nonnegative profit. Toward this end, first note that consumers in the systems market are buying A/B2 and paying a price exactly equal to their MWP. Since a higher price for either A or B2 would induce them to demand zero, firm 1 does not want to price any higher, as then its demand and profit would be zero. And it does not want to price any lower, since it is already selling to everyone in the systems market. Therefore, firm 1's price for A is optimal. With regard to optimizing prices in the standalone market, firm 1 does not want to price below its cost, as that would incur losses, and pricing above cost would still result in zero demand. Given that firm 1 prices at cost, a consumer always has the option of buying B1 and receiving surplus of w − cB. Since the surplus from buying from firm 2 is w + e − pB2, a consumer prefers to buy B2 only when

w + e − pB2 ≥ w − cB,
w + e − pB2 ≥ w − cB

or

pB2 ≤ cB + e.
Thus, firm 2 does not want to price above cB + e, as then its demand is zero, and there is no reason for it to price any lower, since it is already selling to all consumers in both markets when it prices at cB + e.

Firms' prices in the standalone market are then optimal.54 The final condition to check is that both firms are earning nonnegative profit. For firm 1, this requires v + d − cB − e − cA > 0, which is true as long as v is sufficiently large. For firm 2, it requires e(m + n) − k > 0. Assume that both conditions hold. This completes the proof that, in the absence of tying, it is an equilibrium for firm 1 to price A at v + d − cB − e and B1 at cB and for firm 2 to price B2 at cB + e. At those prices, all consumers in the systems market buy A/B2 and all consumers in the standalone market buy B2.

Now suppose firm 1 engages in tying by offering a physically bundled product of A and B1, as well as B1 separately. A consumer in the systems market either buys A/B1 or nothing.55 It should be fairly clear that the equilibrium prices are pB1 = cB, pB2 = cB + e, and pA/B1 = v, with all consumers in the systems market buying A/B1 and all consumers in the standalone market buying B2. Consumers in the systems market can only buy A/B1, so firm 1 prices it at their MWP of v. Prices of pB1 = cB and pB2 = cB + e form an equilibrium in the standalone market for the same reason as without tying. The resulting profit for firm 1 is m(v − cA − cB) and for firm 2 is ne − k.

Assume that (m + n)e − k > 0 > ne − k or, equivalently, (m + n)e > k > ne, so that firm 2 earns positive profit without tying but negative profit with tying. Hence, if firm 1 engages in tying, then firm 2 exits (or, if firm 2 is contemplating entry, it chooses not to enter when it sees that product B1 is tied with A). In that case, firm 1 is left as a monopolist in both markets, in which case its profit-maximizing prices are pB1 = w and pA/B1 = v, with profit of m(v − cA − cB) + n(w − cB). Firm 1 earns higher profit by tying when
m(v − cA − cB) + n(w − cB) > m(v + d − e − cA − cB),

which simplifies to

n(w − cB) > m(d − e).  (7.7)
Equation 7.7 holds as long as the value to the standalone product, w, is sufficiently high and/or the size of the standalone market, n, is sufficiently large relative to the size of the systems market, m. Both conditions make sense, as the payoff to tying by firm 1 is that it becomes a monopolist in the standalone market, and the associated profit from doing so depends on how much consumers are willing to pay in that market as well as how big that market is. The cost to firm 1 of tying is that it does not make as much profit in the systems market since, with the elimination of the superior complement B2, systems consumers are not willing to pay as much for A. This incremental value of B2 to a system is measured by d, but firm 2 is able to extract e of that, so the per unit net effect on firm 1’s profit is d − e. In sum, we have derived conditions whereby the monopolist earns higher profit by tying products A and B1. What makes tying work is that it forecloses firm 2 from the systems market; those consumers no longer buy component B2 to use with component A. This choice leads to a reduction in firm 2’s variable profit to a level insufficient to cover its fixed cost. As firm 2 then exits, firm 1 is left with a monopoly in the systems market and also in the standalone market. If the latter market is sufficiently big and the profit margin, w − cB, is sufficiently large, then tying is profitable. We have shown that a monopolist may engage in tying to extend its monopoly, but is it anticompetitive? In addressing that question, note that, in this simple model, all consumers buy, whether or not there is tying.56 Since all consumers always buy, welfare equals the value of the goods produced less the cost of producing them. It is then lower with tying when
mv + nw − (m + n)cB − mcA < m(v + d) + n(w + e) − (m + n)cB − mcA − k,

or, equivalently,

md + ne > k.
The welfare-reducing effect of tying is that the superior product of firm 2 is not supplied. This reduces total surplus by md + ne, which is the reduction in value from consumers using B1 rather than B2.

Tying does have the beneficial effect of avoiding the fixed cost k incurred by firm 2. Tying then reduces welfare when md + ne > k.
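The argument can be checked with a few lines of arithmetic. The sketch below plugs illustrative parameter values into the model (the numbers m, n, v, d, w, e, cA, cB, and k below are assumptions chosen for illustration, not values from the text) and confirms that, for them, firm 2 covers its fixed cost only without tying, tying raises firm 1's profit because n(w − cB) > m(d − e), and welfare falls because md + ne > k.

```python
# Sketch: the tying-to-extend-monopoly model with illustrative (assumed) parameters.
m, n = 100, 400          # consumers in the systems and standalone markets
v, d = 50.0, 10.0        # systems-market values: A/B1 is worth v, A/B2 is worth v + d
w, e = 20.0, 4.0         # standalone values: B1 is worth w, B2 is worth w + e (note d > e)
cA, cB = 5.0, 2.0        # unit costs of A and of B1/B2
k = 1800.0               # firm 2's fixed entry cost, with (m + n)*e > k > n*e

# No tying: firm 1 sells A at v + d - cB - e and B1 at cB; firm 2 sells B2 at cB + e.
profit1_no_tie = m * (v + d - cB - e - cA)
profit2_no_tie = (m + n) * e - k

# Tying: firm 2's variable profit drops to n*e, below k, so it exits;
# firm 1 then monopolizes both markets.
profit2_if_tie = n * e - k
profit1_tie = m * (v - cA - cB) + n * (w - cB)

print(profit2_no_tie > 0 > profit2_if_tie)   # True: firm 2 is viable only absent tying
print(profit1_tie > profit1_no_tie)          # True: tying raises firm 1's profit
print(n * (w - cB) > m * (d - e))            # True: the same condition as equation 7.7

# Welfare: tying removes the superior component B2 but saves firm 2's fixed cost k.
print(m * d + n * e > k)                     # True: tying reduces total surplus here
```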
Tying That Protects a Firm's Primary Market

Essential to the above anticompetitive rationale for tying is the existence of a standalone market for the tied good. As we will now show, a standalone market is not necessary for a monopolist to find tying profitable when it is threatened with entry in its primary market as well. Tying can occur not to extend monopoly power but rather to protect existing market power.57 If there is no standalone market for B1 and B2 in the preceding model (n = 0), an equilibrium is of the form

pB1 = cB, pB2 = cB + λd, and pA = v + (1 − λ)d − cB,
and all consumers buy A/B2.58 The value of λ is between 0 and 1 and determines how revenue for a system is split between firms 1 and 2. System A/B2 is worth v + d to consumers, and they pay a price equal to that value, with v + (1 − λ)d − cB of the revenue going to firm 1 and cB + λd going to firm 2. To reduce the amount of notation, assume m = 1. The resulting profit is then v + (1 − λ)d − cB − cA for firm 1 and λd for firm 2. Recall that without the standalone market, tying is unprofitable. If firm 1 ties, then firm 2 is entirely foreclosed. Firm 1 sells A/B1 at a price of v and its profit is v − cB − cA, which is lower than v + (1 − λ)d − cB − cA. Tying is unprofitable, because firm 1 no longer shares in the additional surplus created by firm 2’s superior complement to firm 1’s product A. Allowing for multiple periods will be important for the ensuing argument, so let us assume that there are two periods (with no discounting). When there is no tying, the total profit to firm 2 from entry is then 2λd − k, which is assumed to be positive. Thus, in the absence of tying, entry by firm 2 occurs. Now we come to the critical change in the model. Firm 2 can offer not only a version of product B but also of product A. For clarity, we now refer to firm 1’s version of A as A1 and use A2 to denote firm 2’s version. For technological reasons, firm 2 cannot begin supplying A2 until the second period, though it can supply B2 in both periods. Thus, if firm 2 enters, it will provide a superior complement to firm 1’s product in period 1, but it could provide a superior system, by offering the bundle A2/B2, in period 2. Assume that consumers attach value v + f to A2/B2, where f ≥ d. (Thus, A2 does not need to be superior to A1.) Finally, the unit cost of producing A2 is also cA. Suppose there is no tying, and firm 2 enters. It supplies B2 in period 1 and earns profit of λd in that period. Come period 2, each firm can supply a system, so we can think about them competing in terms of systems. Firm 1 offers A1/B1, to which consumers attach value v, and firm 2 offers A2/B2, with higher value v + f. The equilibrium then has firm 1 pricing at cost and firm 2 charging a premium equal to the additional surplus that its system provides: pA1/B1 = cA + cB and pA2/B2 = cA + cB + f. Hence, in period 2, firm 1’s profit is zero and firm 2’s profit is f, so its total profit from entry is λd + f − k, which is positive (since it was already assumed that 2λd − k > 0 and f ≥ d ≥ λd). Now consider tying in this setting. Tying prevents firm 2 from selling B2 in period 1. Its profit from entry is then f − k, as it dominates the systems market in period 2. If we assume f − k < 0, then tying deters entry by firm 2, as it cannot make enough money in period 2 to cover its entry cost. With firm 2 absent from the market, firm 1 prices at v in each period, so its total profit is 2(v − cA − cB). Tying is then profitable if
2(v − cA − cB) > v + (1 − λ)d − cA − cB

or

v − cA − cB > (1 − λ)d,
which holds if λ is close enough to 1, which means that, in the absence of tying, firm 1 gets a sufficiently small share of the additional surplus from firm 2 supplying B2. As with the first theory of anticompetitive tying, tying is profitable for an incumbent firm, because, by foreclosing part of an entrant's market, it makes entry unprofitable. Here tying forecloses the market for the entrant's complementary product in the first period, while in the previous model, it foreclosed the systems market. A more important distinction is that the objective of tying is not to extend monopoly power into another market but rather to protect the incumbent firm's future monopoly position in market A. Finally, note that the change in welfare from tying is k − f − d, as the value of the system being supplied is reduced by d in period 1 and f in period 2. If f + d > k, then tying deters entry and reduces welfare.

Instrumental in tying as an entry deterrence device is that an entrant has a limited number of periods in which it can earn profit to cover its entry cost. Such a condition may be relevant in highly innovative markets, where a new product has a relatively short technological life span as newer products come along to replace it. Tying that reduces the lifetime of an entrant's product, where the lifetime is already short, may indeed serve to deter entry. In chapter 9 we will see how this tying argument pertains to the Microsoft case.
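As a sanity check on the two-period argument, the sketch below assigns illustrative parameter values (again, assumptions rather than values from the text) satisfying 2λd − k > 0, f − k < 0, and f ≥ d, and confirms that tying deters entry, that tying is profitable because v − cA − cB exceeds (1 − λ)d, and that welfare falls because f + d > k.

```python
# Sketch: tying to protect the primary market (two periods, m = 1, no standalone market).
# Parameter values are illustrative assumptions, not taken from the text.
lam = 0.9                  # firm 2's share of the incremental surplus d in period 1
v, d, f = 40.0, 8.0, 8.0   # A/B1 is worth v, A/B2 is worth v + d, A2/B2 is worth v + f
cA, cB = 6.0, 3.0          # unit costs of an A component and a B component
k = 10.0                   # firm 2's entry cost, chosen so 2*lam*d > k > f

# No tying: firm 2 enters, earning lam*d in period 1 and f in period 2;
# firm 1 earns v + (1 - lam)*d - cA - cB in period 1 and zero in period 2.
profit2_entry_no_tie = lam * d + f - k
profit1_no_tie = v + (1 - lam) * d - cA - cB

# Tying: firm 2 could earn only f - k < 0, so it stays out;
# firm 1 sells A/B1 at price v in both periods.
profit2_entry_tie = f - k
profit1_tie = 2 * (v - cA - cB)

print(profit2_entry_no_tie > 0 > profit2_entry_tie)   # True: tying deters entry
print(profit1_tie > profit1_no_tie)                    # True: v - cA - cB > (1 - lam)*d
print(f + d > k)                                       # True: deterred entry lowers welfare
```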
Antitrust law and policy

The Supreme Court originally took a harsh view on tying:

This Court has recognized that "tying agreements serve hardly any purpose beyond the suppression of competition." They are an object of antitrust concern for two reasons—they may force buyers into giving up the purchase of substitutes for the tied product, and they destroy the free access of competing suppliers of the tied product.59
An early and highly influential decision was International Salt (1947).60 The International Salt Company had a patent over salt-dispensing machines used in food processing. The company required all users of the machines to buy their salt from the company as well. They argued that only their salt was of sufficient quality to function properly in their machines, and the tie-in was necessary to preserve goodwill. The Supreme Court, however, disagreed: If others cannot produce salt equal to reasonable specifications for machine use, it is one thing; but it is admitted that, at times, at least, competitors do offer such a product. They are, however, shut out of the market by a provision that limits it, not in terms of quality, but in terms of a particular vendor. Rules for use of leased machinery must not be disguised restraints of free competition.

In the Northern Pacific (1958) case, the Court spelled out a "modified" per se rule for evaluating tying arrangements.61 The case concerned a railroad selling land along its right of way on the condition that the buyer ship over the railroad's line. A difference between this case and International Salt is that the market power in the salt dispenser case was due to a patent. Here, the tying product was land, and the railroad was found to have sufficient market power for the tie-in sale to be held illegal:

Tying arrangements deny competitors free access to the market for the tied product.… At the same time buyers are forced to forgo their free choice between competing products. For these reasons "tying agreements fare harshly under the laws forbidding restraints of trade." They are unreasonable in and of themselves whenever a party has sufficient economic power with respect to the tying product to appreciably restrain free competition in the market for the tied product and a "not insubstantial" amount of interstate commerce is affected.

The doctrine for judging a tying claim can be described as follows. First, there must be separate tying and tied products. Second, the seller must condition its sale of the tying product on the buyer's purchase of the tied product. Third, the seller must have market power in the tying product. And fourth, the anticompetitive effects must be "not insubstantial" in the tied market. This doctrine reflects a modified per se rule, because it is not sufficient to prove conduct in the form of tying goods. There must be evidence of significant market power and some effect.

An interesting and important case decided in 1984 by the Supreme Court involved the tying of anesthesiological services to hospital services.62 The case was a private lawsuit in which Dr. Edwin Hyde, an anesthesiologist, charged the East Jefferson Hospital in the New Orleans area with tying the services of a firm of anesthesiologists, Roux & Associates, to its hospital services. As a result of the tie-in, Dr. Hyde was denied permission to practice at the hospital. The hospital had signed an exclusive contract with Roux & Associates for all of its anesthesiological services. An institutional fact was that every patient who had surgery there paid Roux & Associates directly for their services. This is relevant, because anticompetitive tying is supposed to involve the monopolist of the tying product (East Jefferson Hospital) increasing its profits through the tie of a second product (the services provided by Roux), over which it does not initially have a monopoly. As anticompetitive tying "is not likely to involve the channeling of that profit into someone else's pocket,"63 the application of the standard leverage theory of tying to this case becomes problematic.

The Court unanimously found for the defendant, though there was a difference of opinion among the justices as to the reason. The majority opinion of five justices held that no basis existed for a finding that the hospital had market power. Recall that the modified per se rule for tying requires that there be market power over the tying good. The other four justices issued a concurring opinion but disagreed on the market power issue. They found "no sound economic reason for treating surgery and anesthesia as separate services," and therefore "the Hospital can acquire no additional market power by selling the two services together." Of particular note is that the four justices also indicated their desire to drop the per se rule approach and move to a rule of reason for tying. Given the theoretically diverse welfare outcomes for tying, this change would seem to be a good idea.

Nevertheless, a 1992 tying case involving Eastman Kodak left the per se approach intact.64 The tying good was parts for Kodak photocopiers, and the tied good was repair services. Tying served to exclude independent service companies from repairing Kodak photocopiers, and they brought the suit against Kodak. Though the market for photocopiers was agreed by all to be unconcentrated, the majority of justices held that it was possible for Kodak to have a monopoly in parts. We will examine this controversial decision in chapter 8. Still, some progress has been made in moving away from a per se rule.
Going back to International Salt (1947), the presumption had been that the firm had market power if the tying good was patented and that "it is unreasonable, per se, to foreclose competitors from any substantial market." This presumption was maintained in spite of International Salt never having more than 4 percent of the salt market. This per se perspective was reversed in 2006, when the Supreme Court ruled that tying a good to a patented good is not a per se offense.65 It must be established that the firm has market power in the market for the patented product.

Manufacturer-Retailer Restraints

While the previously discussed vertical restraints are applicable to many upstream-downstream

relationships, here we turn to two practices that are especially, if not exclusively, relevant to manufacturers (or wholesalers) and retailers.

Resale price maintenance

As we have mentioned, resale price maintenance (RPM) can either be a requirement by the supplier that the dealer not sell below a minimum price or a requirement that the dealer not sell above a maximum price. It is possible to describe certain situations where the supplier would prefer the minimum resale price, and others where the supplier would prefer the maximum resale price.

The simplest explanation is perhaps that pertaining to the desire of a supplier to require maximum resale prices. To understand this case, refer to the discussion earlier in this chapter on double marginalization. There we discovered that vertical integration could lead to a lower final price and higher combined profits of successive monopolists. If the supplier and dealer both have market power, it is clear that the ability of the supplier to limit the dealer's price will increase its own profitability. Incidentally, this will also improve society's welfare, given that the supplier's monopoly cannot be eliminated.

The explanation for the opposite type of RPM—setting minimum resale prices—is more complex. After all, it seems counterintuitive to think that a manufacturer would want to constrain competition among retailers, as that would only seem to lower sales. However, both procompetitive and anticompetitive reasons exist for minimum RPM. Let us begin with the former. Minimum RPM could be attractive to a supplier when it wants to ensure the provision of certain presale informational services that are necessary for effective marketing. Consider a personal computer. Before buying an Apple computer, the consumer would like to learn as much about it as possible. A retail computer store that sells Apples is ideal—the consumer can consult with technically trained salespersons and try out the computer. However, the consumer might then decide to purchase the Apple through a mail-order outlet that has lower prices for Apples because of its lower cost from not having to provide floor space for demonstrations and technically trained salespersons. In other words, the mail-order outlets are free-riding on the retail computer stores. The concern of the supplier is that the mail-order outlets may make it unprofitable for the retail stores to continue to provide the informational services that are necessary for Apple to compete against other computer suppliers. This is a case where setting minimum resale prices may be sensible from the point of view of the supplier and can be procompetitive.66

A graphical explanation may be helpful. In figure 7.10 the marginal cost MC of a retailer is assumed to equal the price charged by the manufacturer (for simplicity, the retailer's only cost is from buying the manufacturer's product). Vigorous competition from other retailers brings price down to MC. Assume now that the manufacturer sets a minimum resale price at p1, while continuing to sell to the retailer at MC. This decision might seem irrational, because the demand D would imply that quantity sold would fall to q1. Hence, the manufacturer would necessarily make less money. The point, however, is that the retail margin p1 − MC enforced by the manufacturer is expected to lead to promotional activities by the retailer, which shifts the demand to D′. The net effect is shown in figure 7.10 as an increase in quantity to q′.
While the particular case shown in figure 7.10 is one possibility, other cases are also possible, as the demand shift can be of various magnitudes. A detailed analysis of various cases suggests that RPM can be either efficiency increasing or decreasing, depending on the magnitude of the assumed demand shift.

Figure 7.10 An Explanation for RPM: Shifting Out Demand
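To make the logic of figure 7.10 concrete, the sketch below runs one hypothetical parameterization (the demand curve, wholesale price, price floor, and demand shifts are all assumed for illustration and are not taken from the chapter). With the wholesale margin per unit fixed, the manufacturer's profit moves with quantity sold, so the comparison turns entirely on how far promotion shifts demand.

```python
# Sketch: minimum RPM and demand-shifting promotion (illustrative numbers only).
MC = 10.0          # wholesale price = retailer's marginal cost
p_min = 14.0       # minimum resale price set by the manufacturer

def quantity(p, shift):
    """Assumed linear retail demand q = 100 - 2p, shifted outward by 'shift' units."""
    return max(0.0, 100.0 - 2.0 * p + shift)

# Without RPM, retail competition drives price to MC and no promotion is provided.
q_competitive = quantity(MC, shift=0.0)            # 80 units

# With RPM, the retail margin (p_min - MC) funds promotion that shifts demand.
for shift in (5.0, 15.0):                          # a small and a large demand shift
    q_rpm = quantity(p_min, shift)
    print(f"shift={shift:>4}: quantity {q_competitive:.0f} -> {q_rpm:.0f}")
# shift= 5.0: quantity 80 -> 77   (RPM lowers output: price rises, promotion gains little)
# shift=15.0: quantity 80 -> 87   (RPM raises output, the case drawn in figure 7.10)
```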

RPM can result in anticompetitive effects as well. It is conceivable that either a cartel of dealers or a cartel of suppliers might be fostered through RPM. For example, the dealers might get together and insist that the supplier require minimum resale prices for all dealers. This request would be very helpful in making a cartel among retailers work. Of course, it is not immediately clear that it would be in the supplier’s best interest. The product would have to be one that did not face substantial interbrand competition. For example, if Apple dealers could raise their prices, it might be quite profitable if they did not have to reckon with Acer, Hewlett-Packard, Lenovo, and the like. Minimum RPM could also help manufacturers collude.67 Suppose manufacturers desire to collude on wholesale prices, but monitoring for compliance is difficult, because wholesale prices are not publicly observed. In particular, a manufacturer could cheat by providing a secret discount to retailers, who would then lower their retail prices and sell more of that manufacturer’s products. A more effective collusive scheme could be for manufacturers to agree to retail prices through minimum RPM. In that case, there would be no incentive to cheat on the wholesale price, because it could not deliver higher demand due to retail prices being constrained by minimum RPM. Another anticompetitive rationale for minimum RPM is that it could deter entry.68 Consider possible entry into the upstream market, which would reduce total industry (upstream and downstream) profits; hence, the

expansion in retail profits from more product variety is more than offset by the intensification of upstream competition. Existing manufacturers would like to induce retailers to not take on the potential entrant's products. That action makes the manufacturers better off—as an entrant into their market is prevented—but what about the retailers? Minimum RPM is one way to transfer some of those rents to retailers. The higher retail prices raise retailers' profits, while manufacturers are better off compared to when there is entry.

Antitrust law and policy—minimum RPM

In 1911, minimum RPM was made a per se violation of Section 1 of the Sherman Act, despite the possible procompetitive arguments that we have presented (though these arguments were not appreciated at the time). The case was Dr. Miles Medical Co. v. John D. Park & Sons.69 Dr. Miles, a manufacturer of proprietary medicines, had established a set of minimum resale prices that applied throughout its distribution chain. John D. Park, a drug wholesaler, refused to enter into the restrictive agreements and instead was able to buy Dr. Miles's products from other wholesalers at discounted prices. Dr. Miles brought a suit against John D. Park for interfering with the contracts between Dr. Miles and the other wholesalers. The Supreme Court held that the contracts were illegal, observing that "agreements or combinations between dealers, having for their sole purpose the destruction of competition and the fixing of prices, are injurious to the public interest and void."

It should be pointed out that a conspiracy must exist between the manufacturer and dealer to fix the prices; it is not illegal for a manufacturer unilaterally to set resale prices and refuse to deal with retailers who do not comply. Two cases established the standards necessary to infer such a conspiracy.70 The first case, Monsanto v. Spray-Rite Service, involved a Monsanto herbicide dealer selling at discount prices. Evidence showed that other dealers had complained to Monsanto, and Monsanto subsequently terminated the dealer. The Court said that evidence of complaints was not sufficient unless additional evidence tended to exclude the possibility of independent action by Monsanto.

The second case, a Supreme Court decision in 1988, also supports the view that the conditions for RPM to be found illegal per se are quite restrictive. In Business Electronics v. Sharp, two retailers of electronic calculators, Business Electronics and Hartwell, and the manufacturer, Sharp, were involved in a dispute in the Houston area. Hartwell complained to Sharp about Business Electronics' low prices—below Sharp's suggested retail prices—and in June 1973 gave Sharp an ultimatum that Hartwell would terminate its dealership unless Sharp ended its relationship with Business Electronics. Sharp then terminated Business Electronics' dealership in July 1973, in response to which Business Electronics sued, claiming a conspiracy that was illegal per se. Business Electronics won at the District Court level, but the decision was reversed by the Court of Appeals. The Supreme Court affirmed the Circuit Court's decision, which found that to render illegal per se a vertical agreement between a manufacturer and a dealer to terminate a second dealer, the first dealer must expressly or impliedly agree to set its prices at some level, though not a specific one. The distributor cannot retain complete freedom to set whatever price it chooses.71

Leegin

After nearly a century, the Supreme Court reversed itself and made minimum RPM subject to the rule of reason in 2007.72 The case involved Leegin, which manufactured and distributed leather goods, and PSKS, which operated a women's apparel store and carried Leegin's Brighton line of products. Starting in 1997, Leegin instituted a new pricing policy whereby it refused to sell to retailers that priced below Leegin's recommended retail prices. Its rationale was consistent with the procompetitive benefits for minimum RPM

described above: "We realize that half the equation is Leegin producing great Brighton products and the other half is you, our retailer, creating great looking stores selling our products in a quality manner." The Court interpreted the intent of this policy as being to "give [Leegin's] retailers sufficient margins to provide customers the service central to its distribution strategy." On learning that PSKS was offering a 20 percent discount, Leegin stopped selling to it. PSKS then sued on the grounds that Leegin was imposing minimum RPM, which was per se illegal. The District Court agreed and awarded, after trebling, damages of $4 million to PSKS. Leegin appealed on the grounds that the rule of reason should have been applied, but the Fifth Circuit Court affirmed the original decision.

The Supreme Court took on Leegin's appeal "to determine whether vertical minimum resale price maintenance agreements should continue to be treated as per se unlawful." The Court noted that the "resort to per se rules is confined to restraints … 'that would always or almost always tend to restrict competition and decrease output.'" Then the Court noted that

the economics literature is replete with procompetitive justifications for a manufacturer's use of resale price maintenance … [and] although the empirical evidence on the topic is limited, it does not suggest efficient uses of the agreement are infrequent or hypothetical.

In deciding that minimum RPM would be judged by the rule of reason, the Court made its treatment more consistent with how other vertical restraints are evaluated.

Post-Leegin

In Leegin, the Supreme Court emphasized that "the potential anticompetitive consequences of vertical price restraints must not be ignored or underestimated" and left it to the lower courts to "establish the litigation structure to ensure that the rule [of reason] operates to eliminate anticompetitive restraints from the market and to provide more guidance to businesses." As of 2016, the DOJ had not pursued a single case of RPM. However, there is ongoing private litigation in the market for contact lenses that may begin to define the application of the rule of reason to minimum RPM.73 Between June 2013 and September 2014, the four primary manufacturers of contact lenses—Alcon, Bausch & Lomb, CooperVision, and Johnson & Johnson—instituted a policy of minimum RPM for products that make up about 40 percent of sales. The usual procompetitive justification to avoid free-riding on service would not seem to apply here. That service is provided by eyecare providers who prescribe contact lenses and are paid for their services. As they also sell contact lenses, they would benefit from minimum RPM, as their customers would be unable to buy at a lower price from discounters.

In Leegin, the Court identified three factors that heighten concerns about anticompetitive effects: (1) when several manufacturers adopt minimum RPM; (2) when retailers, rather than manufacturers, propose minimum RPM; and (3) when manufacturers or retailers have market power. In the contact lenses RPM case, the American Antitrust Institute argued that (1) is clearly satisfied; (2) may be satisfied; and (3) is likely to be satisfied, given that the four manufacturers have 97 percent of the market.

Antitrust Law and Policy—Maximum RPM

One other form of RPM has drawn antitrust scrutiny: when an upstream supplier limits the maximum price that a retailer can charge. In spite of the lack of compelling arguments at the time that maximum RPM could be anticompetitive, the Supreme Court ruled in 1968 in Albrecht v. Herald Co.74 that it was per se illegal! The Supreme Court changed its view in 1997 with State Oil Company v. Khan, concluding that maximum RPM is neither per se lawful nor unlawful, and that such practices should be evaluated under the rule of reason.75

Though some economists thought that per se legality of maximum RPM was instead appropriate, there is at least one argument by which the welfare effects of maximum RPM are ambiguous.76 Consider the situation described above, in which retailers free-ride in terms of some nonprice variables that augment demand. Obviously, the manufacturer would like to encourage more of that activity. One solution to this problem, which we will investigate next, is to give each retailer its own geographic monopoly. In that case, a retailer is assured of reaping the additional sales from its spending on promotion and service. But recall from our discussion of double marginalization that a downstream monopolist tends to charge too high a price from the perspective of the manufacturer. While the manufacturer likes marking up its price, it does not like the retailer doing it as well! Thus the creation of local monopolies to prevent free-riding has spawned double marginalization. A solution to this problem is maximum RPM. Given that territorial restrictions already exist, the imposition of maximum RPM has two counteracting welfare effects: It results in a lower price but also less service (as the retailers now receive a lower price than when there was no maximum RPM). The net effect on consumers is ambiguous, and therein lies a rationale for using the rule of reason.77
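To make the double-marginalization logic behind maximum RPM concrete, here is a minimal sketch with assumed numbers (a linear final demand and a constant production cost, neither taken from the chapter): stacking a retail markup on top of the wholesale markup raises the final price and cuts output, while capping the resale price at the integrated level undoes the second markup.

```python
# Sketch: double marginalization and a maximum resale price (illustrative numbers).
# Assumed final demand q = 100 - p; assumed manufacturer production cost of 10.
A, C = 100.0, 10.0

def monopoly_price(intercept, marginal_cost):
    """Profit-maximizing price against linear demand q = intercept - p."""
    return (intercept + marginal_cost) / 2.0

# Successive monopolies: the retailer's derived demand facing the manufacturer,
# q = (A - w)/2, has the same choke price A, so the same pricing formula applies.
w = monopoly_price(A, C)                   # wholesale price of 55
p_double = monopoly_price(A, w)            # retail price of 77.5: two stacked markups

# Vertical integration, or a maximum resale price set at the integrated level,
# removes the second markup.
p_integrated = monopoly_price(A, C)        # 55

print(p_double, p_integrated)              # 77.5 55.0
print(A - p_double, A - p_integrated)      # output of 22.5 versus 45.0 units
```

In the richer setting described above, the price cap would also reduce the retailer's incentive to provide service, which is why the net welfare effect of maximum RPM is ambiguous.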
Territorial restraints

Territorial restraints can be closely related to RPM. Earlier we discussed a hypothetical case of Apple Computer and how it might find it profitable to mandate minimum retail prices. Minimum RPM would induce retailers to provide presale informational services to consumers and avoid the free-rider problem. Allocating exclusive marketing territories to its dealers can operate in much the same way, as each dealer would reap the benefits from presale services in its territory. There is also a potential social benefit, because distribution costs might be lower by enabling each dealer to obtain scale economies by virtue of being the exclusive supplier for its territory. The potentially anticompetitive effects of territorial restraints are also similar to RPM—the fostering of collusive behavior among dealers or manufacturers.

An interesting case of territorial restraints occurred in the soft drink industry. The major soft drink syrup manufacturers—Coke, Pepsi, Dr. Pepper, and so on—allocated exclusive territories to their bottlers. A 1973 study by the FTC argued that the reduced intrabrand competition had been costly to consumers, because concentration at the syrup level was high. That is, Coke buyers could not benefit from competition among Coke bottlers, but only from competition among Coke, Pepsi, Dr. Pepper, and so on. The FTC was prevented from pursuing the case, however, when Congress passed legislation that specifically exempted the soft-drink industry's territorial restrictions.78

While it was only in 2007 that minimum RPM lost its per se illegality, territorial restraints have been under the rule of reason for quite some time.79 The key case is Sylvania (1977), which was decided by the Supreme Court.80 GTE Sylvania was a manufacturer of television sets and had less than 2 percent of the national market. In 1962, Sylvania began a new marketing plan. It phased out its wholesale distributors and began to sell its television sets directly to a smaller and more select group of franchised retailers. The objective was to decrease the number of competing Sylvania retailers in the hope of attracting a smaller group of more aggressive and competent retailers who could increase Sylvania's market share. To accomplish this end, Sylvania limited the number of franchises granted for any given region and required each dealer to sell only from the location at which the dealer was franchised. Interestingly, Sylvania retained the discretion to increase the number of retailers in a region, depending on the success of the retailers in developing the market. In 1965, a Sylvania dealer in San Francisco, Continental T.V., wanted to open a store in Sacramento but

was prohibited from doing so by Sylvania; Sylvania was doing exceptionally well in Sacramento and did not believe another dealer would be beneficial. As a result, Continental filed suit against Sylvania under Section 1 of the Sherman Act. The Supreme Court decided in favor of Sylvania and in so doing made it clear that the case should be decided on a rule of reason basis: Vertical restrictions promote interbrand competition by allowing the manufacturer to achieve certain efficiencies in the distribution of his products. For example, new manufacturers and manufacturers entering new markets can use the restrictions in order to induce competent and aggressive retailers to make the kind of investment of capital and labor that is often required in the distribution of products unknown to the consumer. Established manufacturers can use them to induce retailers to engage in promotional activities or to provide service and repair facilities necessary to the efficient marketing of their products. Certainly there has been no showing that vertical restrictions have or are likely to have a “pernicious effect on competition” or that they “lack any redeeming virtue.”81

Summary

This chapter has examined the benefits and costs of vertical integration and vertical restraints, with the aim of identifying when such activities are anticompetitive and should be prohibited. Vertical integration can increase the profit of the merging parties, and also welfare more generally, through enhanced productive efficiency and reduced transaction costs by replacing market transactions with internal ones. Vertical integration can also make firms and consumers both better off by reducing the double markup that occurs when an input supplier with market power charges a price above cost and then the purchaser of the input marks it up again when it sets the final product price to exceed cost. Offsetting these benefits are the potentially anticompetitive effects of vertical integration. When both upstream and downstream markets are oligopolistic, a vertical merger can raise the cost faced by the remaining unintegrated downstream firms. This cost increase can make the merger profitable and result in consumers being worse off due to a higher final product price. As is appropriate, the rule of reason should be used to trade off these benefits and costs when deciding whether a vertical merger is to be allowed. The past fifty years have seen a tremendous change in antitrust policy, as it has evolved from discouraging many vertical mergers in the 1960s to challenging few of them by the 1980s. Currently the sentiment is that vertical integration is not a serious antitrust concern except when there is (or will be) a high level of market power in one or both markets.

We then considered contractual arrangements, which can have many of the same effects as vertical integration. Exclusive dealing can foreclose a market to competitors by limiting a firm's demand to one supplier, while tying achieves foreclosure by a firm using its market power in one market to require consumers to buy another product from it. Recent theories have characterized circumstances under which these vertical restraints are profitable and welfare-reducing. Economics has shown that vertical restraints—whether exclusive dealing, tying, resale price maintenance, or territorial restraints—can be either procompetitive or anticompetitive. Thus it is appropriate to evaluate them using the rule of reason, which is now largely the case. Of course, the absence of per se illegality also provides considerable discretion to the FTC and DOJ regarding when they bring a case. The reality is that the number of vertical restraints cases has significantly lessened over time and has also varied with the party of the presidential administration.

Questions and Problems

1. Consider a business firm that has an organization that you understand well. Try to explain why the firm buys certain inputs and makes the remainder (in the context of minimizing transaction costs). 2. Assume that firm M (the manufacturer) sells an input (a lawn mower) to firm R (the retailer). Now R sells the lawn mower to the public, incurring a constant cost of $5 per lawn mower for its services. Fixed-proportions production holds for R. Let X represent the number of lawn mowers. Assume also that both M and R are monopolists, and PL is the lawn mower price charged to the public with the demand X = 100 − PL. a. Find the derived demand for lawn mowers facing M. Hint: Find the marginal revenue equals marginal cost condition for R, where R’s marginal cost is the sum of the $5 services cost and the price Px that it must pay M per lawn mower. Solving for X gives the derived demand. b. If M’s total cost function is 10 + 5X + X2, find the equilibrium prices and quantity, PL, PX, and X, and the profits of the two firms. c. Assume now that M and R form a single vertically integrated firm, M-R. Find the equilibrium values of PL and X and the profit of M-R. d. Compare the unintegrated case in part b with the integrated case in part c. Is it true that both the firms and the public would prefer the case in part c? Explain. 3. Assume the same facts as in problem 2, except that monopolist R is now replaced with competitive industry R. a. Find the derived demand for lawn mowers facing the manufacturing monopolist M. Hint: Make use of the fact that perfect competition equilibrium is defined by demand equals supply, where supply is simply a horizontal line, PL = 5 + PX. Solving for X gives the derived demand. b. Find the equilibrium prices and quantity, PL, PX, and X, and the profit of M. c. Assume that M vertically integrates forward into the competitive industry R, thereby extending its monopoly to cover both manufacturing and retailing. Find the equilibrium values of PL and X and the profit of the combined firm M-R. d. Compare the unintegrated case in part b with the integrated case in part c. Is it profitable to monopolize forward? What is the intuitive explanation for your result? 4. Assume a situation where a monopolist of input M sells to a competitive industry Z, and the competitive industry Z has a production function characterized by variable proportions. A second competitive industry sells its output L to the competitive industry Z, and Z combines M and L according to the production function Z = L0.5M0.5. The price of L and its marginal cost are both $1. The demand for the product of industry Z is Z = 20 − PZ. It can be shown that the monopolist will charge $26.90 for M to maximize its profit, given that its marginal cost of M is $1. (This can be found by first obtaining the derived demand facing the monopolist using the price equal marginal cost condition in industry Z, and also using the condition for least-cost production by industry Z.) The competitive industry Z will have a constant marginal cost of $10.37 and sell 9.63 units at a price of $10.37. a. Calculate the competitive industry Z’s actual combination of L and M that it will use to produce the 9.63 units. Find the true economic cost to society of these inputs (not Z’s actual payments to its suppliers; its payment to the monopolist includes a monopoly margin). 
Hint: The optimal input mix can be found by the simultaneous solution of two equations: the equality of the marginal product per dollar of the inputs and the production function equated to 9.63 units. b. Assume that the monopolist decides to vertically integrate forward into the competitive industry Z, thereby extending its monopoly to cover industry Z. What will be the least-cost combination of L and M and its true economic cost in producing the 9.63 units? Hint: The vertically integrated firm will “charge” itself the marginal cost for M when determining its input mix. c. What is the cost saving that the vertically integrated monopolist will obtain if it produces 9.63 units? That is, what is the saving compared to the cost found in part a? d. What makes this vertical integration profitable? Is it in society’s interest if the monopolist holds its output fixed at 9.63 units after vertical integration? e. In fact, after the vertical monopolization of Z, the firm M-Z would have a constant marginal cost of $2. Given this fact, what is the profit-maximizing price Pz and output Z? Draw a figure to illustrate the overall social benefits and costs of this vertical integration.

5. Assume that the maximum values to theater owners for movies A and B are as follows. The Fox Theater values A at $100 and B at $70. The York Theater values A at $60 and B at $50. Is block booking more profitable than charging a single price for each movie? Explain. 6. For the model of tying, assume there are instead two firms that offer product B2 at cost cB. Derive the conditions for tying to be profitable. 7. Consider a problem faced by Kamera Company. They have developed a patented new camera that can be produced at a constant unit cost of $1. The film is available competitively at a zero price. Consumers derive utility only from the combined services of the camera and film—which can be measured by packs of film consumed per period. Assume that there are two consumers with inverse demands: p1 = 8 − 4q1/3 and p2 = 12 − 3q2/2. a. Consumers will purchase only one camera at most; hence, consumer surplus can be viewed as measuring what they would be willing to pay for a camera. If Kamera must charge both customers the same price for a camera, what is the price it will charge, and what is its profit? b. Assume now that Kamera decides to tie film to its camera. It requires customers to purchase film from Kamera if they wish to buy a camera. Kamera simply buys film on the market at the zero price and resells it. What are the prices of camera and film that Kamera will charge, and what is its profit? Is tying profitable? c. Compare total economic surplus in part a with that in part b. d. If customer 2 has a different demand curve, say, p2 = 10 −5q2/4, it reverses the result in part c. What is the intuitive reason for this reversal?

Notes 1. Ronald Coase, “The Nature of the Firm,” Economica 4 (November 1937): 386–405. 2. A review of the efficiency benefits and anticompetitive costs of vertical mergers is provided in Michael H. Riordan and Steven C. Salop, “Evaluating Vertical Mergers: A Post-Chicago Approach,” Antitrust Law Journal 63 (1995): 513–568. 3. Paul Milgrom and John Roberts, Economics, Organization and Management (Englewood Cliffs, NJ: Prentice-Hall, 1992), p. 25. 4. This example is from Adam M. Brandenburger and Barry J. Nalebuff, Co-opetition (New York: Currency Doubleday, 1996). 5. This pricing scheme is known as a two-part tariff. 6. Brown Shoe Company v. United States, 370 U.S. 294 (1962). 7. The competitive industry is in equilibrium when Pa = C + Pb, that is, when the price of automobiles P equals marginal cost (which is just the sum of the fixed conversion cost C and the price of batteries Pb). So, rewriting this condition as Pa − C = Pb, we have the derived inverse demand curve, Pa − C. 8. For further results, see Fred M. Westfield, “Vertical Integration: Does Product Price Rise or Fall?” American Economic Review 71 (June 1981): 334–346. The analysis here is based on John M. Vernon and Daniel Graham, “Profitability of Monopolization by Vertical Integration,” Journal of Political Economy 79 (July/August 1971): 924–925. 9. This analysis is based on Oliver Hart and Jean Tirole, “Vertical Integration and Market Foreclosure,” Brookings Papers on Economic Activity: Microeconomics (1990): 205–285. 10. See Steven C. Salop and David T. Scheffman, “Raising Rivals’ Costs,” American Economic Review 73 (May 1983): 267–271. For a more general discussion, see Thomas G. Krattenmaker and Steven C. Salop, “Anti-competitive Exclusion: Raising Rivals’ Costs to Achieve Power over Price,” Yale Law Journal 96 (December 1986): 209–293. 11. These prices can be derived in the following manner. Starting with firm 1, find the price that maximizes its profit as expressed in equation 7.3. This is found by equating marginal profit to zero:

and then solving it for the firm’s price, p1 = 57.5 + 0.25p2 + 0.5w1. This is firm D1’s best reply function, as it prescribes the profit-maximizing price given its rival’s price (and the input price). In the same manner, one can do this for the other firm, p2 = 57.5 + 0.25p1 + 0.5 w2. We then need to find prices that satisfy these two equations simultaneously:
p1 = 57.5 + 0.25p2 + 0.5w1 and p2 = 57.5 + 0.25p1 + 0.5w2.
Substituting the second equation into the first,
p1 = 57.5 + 0.25(57.5 + 0.25p1 + 0.5w2) + 0.5w1,
and solving for p1 gives us equation 7.5. The same procedure will produce equation 7.6. 12. To derive this price, one must first derive the demand for the input supplied by U2. Substituting the downstream equilibrium prices into equation 7.2, one derives U2’s demand curve:

U2 then chooses its input price to maximize

Taking the first derivative, setting it equal to zero, and solving for the input price yields a price of 72.45. 13. The downstream prices are now 91.63 and 116.69 for the integrated firm and the unintegrated firm, respectively. 14. See Michael A. Salinger, “Vertical Mergers and Market Foreclosure,” Quarterly Journal of Economics 77 (May 1988): 345–366. 15. Yongmin Chen, “On Vertical Mergers and Their Competitive Effects,” RAND Journal of Economics 32 (Winter 2001): 667–685. 16. Janusz A. Ordover, Garth Saloner, and Steven C. Salop, “Equilibrium Vertical Foreclosure,” American Economic Review 80 (March 1990): 127–142. For a debate about this theory, see David Reiffen, “Equilibrium Vertical Foreclosure: Comment,” American Economic Review 82 (June 1992): 694–697; and Janusz A. Ordover, Garth Saloner, and Steven C. Salop, “Equilibrium Vertical Foreclosure: Reply,” American Economic Review 82 (June 1992): 698–703. 17. Michael H. Riordan, “Anti-competitive Vertical Integration by a Dominant Firm,” American Economic Review 88 (December 1998): 1232–1248. 18. Donald O. Parsons and Edward J. Ray, “The United States Steel Consolidation: The Creation of Market Control,” Journal of Law and Economics 18 (April 1975): 181–219. 19. Brown Shoe Co. v. United States, 370 U.S. 294 (1962). 20. John E. Kwoka Jr., “Rockonomics: The Ticketmaster–Live Nation Merger and the Rock Concert Business,” in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution, 6th ed. (New York: Oxford University Press, 2014), pp. 62–91. 21. Robert Pitofsky, “Vertical Restraints and Vertical Aspects of Mergers—A U.S. Perspective,” speech at the 24th Annual Conference on International Law and Policy, Fordham Corporate Law Institute. New York, October 16–17, 1997. 22. Tasneem Chipty, “Vertical Integration, Market Foreclosure, and Consumer Welfare in the Cable Television Industry,” American Economic Review 91 (June 2001): 428–453. The study uses data for 1991. 23. Ayako Suzuki, “Market Foreclosure and Vertical Merger: A Case Study of the Vertical Merger between Turner Broadcasting and Time Warner,” International Journal of Industrial Organization 27 (July 2009): 532–543. 24. The ensuing discussion is based on Massimo Motta, Competition Policy: Theory and Practice (Cambridge: Cambridge University Press, 2004); and Xavier Vives and Gianandrea Staffiero, “Horizontal, Vertical and Conglomerate Effects: The GE-Honeywell Merger in the EU,” in Bruce Lyons, ed., Cases in European Competition Policy: The Economic Analysis (Cambridge: Cambridge University Press, 2009), pp. 434–464.

25. General Electric/Honeywell–Case No COMP/M.2220, European Commission, March 7, 2001, p. 100. 26. The ensuing discussion is based on William P. Rogerson, “A Vertical Merger in the Video Programming and Distribution Industry: Comcast-NBCU (2011),” in Kwoka and White, eds., The Antitrust Revolution, 6th ed., pp. 534–575. 27. An introductory reference for both exclusive dealing and tying is Michael D. Whinston, “Exclusivity and Tying in U.S. v. Microsoft: What We Know, and Don’t Know,” Journal of Economic Perspectives 15 (Spring 2001): 63–80. 28. This analysis is from Philippe Aghion and Patrick Bolton, “Contracts as a Barrier to Entry,” American Economic Review 77 (June 1987): 388–401. 29. Eric B. Rasmusen, J. Mark Ramseyer, and John Shepard Wiley Jr., “Naked Exclusion,” American Economic Review 81 (December 1991): 1137–1145; and Ilya Segal and Michael D. Whinston, “Naked Exclusion: Comment,” American Economic Review 90 (March 2000): 296–309. Our description borrows from Whinston, “Exclusivity and Tying.” 30. For an analysis of how buyer coalitions can ensure that exclusionary contracts are welfare-enhancing, see Robert Innes and Richard J. Sexton, “Strategic Buyers and Exclusionary Contracts,” American Economic Review 84 (June 1994): 566– 584. When buyers are themselves firms who compete in selling the good, these conclusions can change, depending on the institutional setting regarding contract breach. See Chiara Fumagalli and Massimo Motta, “Exclusive Dealing and Entry When Buyers Compete,” American Economic Review 96 (June 2006): 785–795; and John L. Simpson and Abraham Wickelgren, “Naked Exclusion, Efficient Breach, and Downstream Competition,” American Economic Review 97 (September 2007): 1305–1320. 31. United States v. Dentsply International Inc. 399 F.3d 181 (3d Cir. 2005). 32. See Fiona M. Scott Morton, “Contracts That Reference Rivals,” Antitrust 27 (Summer 2013): 72–79. 33. Concord Boat Corp. v. Brunwick Corp. 207 F.3d 1039 (8th Cir. 2000). 34. Just as with exclusive dealing, there are also legitimate reasons for share contracts. For details, see Kevin M. Murphy, Edward A. Snyder, and Robert H. Topel, “Competitive Discounts and Antitrust Policy,” in Roger D. Blair and Daniel Sokol, eds., The Oxford Handbook of International Antitrust Economics, vol. 2 (Oxford: Oxford University Press, 2014), pp. 89– 119 35. Eaton was found guilty of violating Section 2 of the Sherman Act; see ZF Meritor LLC v. Eaton Corp., 696 F.3d 254 (3d Cir. 2012). 36. Standard Fashion Company v. Magrane-Houston Company, 258 U.S. 346 (1922). 37. Tampa Electric Company v. Nashville Coal Company, 365 U.S. 320 (1961). 38. Standard Oil Company of California v. United States, 337 U.S. 293 (1949). 39. A reference is the decision by the U.S. Court of Appeals affirming the lower court’s verdict, United States of America v. Visa U.S, Inc., Visa International Corp. and MasterCard International, Inc. 344 F.3d 229 (2d Cir. 2003). 40. Ibid, pp. 12–13. 41. A reference for this case is Joshua S. Gans, “Intel and Blocking Practices,” in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution, 6th ed., pp. 413–434. 42. FTC, In the Matter of Intel Corporation, Decision and Order, Docket No. 9341, October 29, 2010. 43. Henry v. A.B. Dick Company, 224 U.S. 1 (1912). 44. M. L. Burstein, “A Theory of Full-Line Forcing,” Northwestern Law Review 55 (March–April 1960): 62–95. 45. Siegel v. Chicken Delight, Inc., 448 F. 2d43 (9th Cir. 1971). For an analysis of this case, see Benjamin Klein and Lester F. 
Saft, “The Law and Economics of Franchise Tying Contracts,” Journal of Law and Economics 28 (May 1985): 345–361. 46. United States v. Jerrold Electronics Corporation, 187 F. Supp. 545 (1960). 47. See George J. Stigler, The Organization of Industry (Homewood, IL: Richard D. Irwin, 1968), chapter 15, for the original analysis of block booking. 48. This example is based on Walter Y. Oi, “A Disneyland Dilemma: Two-Part Tariffs for a Mickey Mouse Monopoly,” Quarterly Journal of Economics 85 (February 1971): 77–96. 49. The solution is found as follows. To keep both customers in the market, the machine price should equal the consumer surplus (after the price of paper is increased) of customer 1. Hence, express the machine price as the area of the consumer

surplus triangle:

The first and second terms are revenues from machines and paper, respectively, and the last term is the machine costs. The value of p that maximizes profit is $25. Note that the highest profit with only one customer is $9,000, and tying would not be employed. 50. In this case, the tying solution is to charge $3,612.50 for the machine and $15 for paper. The profit is $8,225 compared to only $8,000 when not tying. 51. A survey of these theories can be found in Patrick Rey and Jean Tirole, “A Primer on Foreclosure,” in Mark Armstrong and Robert H. Porter, eds., Handbook of Industrial Organization, vol. 3 (Amsterdam: Elsevier, 2007), pp. 2145–2220. 52. This analysis is based on Michael D. Whinston, “Tying, Foreclosure, and Exclusion,” American Economic Review 80 (September 1990): 837–859. 53. There are other equilibria in which pB1 = cB, pB2 = cB + λd, and pA = v + (1 − λ)d − cB, where 0 ≤ λ ≤ 1, so that firm 1 extracts a bigger share of the surplus from system A/B2. To simplify the analysis, we have set λ = 1. 54. Given the assumption d > e, one can prove that firm 1 does not find it optimal to provide a discount to consumers who buy A and B1. 55. To simplify matters, we assume that a consumer in the systems market cannot buy A/B1 and B2 and then replace B1 with B2. Doing so incurs additional expenditure but would increase the value of the system from v to v + d. Our conclusions are robust to that possibility, but allowing for it would introduce unnecessary complications. 56. In a more general model, tying would influence how many consumers buy by affecting the price of a system and the standalone product. 57. This analysis is due to Dennis W. Carlton and Michael Waldman, “The Strategic Use of Tying to Preserve and Create Market Power in Evolving Industries,” RAND Journal of Economics 33 (Summer 2002): 194–220. For another anticompetitive rationale for tying, see Jay Pil Choi and Christodoulous Stefanadis, “Tying, Investment, and the Dynamic Leverage Theory,” RAND Journal of Economics 32 (Summer 2001): 52–71. 58. The equilibrium analyzed for the preceding model was for the case of λ = 1. 59. United States v. Loew’s, Inc. 371 U.S. 38 (1962). 60. International Salt Co., Inc. v. United States, 332 U.S. 392 (1947). 61. Northern Pacific Railway Company et al. v. United States, 356 U.S. 1 (1958). 62. Jefferson Parish Hospital Dist. No. 2 v. Hyde, 466 U.S. 1 (1984). 63. W. J. Lynk, “Tying and Exclusive Dealing: Jefferson Parish Hospital v. Hyde,” in J. E. Kwoka Jr. and L. J. White, eds., The Antitrust Revolution, 2nd ed. (New York: HarperCollins, 1994). 64. Eastman Kodak v. Image Technical Services, Inc., 112 S.Ct. 2072 (1992). 65. Illinois Tool Works, Inc. v. Independent Ink, Inc., 547 U.S. 28 (2006). 66. The “free-riding” argument was originally made by Lester G. Telser, “Why Should Manufacturers Want Fair Trade?” Journal of Law and Economics 3 (October 1960): 86–105. For an analysis of cases that do not seem to fit the free-rider theory, see Howard P. Marvel and Stephen McCafferty, “Resale Price Maintenance and Quality Certification,” Rand Journal of Economics 15 (Autumn 1984): 346–359. In particular, they argue that, in many cases, dealers do not provide tangible presale services. Instead, their idea is that certain high-quality retailers—Macy’s, Neiman Marcus, etc.—“serve as the consumer’s agent in ascertaining the quality or stylishness of commodities.” The retailers who invest in “certifying” the quality of the goods are then subject to free riding by other retailers. 67. 
This argument is from Bruno Jullien and Patrick Rey, “Resale Price Maintenance and Collusion,” RAND Journal of Economics 38 (Winter 2007): 983–1001. 68. This discussion is based on John Asker and Heski Bar-Issac, “Raising Retailer’s Profits: On Vertical Practices and the Exclusion of Rivals,” American Economic Review 104 (February 2014): 672–686. The argument applies to a range of vertical restraints and not just minimum RPM. 69. Dr. Miles Medical Company v. John D. Park & Sons, 220 U.S. 373 (1911).

70. Monsanto Corporation v. Spray-Rite Service Corp., 465 U.S. 752 (1984); and Business Electronics Corp. v. Sharp Electronics Corp., 485 U.S. 717 (1988). 71. Ibid. 72. Leegin Creative Leather Products, Inc. v. PSKS, Inc., 551 U.S. 877 (2007). All ensuing quotations are from that decision. 73. The ensuing discussion is based on “Action Needed to Address Resale Price Maintenance in Contact Lenses—and Countless Other Markets,” American Antitrust Institute, October 24, 2014. This document was a letter to the DOJ and FTC encouraging them to prosecute. 74. Albrecht v. Herald Co., 390 U.S. 145 (1968) 75. State Oil Co. v. Khan, 522 U.S. 3 (1997). 76. See Roger Blair, James Fesmire, and Richard Romano, “Applying the Rule of Reason to Maximum Resale Price Fixing: Albrecht Overruled,” in Michael R. Baye, ed., Industrial Organization: Advances in Applied Microeconomics, vol. 9 (New York: JAI Press, 2000), pp. 215–230. 77. A discussion of the economic and legal issues surrounding maximum RPM is provided in Guastavo E. Bamberger, “Revisiting Maximum Resale Price Maintenance: State Oil v. Khan (1997),” in John E. Kwoka Jr., and Lawrence J. White, eds., The Antitrust Revolution: Economics, Competition, and Policy, 4th ed. (New York: Oxford University Press, 2004), pp. 334–349. 78. There is also an argument that reduced competition among sellers of the same brand can result in lessened competition among firms offering different brands; see Patrick Rey and Joseph Stiglitz, “The Role of Exclusive Territories in Producers’ Competition,” RAND Journal of Economics 26 (Autumn 1995): 431–451. 79. For a model allowing manufacturers to choose between minimum RPM and territorial restrictions (as well as some other forms of vertical restraints), see G. Frank Mathewson and Ralph A. Winter, “An Economic Theory of Vertical Restraints,” Rand Journal of Economics 15 (Spring 1984): 27–38. This theory provides a formal argument as to why these two practices should be treated comparably in terms of antitrust practice. 80. Continental T.V., Inc., et al. v. GTE Sylvania, Inc., 433 U.S. 36 (1977). See also Lee E. Preston, “Territorial Restraints: GTE Sylvania,” in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution (Glenview, IL: Scott, Foresman, 1989), pp. 273–289. 81. Continental T.V., Inc., et al. v. GTE Sylvania, Inc., 433 U.S. 36 (1977).

8 Monopolization and Price Discrimination

A major policy concern in the United States has long been the so-called dominant firm. Although few true monopolies exist in real-world markets, there are many industries inhabited by a single, dominant firm. Examples throughout history include Intel, Eastman Kodak, Boeing, Xerox, and Google. In years past, many such firms have been involved in monopolization antitrust cases—for instance, Standard Oil, United States Steel, Alcoa, IBM, and Microsoft. At issue is whether they pursued anticompetitive practices in either sustaining or extending their market power. On the issue of monopolization, recall the wording of Section 2 of the Sherman Act: Every person who shall monopolize, or attempt to monopolize, or combine or conspire with any other person or persons, to monopolize any part of the trade or commerce among the several states, or with foreign nations, shall be deemed guilty of a felony.

It is important to note that the law forbids the act of monopolizing and not monopoly itself. Quoting from the earlier Alcoa case,1 Judge Irving Kaufman explained the reason for this distinction in a 1979 case in which Eastman Kodak was charged with monopolization: One must comprehend the fundamental tension—one might almost say the paradox—that is near the heart of Section 2. “The successful competitor, having been urged to compete, must not be turned upon when he wins.”2

The challenge is to distinguish dominant firm situations built up and maintained by superior efficiency from those achieved and maintained by anticompetitive tactics. After reviewing some general issues related to proving a claim of monopolization, an overview is provided on case law, with detailed discussions of the Alcoa and United Shoe Machinery cases. We then move on to some specific forms of monopolization. Predatory pricing occurs when a dominant firm prices so as to induce exit or deter entry. After developing the theory of predatory pricing, antitrust policy is reviewed with particular attention to the standards provided by the Brooke decision. A second monopolization practice is refusal to deal. As its name suggests, refusal to deal is when a supplier denies supply of its product to another firm. An example is a photocopy manufacturer not selling parts to an independent service provider, thereby preventing the latter from servicing those photocopiers the manufacturer has sold. We provide an in-depth analysis of the Kodak case, with attention to its implications for antitrust policy in a wide class of markets. Unrelated to the issue of monopolization, the final section of the chapter discusses price discrimination and the Robinson-Patman Act.

Establishing Monopolization Claims

As we will see when discussing some important monopolization cases, the bases for dominant firm status vary considerably. The early cases were concerned with monopolies achieved through mergers. Standard Oil was alleged to have become dominant by both mergers and predatory pricing tactics. Alcoa began its domination of aluminum through patents and economies of scale. More recently, Microsoft achieved its dominance through network externalities that arise when the value of a good to a consumer is increasing in the number of consumers who use that good. Just as scale economies allow a larger firm to produce the same product at lower cost, network externalities allow a firm to deliver a better product at the same cost. Monopolization cases are generally evaluated under the rule of reason, for which there are two parts to establishing guilt: inherent effect and intent. The currently accepted Supreme Court statement on monopolization was given in the 1966 Grinnell case: The offense of monopoly under Section 2 of the Sherman Act has two elements: (1) the possession of monopoly power in the relevant market, and (2) the willful acquisition or maintenance of that power as distinguished from growth or development as a consequence of a superior product, business acumen, or historic accident.3

In this section, various methods for determining whether a firm is dominant are described. We also provide some preliminary remarks on assessing whether a dominant firm’s behavior is intended to monopolize, although how this is actually done varies considerably from case to case.

Measuring Monopoly Power

The usual graphical depiction of a monopoly is shown in figure 8.1. The monopolist is the sole supplier of output Q, and therefore chooses the profit-maximizing output Q* (where marginal revenue MR equals marginal cost MC). A simple definition of monopoly (or market) power is the ability to set price above marginal cost. Hence, if we divide the price–marginal cost difference by price, we have a well-known index of the amount of monopoly power. It is known as the Lerner index.4 This index, L, is defined as

L = (P − MC)/P = 1/η,

where P is price, and MC is marginal cost. It is important to note that this relationship between the price–cost margin and the price elasticity of demand holds when the firm’s quantity maximizes its profit.

Figure 8.1 Monopoly Equilibrium

Under the assumption of profit maximization, the Lerner index equals the reciprocal of the absolute value of the price elasticity of demand, which is denoted η.5 For example, if the firm’s price is double the marginal cost and L = 0.5, we can infer that the elasticity of demand is 2. Furthermore, it follows that very large elasticities imply very little monopoly power. Recall that competitive firms face infinitely large elasticities and therefore have Lerner indices of zero. The short-run monopoly power measured by L should not be relevant in antitrust unless it is “large” and is expected to persist for a reasonably long time.6 Hence, barriers, or obstacles, to entry should exist for there to be a serious antitrust concern. One useful lesson from this short review of the economic definition of monopoly power is that it is not an “either-or” concept. It is a matter of degree. Both short-run and long-run monopoly power are logically continuous variables, in the sense that they can take on a whole range of values. The questions about monopoly power that usually interest economists involve its sources and importance, rather than its existence. Courts, on the other hand, often seem to treat the existence and importance of monopoly power as though they were equivalent.7
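
To make the arithmetic concrete, the short sketch below computes the Lerner index from a price and a marginal cost and, assuming the firm is maximizing profit, backs out the implied elasticity of demand. The numbers are purely illustrative.

```python
def lerner_index(price, marginal_cost):
    """Lerner index: the markup of price over marginal cost, as a share of price."""
    return (price - marginal_cost) / price

def implied_elasticity(lerner):
    """Under profit maximization, L = 1/eta, so the implied elasticity is 1/L."""
    return 1 / lerner

# Illustrative numbers: price is double marginal cost.
P, MC = 50.0, 25.0
L = lerner_index(P, MC)        # 0.5
eta = implied_elasticity(L)    # 2.0
print(f"Lerner index: {L:.2f}, implied elasticity of demand: {eta:.1f}")

# A competitive firm prices at marginal cost, so L = 0 and measured monopoly power vanishes.
print(f"Competitive benchmark: {lerner_index(30.0, 30.0):.2f}")
```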

In the real world, we do not see monopolies as portrayed in figure 8.1. Rather, we see firms with a large share of a market that also comprises other firms offering products that are imperfect substitutes. In fact, disputes over where to draw the boundaries when defining a market occupy much time and effort in monopolization cases. In chapter 6 the definition of the relevant market in horizontal merger cases was examined in depth. The concept explained there was that a market should include all firms and products that a hypothetical cartel would need to control in order to profitably raise the existing price in a permanent way. Indeed, the same general principle is applicable in monopoly situations. A good example is a famous market definition

problem involving flexible wrapping materials, to which we now turn. In the 1956 Cellophane case, the issue turned on whether the relevant market was cellophane only, or whether it was much broader, including other types of flexible wrapping materials, such as wax paper, greaseproof paper, glassine, and foil.8 If cellophane alone were the market, duPont would have had a 75 percent market share, and the Court would have found monopoly power. Fortunately for duPont, the Court held that the relevant market was the broader flexible wrapping materials market, for which duPont’s share was only 18 percent. An important point can be illustrated by an economic error made by the Court in the cellophane case. A high degree of substitution by consumers between two products must exist at competitive prices for the two products to be placed in the same market. Cellophane was indeed a close substitute for other wrapping materials at the going high price for cellophane. However, cellophane’s price contained a monopolistic margin over its marginal cost.9 A rational monopolist would, in fact, raise price until its product became a good substitute for alternatives. Hence, substitutes in consumption should be evaluated at prices that are reasonably close to marginal costs. Summing up, one should begin with the product at issue—cellophane—and ask what other products, if any, duPont would need to control in order to charge a price (say, 5 percent) above its marginal cost. The evidence that we have cited is that no other products were necessary. Hence the market should have been defined as cellophane only. In addition to the demand-side substitution emphasized in the previous case, supply-side substitution is equally important in defining the relevant market. Three types of substitution are possible: (1) competitors currently producing the product may have the ability to increase output from existing facilities; (2) producers of products not considered substitutes in consumption may be able to easily convert to production of relevant products (for example, consumers cannot substitute between residential buildings and commercial buildings, but firms constructing commercial buildings could easily shift to home building); and (3) new competitors may enter the market. Franklin Fisher has argued that the tendency of antitrust cases to be focused on determining the relevant market is often seriously misleading. In his view, “monopoly power is present when a firm is sufficiently insulated from competitive pressures to be able to raise prices … without concern for its competitors’ actions because its rivals cannot offer customers reasonable alternatives.”10 His complaint is that the courts often focus on defining the markets—which competitors to include and which to exclude—and then on whether the firm under investigation has a high market share. But this transforms a continuous question—To what degree are two products close substitutes?—into one of inclusion and exclusion from the analysis. The attention paid to market share as the determinant of monopoly power is due in part to the famous opinion of Judge Hand in the 1945 Alcoa case,11 which we discuss later in the chapter.

Assessing Intent to Monopolize

Given the existence of monopoly power, the second part of the rule of reason test is to determine whether the monopoly was acquired and/or maintained by practices that cannot qualify as superior efficiency or historic accident.
That is, a monopoly over widgets because of superior efficiency in producing widgets is not in violation of the Sherman Act. A widget producer who used predatory pricing to bankrupt all of its rivals in sustaining or acquiring a monopoly would be in violation. Distinguishing predatory pricing from vigorous price competition is generally difficult. For example, IBM developed a computer system in the 1960s that consisted of the central processing unit, input-output devices, memory devices, and other components. A number of companies found it profitable to begin selling memory devices that could be “plugged” into the IBM system. When IBM recognized that it was losing sales to these memory suppliers, it responded by vigorously cutting its own prices. These smaller companies suffered losses, and some were forced out of the market. Was IBM maintaining monopoly power by predatory pricing, or was it simply being forced by competition to lower its prices (to the benefit of its customers)?

Development of Antitrust Case Law

Roughly speaking, there have been three distinct eras of Section 2 interpretation.12 The first period, 1890–1940, was one in which the courts required, in addition to a large market share, evidence of abusive or predatory acts to show intent. In the second period, 1945–1970, the courts did not require evidence of abusive acts to infer intent; it was sufficient “to achieve a monopoly by maneuvers which, though ‘honestly industrial,’ were not economically inevitable.”13 The third period, 1970 to the present, is characterized by the willingness of the courts to allow more aggressive practices by dominant firms without inferring intent to monopolize.

1890–1940: Standard Oil and United States Steel

In 1911 the Supreme Court handed down two significant decisions. In the first, the Standard Oil Company, organized by the Rockefeller brothers, was found guilty of monopolization and dissolved into thirty-three geographically separated companies.14 Two weeks later, James B. Duke’s Tobacco Trust was also found guilty of monopolization and was divided into sixteen successor companies.15 Although both monopolies were accused of numerous predatory and abusive tactics, we will focus on Standard Oil, as it has become the poster child for predatory pricing. The Rockefellers built Standard Oil by acquiring more than 120 rivals. They also were accused of engaging in predatory pricing to drive competitors out of business, of buying up pipelines to foreclose crude oil supplies to rivals, of securing discriminatory rail freight rates, and of conducting business espionage. Standard Oil obtained a 90 percent share of the refining and sale of petroleum products. The Supreme Court stated in its opinion that the crime of monopolization required two elements. First, the firm must have acquired a monopoly position. The 90 percent market share met this requirement. Second, there must be evidence of intent to acquire the monopoly position. The Court found that intent could be inferred from the predatory tactics described. Turning to United States Steel, the government charged it with monopolization in 1911.16 The company was formed in 1901 through mergers that gave United States Steel control of more than 65 percent of the domestic iron and steel business. In 1907 the chairman of United States Steel, Judge E. H. Gary, began a series of dinner meetings with the heads of rival firms. These so-called Gary Dinners were designed to help stabilize pricing and create goodwill among the industry leaders. Apparently they accomplished this mission, because during the trial, no competitors had any harsh words for United States Steel’s conduct. In contrast to the predatory and abusive tactics at issue in the Standard Oil and American Tobacco cases, United States Steel was viewed as a “good citizen” by its rivals. One result of its price leadership was a gradual loss of market share that reached 52 percent by 1915.
United States Steel seemed to hold a price “umbrella” up for its smaller rivals, allowing these rivals to offer lower prices, thereby increasing their share of the business.

The Supreme Court decided in its 1920 opinion that United States Steel was not guilty. The Court concluded that even if the company did have monopoly power, it had not inappropriately exercised that power: “the law does not make mere size an offense or the existence of unexerted power an offense.” Hence the law seemed to be clear in this period: dominant firms would violate the Sherman Act’s Section 2 only if they engaged in predatory or aggressive acts toward rivals.

1945–1970: Alcoa and United Shoe Machinery

Twenty-five years after the United States Steel decision, another significant monopolization case was decided.17 The Aluminum Company of America (Alcoa) was the sole American producer of primary aluminum until 1940. In 1945, Circuit Judge Learned Hand set forth the opinion that Alcoa was guilty of illegal monopolization, even though it had engaged in none of the aggressive and predatory tactics that characterized earlier convictions. Although the decision was not a Supreme Court ruling, it had the effect of one, because the Circuit Court of Appeals was empowered by Congress to act as the court of last resort.18 Alcoa was the holder of the patents of Charles Hall, who in 1886 invented a commercially feasible electrolytic process for converting alumina (concentrated aluminum oxide) into aluminum. Thanks to the Hall patent and other patents, Alcoa had protection from competition until 1909. After 1909, Alcoa was protected from foreign competition by high tariffs on imported aluminum. In addition, Alcoa protected its monopoly by buying up many of the low-cost deposits of bauxite (the ore containing aluminum) and cheap electric power sites (aluminum production is electricity intensive). Many economists also believe that Alcoa made entry less attractive by limit pricing (which was covered in chapter 5), unlike United States Steel’s pricing strategy. Alcoa’s price and output performance in its years as a monopoly was impressive. Aluminum prices generally declined over this period and output grew rapidly. One Alcoa official explained that their profits “were consistently held down in order to broaden markets. Aluminum has no markets that have not been wrested from some other metal or material.”19 The aluminum industry can be divided into four vertical stages, as shown in figure 8.2. Each stage involves a distinct technology and can be located in different regions. For example, bauxite was largely mined in the Caribbean area but processed into alumina near the Gulf Coast ports. Large-scale economies exist in the alumina plants (stage 2 in figure 8.2)—in fact, until 1938 there was only one alumina plant in the United States. Because the reduction of alumina into aluminum ingots requires large amounts of electricity, these plants are located near cheap hydroelectric power sites (in the Northwest and in the Tennessee Valley). The first three stages in figure 8.2 constitute the production of primary aluminum. The fourth stage, fabrication, is technically similar to fabrication of other metals. Hence there are independent fabricators who buy aluminum ingot from primary producers and compete with the fabricated output of the primary producers. Again, until the 1940s Alcoa was the only primary producer in the United States.

Figure 8.2 Vertically Related Stages of Aluminum Industry

It is important to understand the vertical structure of the industry in order to evaluate the market definition selected by Judge Hand. Also, one further technical fact is needed. Because aluminum is a durable good, scrap aluminum can be reprocessed and used by fabricators as a substitute for primary aluminum. This so-called secondary aluminum output was approximately one-quarter of the primary aluminum output in the 1940s. Judge Hand considered three market share definitions for Alcoa:

In the first definition, Alcoa’s consumption of its own primary aluminum production for fabrication purposes is excluded. By adding it back, the second definition is attained, leading to an increase in market share from 33 percent to 64 percent. The third definition yields a share of 90 percent by excluding secondary aluminum from the denominator. Judge Hand stated that 90 percent “is enough to constitute a monopoly; it is doubtful whether sixty or sixty-four percent would be enough; and certainly thirty-three percent is not.” He argued that 90 percent is the correct share for the following reasons. First, “All ingot—with trifling exceptions—is used to fabricate intermediate, or end, products; and therefore all intermediate, or end, products which Alcoa fabricates and sells, pro tanto reduce the demand for ingot itself.” Hence, “Internal use” should appear in the numerator. Second, Alcoa in the past had control of the primary aluminum that reappears as secondary aluminum in the present. Hence, Alcoa “always knew that the future supply of ingot would be made up in part of what it produced at the time, and if it was as far-sighted as it proclaims itself, that consideration must have had its share in determining how much to produce.” For this reason, Judge Hand excluded secondary from the denominator and concluded Alcoa’s share was 90 percent. However, according to Judge Hand, “it does not follow that because Alcoa had such a monopoly, that it had ‘monopolized’ the ingot market: it may not have achieved monopoly; monopoly may have been thrust upon it.” And, “a single producer may be the survivor out of a group of active competitors, merely by virtue of his superior skill, foresight and industry.” Judge Hand ruled out these possibilities in the Alcoa case. As he put it, The only question is whether it falls within the exception established in favor of those who do not seek, but cannot avoid, the control of a market. It seems to us that that question scarcely survives its statement. It was not inevitable that it should always anticipate increases in the demand for ingot and be prepared to supply them. Nothing compelled it to keep doubling and redoubling its capacity before others entered the field.
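
The arithmetic behind these three definitions can be made concrete with a small calculation. The quantities below are purely illustrative—they are not the figures from the case—and are chosen only to show how adding internal use to the numerator and excluding secondary aluminum from the denominator push the measured share upward.

```python
# Illustrative (hypothetical) quantities, in arbitrary units.
alcoa_production = 100.0     # Alcoa's primary (virgin) ingot production
alcoa_internal_use = 48.0    # portion Alcoa fabricated itself rather than sold as ingot
alcoa_ingot_sales = alcoa_production - alcoa_internal_use
imports = 11.0               # primary ingot from foreign producers
secondary = 25.0             # reprocessed scrap ("secondary") aluminum

# Definition 1: exclude Alcoa's internal use from the numerator;
# count imports and secondary aluminum in the denominator.
share_1 = alcoa_ingot_sales / (alcoa_production + imports + secondary)

# Definition 2: add internal use back into the numerator.
share_2 = alcoa_production / (alcoa_production + imports + secondary)

# Definition 3: also exclude secondary aluminum from the denominator,
# as Judge Hand did.
share_3 = alcoa_production / (alcoa_production + imports)

for label, s in [("Def. 1", share_1), ("Def. 2", share_2), ("Def. 3", share_3)]:
    print(f"{label}: {s:.0%}")
```

With these made-up quantities the three definitions yield roughly 38, 74, and 90 percent; the point is the ordering, not the exact numbers, which in the case itself were 33, 64, and 90 percent.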

The Alcoa decision was a major change in the legal definition of monopolization, because it meant that predatory and aggressive acts were no longer necessary. Simply building capacity ahead of demand could be sufficient to indicate intent to monopolize by a dominant firm.20 The remedy in the Alcoa case was to create two new competitors (Reynolds Metals and Kaiser Aluminum) by the sale of government-owned plants. The plants were built at government expense during World War II for military purposes. Divestiture of Alcoa was not a feasible alternative in any case, since Alcoa had only two alumina plants, and one was almost obsolete. The 1953 United Shoe Machinery case21 was important in providing another example of a business practice that could indicate illegal monopolization by a dominant firm. The leasing practices of United Shoe were found to be exclusionary and therefore evidence of illegal monopolization. United Shoe supplied between 75 and 85 percent of the shoe machinery in the United States, and there was no question of its dominance. United would not sell its machines to shoe manufacturers, but leased them for ten-year terms. There was a requirement that lessees had to use United machines if work was available. Also, the practice of United was to provide free repair services, and the Court viewed this as restricting entry, since rivals of United would have to offer repair services as well. That is, independent repair firms would not exist, given United’s zero charges. Having to offer repair services in addition to shoe machinery would raise a capital requirement barrier. The remedy in the United Shoe case was to purge the leases of their restrictive practices. Though divestiture into three separate manufacturing plants was proposed by the government, the Court held this to be infeasible, because all of United’s manufacturing was conducted in a single plant in Massachusetts.

1970 to Present: Kodak, IBM, Microsoft, and Others

Recent decades have witnessed big monopolization cases brought by antitrust agencies, including those against IBM, Xerox, AT&T, Microsoft, Intel, Kodak, and Kellogg (along with two other breakfast cereal manufacturers).22 There were also some private suits that have received much attention, including cases by smaller computer firms against IBM, and Berkey Photo against Eastman Kodak. The Berkey–Kodak case involved a photofinisher (Berkey) putting forth a claim of monopolization against Eastman Kodak—with its 60–90 percent shares of most segments of the photography industry.23 Berkey claimed that when Kodak introduced its 110 Pocket Instamatic photographic system in 1972, which required a new Kodacolor II film, rival film and photofinishing suppliers were foreclosed from that market. According to Berkey, Kodak should have predisclosed its innovation to rivals so that they could compete immediately on introduction. Judge Kaufman concluded that Kodak “did not have a duty to predisclose information about the 110 system.” He went on to explain his reasoning: It is the possibility of success in the marketplace, attributable to superior performance, that provides the incentives on which the proper functioning of our competitive economy rests. If a firm that has engaged in the risks and expenses of research and development were required in all circumstances to share with its rivals the benefits of those endeavors, this incentive would very likely be vitiated.

Some of the other “big cases” mentioned at the beginning of this section were terminated by consent decrees (Xerox, AT&T, and Intel). A consent decree is a negotiated settlement between the two parties that is subject to court approval. In the Xerox settlement in 1975 it was agreed that Xerox would license patents, supply “know-how” to competitors, sell as well as lease copy machines, and alter its pricing policies. In the AT&T settlement in 1982, AT&T divested its telephone-operating companies. Among other things, this separated the regulated telephone utilities from their main unregulated equipment supplier, Western Electric. The Department of Justice (DOJ) filed its case against IBM in 1969, following a lengthy internal investigation. The case, dismissed in 1982, involved huge resource costs—more than $200 million in legal costs, 950 witnesses, 726 trial days, 17,000 exhibits, and 104,400 trial transcript pages.24 The government argued that IBM had about 70 percent of the market for medium-size, business-oriented, “general purpose,” electronic digital computer systems. Not unexpectedly, IBM argued that the market was much broader and should include companies selling “parts of systems” and computer systems for scientific and military purposes. IBM’s share under the broader definition was less than 50 percent. IBM’s dominant position in the computer industry at the time is usually attributed to its strong position before computers existed in the punched-card office equipment business. This position gave IBM strong ties to potential computer users. IBM delivered its first computer, the IBM 650, in 1954. While not being the technical leader in computers, IBM supplied both hardware and software that performed well. According to the government, IBM engaged in numerous practices that enabled it to maintain its monopoly power. These practices were argued to be neither “inevitable nor honestly industrial.” They were leasing, bundling, differentiation of software, fighting machines (which are products intended to harm competitors), tying of products, manipulation of purchase-to-lease price ratios, and education allowances. In the 1990s, major monopolization cases were brought against Kodak (before digital photography wiped out its market) and Microsoft in operating systems and applications. The Kodak case is discussed later in this chapter, and the Microsoft case will be covered in chapter 9.

Predatory Pricing

In 1994, Frontier Airlines entered the Billings–Denver route with a fare of around $100, about half of the fare offered by the incumbent firm, United Airlines. United responded with a comparable fare. After about a year, Frontier withdrew from the route, in response to which United’s fare rose precipitously. Frontier Airlines then lodged a complaint with the DOJ that United had acted in an anticompetitive manner by engaging in predatory pricing.25 What exactly is anticompetitive about what just happened? Might one say that this is just “normal” competition at work? To get a better handle on these questions, let us examine matters a bit more closely. Start with a monopolist for which the firm (and market) demand curve is D(p) (see figure 8.3a). Initially, it is pricing at the profit-maximizing level of pm, where marginal revenue, denoted MR, is equated with its marginal cost, denoted cI. Now suppose entry occurs by firm E, which has marginal cost cE. The entrant prices at p̂E, so that the incumbent firm’s demand is now DI(pI; p̂E), which is less than its demand before entry. The competitor takes part of the incumbent firm’s demand and, in fact, takes more when it prices lower (this is why the incumbent firm’s demand is shown to depend on the entrant’s price). Given that demand curve, the new marginal revenue curve is MR′, for which p̂I is the profit-maximizing price. An analogous situation prevails for the entrant and is depicted in figure 8.3(b). Note that p̂E is optimal for the entrant, given that it faces demand curve DE(pE; p̂I). These prices form an equilibrium in that each firm’s price maximizes its profit given its rival’s price. As we would expect, entry causes the incumbent firm’s price to fall—from pm to p̂I—which is part of normal profit maximization and competition. The entrant earns positive profit, measured by the shaded rectangle in figure 8.3b.

Figure 8.3 (a) Pre-entry and (b) Postentry Situations with Normal Competition

This outcome describes the type of competition we would like to see. Now consider the incumbent firm pursuing a more aggressive pricing policy with the explicit intent of reducing the entrant’s profit (as opposed to raising the incumbent firm’s profit). This situation is depicted in figure 8.4. The incumbent firm now prices at p̃I, which is below p̂I. Relative to when its competitor’s price is p̂I, the entrant’s demand curve is shifted in to DE(pE; p̃I), and the price at which it maximizes profit, p̃E, is lower than p̂E. It now incurs losses, as price is less than average cost. Further note that the price that maximizes the incumbent firm’s current profit, shown in figure 8.4(a), exceeds p̃I, the price it actually sets. This scenario may describe an episode of predation. The incumbent firm prices below that which maximizes current profit. The potential benefit is in terms of higher future profit by inducing exit by the new rival or deterring future entry in other markets.

Figure 8.4 Postentry Situation with Possible Predation

In sum, when a firm enters a market, it does so at a price below the preexisting price so as to lure business away from existing suppliers. In response, we expect incumbent firms to also lower their prices. This behavior is deemed to be predatory pricing only when the intent is exclusionary; that is, it is profitable only because it will result in a higher future price in this or some other markets by virtue of inducing exit or deterring entry in those other markets. Predatory pricing has also been defined as a price designed to induce an equal or more efficient firm to exit. However, this definition is problematic. As we will see, a price can be exclusionary even when it does not induce exit. By pricing aggressively and making entry sufficiently unprofitable ex post, a firm may be able to discourage future entry. (The reason that a firm may not exit but may regret having entered is that there are sunk or unrecoverable costs associated with entry.) Furthermore, entry by even a less efficient firm can be welfare-improving. Though it is producing at higher cost, this may be compensated by the lower price it induces through intensified competition. So it is not clear that we would want to allow strategic behavior that drives out a mildly less efficient competitor. The relevance of predatory pricing does not rest on whether an incumbent firm is able to impose losses on a rival. The real issues are instead the following. First, when is it that an incumbent firm’s current price can influence entry and exit? Here it is important to recognize that entry and exit are based on future anticipated profit and not directly on what a firm is currently earning. Even if a firm is incurring losses, it can borrow against future earnings if the capital market thinks it will have future earnings. Thus, the issue comes down to the beliefs of firms and the capital market as to the future profit stream. When can an incumbent firm’s pricing behavior substantively influence those beliefs? Second, presuming that predatory pricing can “work,” when is it optimal for an incumbent firm to use it? When does predation generate a higher profit
stream than accommodating rivals or acquiring them? At the heart of the issue of the optimality of predatory pricing is a simple intertemporal tradeoff. A necessary condition for predation is that a firm is pricing differently from that which maximizes current profit and for the purpose of reducing future competition in order to raise future profit. This tradeoff is depicted in figure 8.5. In response to entry, an incumbent firm can accommodate by pricing in a manner that maximizes current profit and accepting the existence of a competitor. Alternatively, it can price aggressively —which reduces profits in the near term—and be rewarded with higher future profits on exit of the competitor. Predation is preferred over accommodation when the value of the near-term forgone profit is smaller than the (discounted) value of the long-term profit gain.
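
A minimal sketch of this tradeoff, with made-up numbers, is below: the incumbent compares the discounted profit stream from accommodating the entrant with the stream from preying for a few periods and then enjoying higher profits once the rival has exited. All magnitudes are hypothetical.

```python
def present_value(profit_stream, discount_rate):
    """Discounted value of a list of per-period profits (period 0 undiscounted)."""
    return sum(pi / (1 + discount_rate) ** t for t, pi in enumerate(profit_stream))

r = 0.10        # per-period discount rate (assumed)
horizon = 10    # number of periods considered (assumed)

# Accommodation: share the market every period (illustrative duopoly profit of 40 per period).
accommodate = [40] * horizon

# Predation: losses of 20 for three periods, then near-monopoly profits of 90
# once the rival has exited.
prey = [-20] * 3 + [90] * (horizon - 3)

pv_accommodate = present_value(accommodate, r)
pv_prey = present_value(prey, r)
print(f"PV of accommodating: {pv_accommodate:.1f}")
print(f"PV of preying:       {pv_prey:.1f}")
print("Predation preferred" if pv_prey > pv_accommodate else "Accommodation preferred")
```

Raising the discount rate, shortening the recoupment horizon, or deepening the near-term losses tips the comparison back toward accommodation, which is exactly the intertemporal tradeoff described above.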

Figure 8.5 Profit Pattern under Predatory Pricing

Associated with any predatory pricing scheme is a demonstrative market and a recoupment market. The demonstrative market is the one in which aggressive pricing takes place. That is where an incumbent firm incurs a cost through forgone profit. The return to that strategy is reaped in the recoupment market. As depicted in figure 8.5, the demonstrative market is the incumbent firm’s market in the periods prior to exit, and the recoupment market is that same market after exit. Alternatively, one could view this situation from a multimarket perspective. A retail chain can price predatorily in Cleveland—the demonstrative market—with the anticipation of such behavior deterring entry in Cincinnati, Detroit, and Indianapolis—the recoupment markets. Before moving on, the reader should be warned that predatory pricing is the “black hole” of antitrust. As with black holes, one cannot directly observe predatory pricing: What defines it is the intent to exclude, and intent is at best inferred. Some economists believe predatory pricing is exceedingly rare if not nonexistent, and for that reason it has also been referred to as the Loch Ness monster of antitrust because of “occasional sightings that on further investigation turn out to be something else.”26 At the same time, others believe there are well-documented episodes.27 The debate over the importance of predatory pricing has raged for more than a century and does not appear to be abating.

Theories of Predatory Pricing

The theory of predatory pricing has been slow in developing. Though predatory pricing has been an antitrust issue since the turn of the twentieth century, it came under careful theoretical scrutiny only in the late 1950s with a reexamination of the Standard Oil case. That study came to the conclusion that Standard Oil had not priced predatorily after all, on the grounds that such a strategy was unprofitable. This famous study led to rising skepticism as to the relevance of predatory pricing as a phenomenon. More recently, game theory was used to show that predatory pricing can be quite rational. These theories are reviewed here after we discuss the Standard Oil study that launched the theoretical examination of predatory pricing. The section concludes with a discussion of various pricing theories that produce paths that look predatory but actually are not. When considering any candidate predatory price path, it is important to consider alternative efficiency rationales for low pricing.

Standard Oil and the Claimed Irrationality of Predatory Pricing

In reexamining the trial record of the 1911 Standard Oil case, John McGee came to the conclusion that predatory pricing had not occurred.28 In substantiating this claim, he provided some challenges that any theory of predation must overcome. In the Standard Oil case, as well as in most litigated cases of predatory pricing, the predator firm had much greater sales than the prey firm. If firms are pricing below cost, the cost of predation is then actually more severe for the firm inflicting it. If so, the prey should outlast the predator in this price war. But then how is predation effective? One common response is that the predator has “deep pockets” to finance these losses. But Professor McGee’s reply is that if the prey is more efficient, the capital markets can provide the necessary financing, so it can withstand the price war. A theory of predation would have to explain why that does not occur. Let us suppose the predator succeeds in driving out its competitor, after which it raises price to reap the rewards. Why would not another firm enter in response to that high price? Is not then success at best brief? Finally, even if such a pricing strategy did succeed in causing exit, and something prevented other firms from subsequently entering, predation is a very costly way in which to enhance one’s market power. Would it not be more profitable for the predator to buy out the prey? Indeed, this outcome would seem to be a Pareto improvement, as both firms avoid losses from such a course of action. While at present the last criticism can be countered with the retort that the antitrust authorities would not allow a dominant firm to acquire or merge with a competitor, the other criticisms of the predatory story are not so easily dismissed. This will take some care, and to that task we turn next.
Overview of the Modern Theories of Predatory Pricing

The McGee analysis erected a formidable challenge to those who believe predatory pricing is a cogent theory. They had to show that predatory pricing could work, and that it was the most profitable response for an incumbent firm to entry. Few challengers initially came forward, but, beginning in the 1980s, with the aid of game theory, the sword was eventually pulled from the stone. We now know that predatory pricing is a logically valid means toward maintaining dominance in an industry. To what extent it actually occurs remains an open question. There are at least three classes of predatory pricing theories. The theory of financial market predation explains why imperfect capital markets may not provide adequate financing for a more efficient firm to survive predation.29 The argument is as follows. Suppose the capital market is unable to perfectly monitor a firm’s management. There is then an inference problem. When losses are observed, they may be due to factors outside management’s control, such as a weak economy or predation, but they may also be due to incompetent or negligent management (for example, one that makes bad project choices or does not exert much effort). If the capital markets automatically financed any losses, they would provide weak incentives for managers to do their jobs well. So the capital market does not, which creates the opportunity for a predatory pricing strategy to bankrupt the prey. By pricing aggressively and imposing losses, the predator makes investors inclined to limit financing, since investors do not know to what extent the losses are due to the competitor’s prices or to mismanagement. This argument is a direct response to the claim that a more efficient firm will have the necessary financial resources to withstand predatory pricing. As just shown, the firm may not. The second class of theories shows how an incumbent firm can develop a valuable reputation by aggressively responding to entry. Such a reputation can enhance the incumbent firm’s profitability by inducing exit or deterring entry in other markets. The basic mechanism is one of signaling. A relevant trait of the market, such as the incumbent firm’s cost, is known to the incumbent firm but not to competitors. The incumbent firm’s price can then influence the profitability of entry and exit, because it contains information about this unknown trait. An example of this theory will be examined below. These reputational models have also responded to the criticism that acquisition of a competitor is more profitable than predation. Theoretical analysis has shown that aggressive behavior can influence the takeover price, so that predatory pricing can be used in conjunction with acquisition.30 Aggressive pricing can signal that the incumbent firm’s cost is low, and therefore a competitor’s profit in the event that there is no acquisition would be low, which encourages the firm to sell out at a lower price. An examination of American Tobacco found that the price at which it acquired competitors from 1891 to 1906 was indeed significantly reduced because of its reputation for aggressive pricing.31 The third class of theories shows how predation can reduce the information acquired by a new firm, and this can deter expansion; it is referred to as test market predation. For example, in 1971 Procter & Gamble (P&G) was considering selling its Folger brand of coffee in the eastern part of the United States. As was standard practice for P&G, it introduced the coffee in some test markets to gauge demand and, in particular, how it fared in comparison with General Foods’ Maxwell House, which was the leading brand. To be an informative experiment, P&G would need to observe demand at the prices that were expected to prevail in the long run.
General Foods muddied up the experiment by engaging in an intense price war with price set below average variable cost. Though P&G learned about demand at very low prices, it did not learn about demand at the relevant prices. By limiting the amount of information that a test market experiment reveals, predatory pricing can make full-scale entry more risky, which could deter it. In this case, P&G chose to postpone its entry plans. It is worth noting that the Federal Trade Commission (FTC) ultimately decided that P&G did not engage in predatory pricing.32 Both the reputation and test market predation theories show that the cost of successful predation need not be high, contrary to the claim of McGee. In those analyses, the objective is not to bankrupt a new firm but rather to signal that future profit is low, which can serve to induce exit or deter entry in other markets of the incumbent firm.

A Reputational Theory of Predation

One mechanism by which predatory pricing works is by establishing a reputation for aggressive pricing. Suppose, for example, that a competitor is uncertain of a dominant firm’s cost. But it knows that firms with lower cost tend to price lower. Hence, if the dominant firm prices low in response to entry, this may suggest that its cost is low, and thus one can anticipate similarly aggressive behavior in the future. Inducing such beliefs can then prove profitable to the incumbent firm by inducing exit—as a new firm becomes convinced that the incumbent firm will persist with low pricing because it has low cost—or deterring future entry. Of course, its competitors are no fools and will realize the incumbent firm could be trying to induce false beliefs. Regardless, we will show that the incumbent firm can strategically deter entry: By pricing below that which maximizes its current profit, the incumbent firm deters entry that would have occurred had it priced to maximize current profit. Consider an incumbent firm that is currently a monopolist in two markets, A and B, which are identical in the sense that the associated profit functions are the same. A potential entrant is considering entering these two markets (or, alternatively, there are two entrants and each is considering entry into one of them). Entry requires incurring a fixed (and sunk) cost of k. To simplify matters, assume that, in response to entry, a firm can either set a price of L(ow) or M(oderate). The profit to an entrant is denoted πE(p′, p″) when the incumbent firm prices at p′ and the entrant prices at p″. For example, an entrant earns πE(M, L) if it prices at L and the incumbent firm prices at M. Assume that the entrant’s optimal pricing strategy is to match the price of the incumbent firm:
πE(L, L) > πE(L, M) and πE(M, M) > πE(M, L).
In addition, assume that entry is profitable when the incumbent firm does not set a low price:
πE(M, M) − k > 0 > πE(L, L) − k.   (8.1)
The incumbent firm can be of three possible types. It can have low cost (denoted l), moderate cost (denoted m), or high cost (denoted h). Though the incumbent firm knows its cost, the entrant does not know. When it is low cost, a price of L maximizes the profit in that market, while, if it is moderate or high cost, it is optimal for it to price at M. That is, these are the short-run profit-maximizing prices. Let πc (p′, p″) denote the incumbent firm’s profit when its cost is c, the incumbent firm prices at p′, and the entrant prices at p″. We have then assumed:33
πl(L, p″) > πl(M, p″), πm(M, p″) > πm(L, p″), and πh(M, p″) > πh(L, p″) for any entrant price p″ ∈ {L, M}.
Finally, let π̄l, π̄m, and π̄h be the monopoly profit for the incumbent when it is low cost, moderate cost, and high cost, respectively. If information were complete, so that the potential entrant knew the incumbent firm’s cost, entry would occur if the incumbent firm was moderate or high cost but would not occur if it was low cost. We know that pricing low is best for the incumbent firm when it has low cost, and given that the incumbent firm sets a low price, the profit-maximizing response of the new firm is to also price low. This price yields profit of πE(L, L) to the new firm, which, by equation 8.1, is insufficient to cover its cost of entry. Entry (in either market A or B) is unprofitable. By analogous reasoning, one can show that entry is profitable when the incumbent firm is moderate or high cost, as in both cases it prices at M. It follows that, under complete information, the incumbent firm earns, in each market, πl(L, L) when it is low cost and πm(M, M) and πh(M, M) when it is moderate cost and high cost, respectively. Let us return to when there is incomplete information. Suppose that, if entry occurs, it takes place in market A and then, in response to what happens in market A, there is possible entry in market B. Consider the following strategy for the incumbent firm in response to entry in market A: If the incumbent firm has low or moderate cost, it prices at L in market A. If it is high cost, it prices at M in market A. (8.2)

We will later show that this pricing strategy is optimal for the incumbent firm. In the meantime, let us derive optimal entry decisions given the incumbent firm uses this strategy. Suppose all prospective firms know this is the incumbent firm’s pricing strategy (so that, if there is successful predation, it is not due to entrants being misinformed). Then entry into market A is optimal if
(ρl + ρm)πE(L, L) + (1 − ρl − ρm)πE(M, M) − k > 0,   (8.3)
where ρl is the probability the entrant assigns to the incumbent firm being low cost, and ρm is the probability that the incumbent firm is moderate cost.34 For example, if the three cost types are equally likely, then ρl = 1/3 and ρm = 1/3. Given (8.1), (8.3) holds when the probability that the incumbent firm is high cost is sufficiently high. Given that market A has been entered, if there is also entry in market B, the incumbent firm will have no further concerns about future entry. Hence, it chooses its price in market B in order to maximize its profit, which means a low price when it is low cost and a moderate price otherwise. Let us next consider the decision to enter market B, depending on how the incumbent firm priced in response to entry in market A. If the incumbent firm priced at M in market A, then the potential entrant should infer that its cost is high, since, according to strategy 8.2, only a high-cost firm prices at M. Entry into market B will then occur, since πE(M, M) − k > 0. If the incumbent firm instead priced at L, then the potential entrant cannot infer its cost, because both a low-cost and a moderate-cost incumbent firm price aggressively in market A. In that situation, let β denote the probability that the entrant assigns to the incumbent firm having low cost given that it believes the incumbent firm is either low cost or moderate cost.35 Assume that
βπE(L, L) + (1 − β)πE(M, M) − k < 0,   (8.4)
so that, in expectation, entry is unprofitable if the potential entrant believes that the incumbent firm has low or moderate cost.36 Thus, if the incumbent firm prices at L in market A, entry into market B will be deterred. Let us assume that inequality 8.3 is true—so that entry occurs in market A—and consider the optimal pricing response of the incumbent firm in market A. In other words, we want to show that strategy 8.2 is optimal. A price of L is optimal when the incumbent firm is low cost, because it maximizes profit in market A and, in addition, deters entry into market B (as we argued above). A low price is optimal when it is moderate cost only when
πm(L, L) + π̄m > πm(M, M) + πm(M, M).   (8.5)
Compared to pricing at M, a price of L yields a lower profit in market A of πm(L, L) compared to πm(M, M), but it generates a higher profit in market B of π̄m compared to πm(M, M) by deterring entry. Assume inequality 8.5 holds. Finally, pricing at M is optimal for a high-cost incumbent firm (as prescribed in equation 8.2) when
πh(M, M) + πh(M, M) > πh(L, L) + π̄h.   (8.6)
This condition is true when a high-cost firm finds it too costly to price low in response to entry. It prefers to price moderately and allow entry to occur into market B. Assume that inequality 8.6 holds. Both expressions 8.5 and 8.6 can hold when the difference between high cost and moderate cost is sufficiently large, so that it is too costly for a high-cost firm to price aggressively but not too costly for a firm with moderate cost. Let us compare what happens under incomplete information with what happens under complete information (and thus no opportunity exists for predatory pricing). When the incumbent firm has low or high cost, the outcome is the same under both information scenarios. With incomplete information, a low-cost incumbent firm prices aggressively in response to entry in the first market, which serves to deter any further entry. This behavior is the same as under complete information; entry ought not to occur when the incumbent firm is highly efficient. With incomplete information, a high-cost incumbent firm prices moderately in response to entry, which induces yet more entry. This outcome is the same as under complete information, and indeed, entry ought to occur when the incumbent firm is highly inefficient. In sum, there is no anticompetitive behavior when the incumbent firm has either low or high cost. Anticompetitive behavior in the form of predatory pricing does arise, however, when the incumbent firm has moderate cost. Under incomplete information, the incumbent firm prices below that which maximizes the profit it earns in market A. This aggressive response to entry is rewarded with the deterrence of entry into its other market. Thus, compared to when there is complete information, price is lower in market A—which is good—but entry into market B does not occur—which can be bad.37 The demonstrative market is then market A, and recoupment occurs in market B. In sum, by pricing aggressively in response to entry, an incumbent firm with moderate cost is able to signal that it is not high cost. By leaving sufficient uncertainty about whether it is low or moderate cost, it is able to deter any further entry. Predatory pricing can then be anticompetitive and profitable for an incumbent firm.

Efficiency Rationales

If an incumbent firm prices below that which maximizes short-run profit, it does not necessarily mean that its intent is to induce exit or deter entry. There are other ways in which a low price in the short run can benefit a firm in the long run. When evaluating the predatory intent of a price path, it is important to consider legitimate business justifications for such pricing. When a firm gives out free samples, it is clearly pricing below that which maximizes short-run profit; a zero price is certainly below marginal cost! But rarely is such an activity anticompetitive. Promotional pricing is just a way in which to expand future demand by informing consumers of the value of one’s product. For some products, such as aircraft, it is well documented that the marginal cost of production declines with experience. This is attributed to learning-by-doing, whereby through the actual production of a product, the firm learns more effective practices that serve to lower cost. When competing with other firms, it may then behoove a firm to produce at a high rate and price low, even below current marginal cost, because it will gain experience and benefit by virtue of lower future marginal cost.
Network externalities are a third rationale for a low price in the short run. Network externalities are present when the value of a good to a consumer depends on how many other consumers use that good. For example, the value of a word-processing package like Microsoft Word depends on how many other people use Word. The more people who use it, the more people with whom one can exchange documents without compatibility problems. Other common examples are communication networks, like telephone and email. The more people connect to the network, the more valuable it will be to everyone. Now consider a market with network externalities in which firms are competing. An example is the video market in the 1980s with the competing formats of VHS and Beta. The more people who have VHS video cassette recorders, the more movies will be available in the VHS format and thus the more valuable is VHS over Beta. It can then be optimal for a firm to price low when selling VHS machines in order to expand its customer base. It will be rewarded in the future with higher demand (and the ability to charge higher prices), because those additional sales make purchasing a VHS machine more valuable. Like learning-by-doing, network externalities can lead to low short-run prices as firms are engaging in a long-run competitive battle.

Antitrust Policy

The legal definition of predatory pricing is necessarily of great importance. In effect, it will be the rule by which firms must play the “competitive game.” For example, suppose that price cutting is deemed predatory when a price is below average total cost. A dominant firm, faced with entry by an aggressive new rival, must then be careful not to price below its average total cost when responding to that new rival. The result may be that healthy price competition is stifled. Of course, errors can be made in the other direction as well. If the test for predatory pricing is made too permissive, monopoly power might be fostered through predatory pricing. The appropriate definition depends on whether one is more concerned with false positives—mistaking competition for predation (known as type I error in statistics)—or with false negatives—mistaking predation for competition (type II error). It is important to bear in mind these two types of errors when evaluating different judicial standards with regard to predatory pricing. Prior to 1975, the legal standard for judging whether some prices were predatory rested on the presence of market power and the intent to exploit that power in order to sustain or expand it.38 In implementing that standard, the focus was more on preventing “unfair” treatment of small firms than achieving economic efficiency. Contributing to the lack of emphasis on efficiency was the absence of an operational rule for determining when the observed behavior was best explained by the desire to preserve and expand market power in a manner harmful to consumers. Two key events radically changed how claims of predatory pricing were evaluated, and ultimately raised the bar for proving a Section 2 violation. The first event occurred in academia rather than judicial or legislative arenas. In 1975, two Harvard Law School professors, Phillip Areeda and Donald Turner, published an article in the Harvard Law Review in which they proposed a simple bright-line rule for assessing claims of predatory pricing using only price and cost data.39 The second event occurred in 1993 with Brooke Group v. Brown and Williamson Tobacco. In that decision, the Supreme Court laid out a two-tier test for predatory pricing that has proven difficult for plaintiffs to pass.
Areeda-Turner Rule and Other Single-Parameter Rules

When considering the Areeda-Turner rule, it is helpful to refer to the typical short-run cost curves shown in figure 8.6. Its creators argued that any price below marginal cost (MC) will cause the monopolist to lose money on some units of output, which is consistent with the predatory pricing strategy. Also, pricing below short-run marginal cost is well known to be economically inefficient (from a short-run perspective). For these reasons, they would classify such prices as predatory and therefore illegal. However, for quantities to the right of Q*, average cost (AC) is less than marginal cost. Because prices above average cost (but below marginal cost) would not exclude equally efficient rivals, Areeda and Turner would permit such prices (even though those quantities would be economically inefficient).
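
As a rough sketch of the classification logic just described, using the cost concepts from figure 8.6 with hypothetical numbers, a price is flagged as predatory only if it lies below marginal cost and is not rescued by being at or above average cost; the practical version of the rule, discussed below, substitutes average variable cost for marginal cost.

```python
def predatory_under_mc_rule(price, marginal_cost, average_cost):
    """Classify a price under the marginal-cost version of the rule sketched in
    the text: below MC is presumed predatory, unless the price is at or above
    average cost (in which case it would not exclude equally efficient rivals)."""
    return price < marginal_cost and price < average_cost

# Hypothetical numbers for output to the right of Q*, where AC lies below MC.
mc, ac = 12.0, 9.0
for p in (13.0, 10.0, 8.0):
    verdict = "predatory" if predatory_under_mc_rule(p, mc, ac) else "permitted"
    print(f"price {p:>5.1f}: {verdict}")
```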

Figure 8.6 Region Showing Predatory Prices under ATC Rule That Are Not Predatory under Areeda-Turner Rule

Given that estimating marginal cost is difficult, Areeda and Turner proposed to substitute average variable cost (AVC) for marginal cost. Their conclusion is as follows: 1. A price at or above reasonably anticipated average variable cost should be conclusively presumed lawful. 2. A price below reasonably anticipated average variable cost should be conclusively presumed unlawful. The Areeda-Turner rule has had a substantial impact on jurisprudence.40 Giving the courts a manageable method for evaluating claims of predatory pricing led to its rapid acceptance. The adoption of the rule also raised the bar for plaintiffs. Prior to its adoption, plaintiffs won around 75 percent of claims of predatory pricing. From 1975 to 1993, that success rate fell below 20 percent.41 One could conclude that many claims of predatory pricing were unable to stand up to a more rigorous examination of the facts. Having introduced an operational approach, the Areeda-Turner rule stimulated efforts to refine and improve it. Several economists independently proposed some version of what we term the average total cost (ATC) rule, because it was thought the Areeda-Turner rule would allow too many predators to escape prosecution. It was claimed that “a monopolist with abundant financial reserves could, under the [AreedaTurner] rule, drive less financially secure but equally efficient rivals from the market without fear of prosecution merely by pricing below ATC and above AVC.”42 The shaded region in figure 8.6 illustrates

how the Areeda-Turner rule differs from the ATC rule. An alternative to cost-based rules was put forth by Oliver Williamson.43 The idea is that since entry shifts a firm's demand curve inward, a legitimate competitive response should be to reduce quantity. Thus, an incumbent firm increasing quantity in response to entry is consistent with predation. The Output Restriction Rule states that in the period after entry occurs, the incumbent firm cannot increase quantity above the pre-entry level. One suggestion for the "period after entry" was twelve to eighteen months. Other rules were proposed, though all had serious weaknesses.
The Brooke case and the two-tier rule
The first attempt at a more nuanced approach was proposed by Paul Joskow and Alvin Klevorick. They suggested using a two-stage approach to identify predatory pricing.44 The first stage would require an examination of the market structure to determine whether it is likely to be susceptible to predation. For example, if entry barriers are low, the finding would be that predation is not likely to be a viable strategy, and the case would not be pursued. The second stage would use the type of cost-based or pricing behavior tests described above. A more sophisticated view of predation was expressed in the Supreme Court's decision in Matsushita v. Zenith in 1986.45 The case began in 1970 with charges made by American television set manufacturers that seven large Japanese firms were conspiring to drive them into bankruptcy. By setting monopoly prices in Japan, the seven firms were argued to be using those profits to subsidize "below-cost" U.S. prices. Ultimately, the Japanese were supposed to set monopoly prices in the United States, too. A careful economic analysis showed, however, that the purported strategy is, under reasonable assumptions, unprofitable for the Japanese firms.46 Even if it eventually resulted in a monopoly that would last forever, the Japanese firms would fail to break even, which raises serious doubts about the credibility of a claim of predation. In coming to that conclusion, the analysis assumed a ten-year predation period with the price of television sets being reduced by 38 percent due to predation. With an assumed demand elasticity of about 1.2, the post-predation price ranged from 119 to 138 percent of the "but for predation" price. Finally, a 12.2 percent opportunity cost of capital was assumed, equal to the average return on assets in Japan at that time. Under these assumptions, a lifetime of monopoly profits could not compensate for ten years of predatory losses. The Supreme Court observed that it is not enough simply to achieve monopoly power, as monopoly pricing may breed quick entry by new competitors eager to share in the excess profits. The success of any predatory scheme depends on maintaining monopoly power for long enough both to recoup the predator's losses and to harvest some additional gain.
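To see why recoupment can fail even when predation eventually yields a permanent monopoly, the sketch below works through the present-value arithmetic behind this kind of analysis. Only the ten-year predation period and the 12.2 percent discount rate come from the discussion above; the annual loss and gain figures are hypothetical and chosen purely for illustration.

r = 0.122                     # opportunity cost of capital
predation_years = 10
loss_per_year = 100           # hypothetical annual loss while prices are predatory
gain_per_year = 150           # hypothetical annual extra profit once rivals have exited

# Present value of the losses incurred during the predation period
pv_losses = sum(loss_per_year / (1 + r) ** t for t in range(1, predation_years + 1))

# Present value of a perpetuity of extra profit that begins after the predation period
pv_gains = (gain_per_year / r) / (1 + r) ** predation_years

print(round(pv_losses), round(pv_gains))    # roughly 560 versus 389
# Even a monopoly gain that lasts forever fails to repay ten years of discounted losses here.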

The Court then noted that "if predatory pricing conspiracies are generally unlikely to occur, they are especially so where, as here, the prospects of attaining monopoly power seem slight."47 The Matsushita (1986) decision laid the groundwork for a landmark Supreme Court decision in 1993. In Brooke Group v. Brown and Williamson Tobacco, the Court established a new standard for judging predatory pricing.48 Liggett (owned by Brooke) had introduced a generic, low-price cigarette in a simple black-and-white package. Brown and Williamson, a competitor to Liggett, responded with a similarly packaged generic cigarette and proceeded to continually undercut Liggett's price to the point that price was below average variable cost. Price remained below cost for eighteen months, after which Liggett raised its price. When evaluating this case, the Supreme Court found that the market structure was such that Brown and Williamson could not recoup its predatory losses. As Brown and Williamson had only a 12 percent

share of the cigarette market, the Court argued that recoupment would require coordinated pricing with other companies, which was perceived as unlikely. The Brooke decision laid out a clear two-tier policy for judging predatory pricing. First, price must be set below an appropriate measure of cost. Second, there must be a “dangerous probability” (under the Sherman Act) or “reasonable possibility” (under the Clayton Act) of subsequent recoupment of lost profits. Necessary conditions for recoupment are high market concentration, high entry barriers, and ample capacity to supply the demand created by the prey’s exit. To avoid conviction, it is sufficient for defendants to establish that price is not below cost (with the most common cost measure being average variable cost), and, even if that criterion is violated, to show that they could not recoup their losses so that predation is not a profitable strategy. It is also possible to win by just showing an inability to recoup. In a case in 1989, the court dismissed the claim because ease of entry and low market shares were such that the “market structure … made recoupment impossible.”49 The requirement that predatory pricing must be shown to be profitable—that is, a firm can recoup the cost of aggressive pricing—is a clear distinction from the standards applied to other forms of exclusionary behavior, where one must only show it harms competition. As with the Areeda-Turner rule, the addition of the recoupment condition resulted in a more stringent standard for plaintiffs. While plaintiffs won 17 percent of cases from 1982 to 1993, they did not win a single case in the six years after the decision. Even more striking is that all but three of thirty-nine reported decisions were dismissed or failed to survive summary judgment.50 This new standard set a high hurdle for proving a firm engaged in predatory pricing. Some recent cases Part of the failure of plaintiffs in predatory pricing cases reflects skepticism among judges about the plausibility of predatory pricing and the “judicial neglect of modern strategic theories of predatory pricing.”51 Exemplifying this point is a case in 2001 in which American Airlines was the alleged predator.52 American Airlines had responded aggressively to entry on routes out of its Dallas–Fort Worth (DFW) hub by low-cost carriers. After initially just matching the lower fares of these entrants on a limited basis, it expanded the availability of these lower fares and raised capacity by increasing the number of flights and using bigger planes. It even entered the DFW–Long Beach route, which it had previously abandoned but now was served by one of the low-cost carriers. Entering a market now that there are more competitors is inconsistent with models of competition but is well explained by models of predation. The cost of this strategy was estimated to be around $41 million, which one could not reasonably argue could be recouped in the affected markets. The DOJ then used a “reputation for predation” argument to contend that American Airlines could reap the benefits by deterring entry and expansion in other route markets. However, the district court was unconvinced, and the defendants won the case. 
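The Brooke two-tier standard can be summarized as a simple screen. The sketch below is a stylized simplification for exposition, not a statement of the legal test; the function name and inputs are ours.

def brooke_screen(price, avg_variable_cost, recoupment_plausible):
    # Tier 1: is price below an appropriate measure of cost (here, average variable cost)?
    if price >= avg_variable_cost:
        return "claim fails: price is not below cost"
    # Tier 2: is there a dangerous probability (or reasonable possibility) of recoupment?
    if not recoupment_plausible:
        return "claim fails: losses could not be recouped"
    return "both tiers satisfied: the predatory pricing claim can proceed"

# Facts roughly like Brooke: price below AVC, but recoupment judged implausible because
# Brown and Williamson held only about 12 percent of the cigarette market.
print(brooke_screen(price=0.90, avg_variable_cost=1.00, recoupment_plausible=False))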
Whether recoupment could occur in a market different than the one in which predatory pricing took place is a matter that has not been definitively resolved by the courts, though the DOJ believes it can be effectively argued.53 The Third Circuit Court recognized this possibility: “The predator needs to make a relatively small investment (below-cost prices in only a few markets) in order to reap a large reward (supracompetitive prices in many markets).”54 Other courts have not necessarily taken that view, however. An encouraging development for prosecuting predatory pricing was another airlines case a few years later.55 It involved the response of dominant carrier Northwest Airlines to the entry of low-cost carrier Spirit Airlines. The pre-entry scenario for this route had Northwest pricing between $189 and $411 and offering 8.5 flights a day. After Spirit entered and started gaining significant sales, Northwest dropped all fares to

$69, added two additional flights a day, and increased the size of its planes. It took only a few months for Spirit to exit these routes. The district court granted summary judgment for Northwest, but the Sixth Circuit Court reversed on appeal. The primary basis for their decision was that Northwest possessed significant market share and would be able to recoup its losses once Spirit exited, partly due to high barriers to entry. However, the legal case ended when Northwest entered into bankruptcy proceedings. One final case worth mentioning is Weyerhaeuser (2007).56 All of the cases mentioned thus far involved the claim that a dominant firm set low prices in the output market to impose losses on a rival firm and induce it to exit. The claimed predatory strategy in Weyerhaeuser was that the dominant firm was driving up the price of an essential input, which served to raise the cost and lower the output of a rival firm with again the objective of causing it to leave the market. The market was for hardwood lumber, and the essential input was sawlogs. Ross-Simmons operated a hardwood-lumber sawmill and alleged that Weyerhaeuser had bid up the prices of sawlogs with the intent of imposing negative profit margins on Ross-Simmons and inducing it to exit, which it did in 2001. In finding Weyerhaeuser guilty, the district court rejected the defendant’s argument that the claim should be judged according to the criteria laid out in Brooke Group. Weyerhaeuser appealed on those grounds, but the Ninth Circuit Court affirmed the jury’s verdict. The Ninth Circuit Court reasoned that Brooke Group “established a high liability standard for sell-side predatory pricing cases because of its concern that consumers benefit from lower prices and that cutting prices often fosters competition.” With Weyerhaeuser bidding up the input price, it was not clear that consumers would benefit, and thus the conservative standard of Brooke Group was seen as being inapplicable. (Of course, if Weyerhaeuser is paying a higher input price, then more sawlogs would be supplied, which could then result in more output and lower prices in the output market.) The Supreme Court took on the case in 2007 and reversed the decision: “The test this Court applied to predatory-pricing claims in Brooke Group also applies to predatory-bidding claims.” Refusal to Deal and the Essential Facilities Doctrine Antitrust laws are constructed on the premise that competition among firms is to be promoted. A tension with this premise emerges with a monopolization practice known as refusal to deal. The courts have identified circumstances under which a firm with market power has a duty to deal with a competitor, whether it is to supply a product—either an input or a complement to a rival’s product—or provide valuable information. When a firm refuses to deal and that refusal is deemed to have anticompetitive intent, the firm can be found in violation of Section 2 of the Sherman Act. Refusal to deal can arise under an array of circumstances.57 Two firms can provide products in the same market, and one firm refuses to engage in a joint venture with the other. Exemplifying this situation and representing a landmark decision by the Supreme Court in 1985 is Aspen Skiing Company v. Aspen Highlands Skiing.58 Aspen Skiing Company owned three of the four major mountain ski facilities in its geographic market, with Highlands owning the fourth. 
The two companies had sold a pass that allowed a skier to ski on any of the four mountains, with revenue being allocated on a pro rata basis. After several years, Aspen Skiing demanded a higher share of revenue. After Highlands refused to agree to it, Aspen Skiing discontinued the joint venture. Soon thereafter, Highland’s share of skiers fell from 15 to 11 percent, and it sought, without success, to reestablish its arrangement with Aspen Skiing. The Supreme Court found

that there were no valid business reasons for Aspen Skiing’s refusal to return to the joint venture and that its refusal was anticompetitive in intent. However, in a later decision, the Court stated that Aspen Skiing is “at or near the boundary of Section 2 liability.”59 A second type of refusal to deal is when a dominant firm refuses to deal with any customer or firm that is supplied by a competitor. This is an example of exclusive dealing. This type is covered in chapter 7 as part of vertical restraints; see the discussion there, especially concerning the Standard Fashion case. A third type is when a firm has a monopoly over an input and also competes in the final product market. Refusal to deal arises when the input monopolist denies the input to competitors in the downstream market. The economics of this situation are explored in the discussion of vertical integration in chapter 7. There are many refusal-to-deal cases of this sort, including the landmark 1992 case involving Kodak, which we explore later. Essential Facilities Doctrine From a legal perspective, an important type of refusal to deal is when it involves an essential facility. The essential facilities doctrine “imposes a duty on firms controlling effectively irreproducible bottleneck resources to share those resources with their competitors on reasonable terms.”60 At issue here is a vertical restraint. This doctrine originated in 1912 with the Terminal Railroad decision by the Supreme Court.61 A company controlled a key bridge at the St. Louis crossing of the Mississippi River. It was acquired by some railroads that then controlled all railway bridges and switching yards into and out of St. Louis. They then proceeded to deny access to competing railroads, which effectively shut the latter out of offering rail services to and through St. Louis. The court concluded that this act was an attempt to monopolize and mandated that rival railroads be given access to the bridge, terminals, and approaches. The conditions to apply the essential facilities doctrine were most clearly laid out in the case between AT&T and MCI.62 MCI was competing with AT&T in the long-distance telephone market, and the essential facility was AT&T’s local telephone network. For customers to use a non-AT&T long-distance supplier, they had first to connect to the local network, so as to access MCI’s network, and then connect with the local network of the person they were calling. The Seventh Circuit Court of Appeals found in favor of MCI and mandated access to the monopolist’s local network by competing long-distance providers. In its decision, the Court stated that to establish antitrust liability under the essential facilities doctrine, one must prove: (1) control of the essential facility by a monopolist; (2) a competitor’s inability practically or reasonably to duplicate the essential facility; (3) the denial of the use of the facility to a competitor; and (4) the feasibility of providing the facility to competitors.63

The doctrine requires monopoly control of the facility and its denial to rivals. The other two conditions warrant comment. If it is reasonable for a competitor to create its own facility, such as its own bridge in the case of Terminal Railroad, then monopolization has not occurred. There must then be some entry barriers, or it must be excessively costly to build. The fourth condition recognizes that legitimate business rationales may exist for refusing service. These four conditions are considered relatively stringent, as liability is rarely found under this doctrine. The essential facilities doctrine tends to arise in markets involving networks and natural monopolies. Whether it is a railroad network, a telecommunications network, an electric power network, or yet some other type, provision of a service may require the cooperation of a competitor that owns a portion of the

network. The real issue is whether that portion is a bottleneck, that is, whether there is no reasonable way around it. A bottleneck is more likely when it is a natural monopoly; that is, the least-cost way in which to supply that part of the network is to have a single firm. In the case of long-distance telephone service in the 1980s, it was not feasible to avoid the existing local telephone network, because it would not be profitable for a long-distance supplier to build its own local network. Another example of a network arose in Otter Tail Power Co. (1973).64 In that case, municipalities sought to establish their own municipal power distribution systems, but to do so required purchasing electric power. Otter Tail Power, the local regulated utility currently distributing power, chose to withhold access to its power transmission lines. Those transmission lines were the essential facility, as they were the only means to deliver power to the municipalities. Otter Tail Power was found in violation of Section 2 of the Sherman Act, because it refused to transmit competitive electricity over its lines and refused to sell wholesale power to the municipal systems. After the Telecommunications Act of 1996, local telephone companies were obligated to provide access to their networks to other companies providing telecommunication services, such as suppliers of long-distance service. Verizon was one of those local telephone companies and, in Trinko (2004),65 competitors to Verizon complained that Verizon provided access, but it was of inferior quality. The Supreme Court ruled in favor of Verizon and, in doing so, it expressed a long-held criticism related to the essential facilities doctrine: Excessive intervention might harm dynamic efficiency. Verizon's investment in its network was a vital component of dynamic competition, and restricting what it could do with it could stifle such investment and lead to inefficiencies. But the Court also recognized that the right to refuse to deal is not without limit if competition is to be promoted. In balancing these concerns, the Court did not see sufficient harm in the claims made by Verizon's competitors. While the Court did not repudiate the essential facilities doctrine, its decision opened questions about its future relevance.
Intellectual Property Rights
The basis for intellectual property rights (such as patents and copyrights) is to reward innovators with a temporary monopoly. A patent holder cannot, however, do as it pleases. With a legal monopoly does not come the right to further monopolize. Establishing how intellectual property rights and antitrust law mesh is an important issue for determining the returns to innovation and thereby the incentives to invest in research and development. That antitrust law places limits on the use of intellectual property rights is well established. The Supreme Court had an opportunity to reaffirm this position in a case involving Kodak, which we examine in greater detail below. Kodak refused to sell its patented replacement parts for its photocopiers to independent service operators who were competing with Kodak in servicing and repairing Kodak copiers.
In supporting the Ninth Circuit’s overruling of a lower court’s summary judgment in favor of Kodak, the Supreme Court stated that the Court has held many times that power gained through some natural or legal advantage such as a patent, copyright or business acumen can give rise to [antitrust] liability if “a seller exploits his dominant position in one market to expand his empire into the next.”66

Intel and Intergraph
Intellectual property rights were at the center of a highly publicized case against Intel in the late 1990s. By a three-to-one vote, the FTC chose to pursue an antitrust action, alleging that Intel had engaged in monopolization practices in violation of Section 5 of the FTC Act. Intel was the dominant producer of

microprocessors used in personal computers and workstations. Intergraph made computer workstations and, as a customer of Intel, received advance information about the next generation of microprocessors from Intel. This business relationship began to sour when Intergraph sued Intel for patent infringement. In a retaliatory act, Intel withheld the type of proprietary information that was essential to Intergraph. Intel engaged in similar practices against Compaq and Digital, two other computer manufacturers. On the eve of the trial in March 1999, the FTC settled with Intel. However, Intergraph continued with its litigation. Reaffirming the limits of the essential facility doctrine and refusal to deal as a monopolization practice, the court of appeals for the Federal Circuit ruled in Intel’s favor on the grounds that Intel was not a competitor to Intergraph in the market in which Intel was alleged to have monopoly power, and thus there could be no anticompetitive intent. Intergraph argues that the essential facility theory provides it with the entitlement, in view of its dependence on Intel microprocessors, to Intel’s technical assistance and other special customer benefits, because Intergraph needs those benefits in order to compete in its workstation market. However, precedent is quite clear that the essential facility theory does not depart from the need for a competitive relationship in order to incur Sherman Act liability and remedy.… There must be a market in which plaintiff and defendant compete, such that a monopolist extends its monopoly to the downstream market by refusing access to the facility it controls. Absent such a relevant market and competitive relationship, the essential facility theory does not support a Sherman Act violation. Intergraph also phrases Intel’s action in withholding access to its proprietary information, pre-release chip samples, and technical services as a “refusal to deal,” and thus illegal whether or not the criteria are met of an “essential facility.” However, it is well established that “in the absence of any purpose to create or maintain a monopoly, the [Sherman] act does not restrict the long recognized right of a trader or manufacturer engaged in an entirely private business, freely to exercise his own independent discretion as to parties with whom he will deal.” United States v. Colgate & Co., 250 U.S. 300, 307 (1919).67

Though this case fell outside the realm of the Sherman Act, the Court did note that mandated access to intellectual property may be imposed where the defendant has demonstrated anticompetitive intent in refusing to license access to it.
Reverse payments and pay-to-delay
In FTC v. Actavis (2013),68 the Supreme Court established an important precedent regarding the boundary between antitrust law and patent law. While the case concerned anticompetitive agreements (and thus comes under Section 1 of the Sherman Act and Section 5 of the FTC Act) rather than monopolization (Section 2 of the Sherman Act), we cover it here, because it pertains to the issue of intellectual property rights. The legal situation was generically described by the Court:

The details are as follows. Solvay Pharmaceuticals owned the patent on the drug AndroGel, which was approved by the U.S. Food & Drug Administration in 2000. A few years later, Actavis and Paddock Laboratories developed generic drugs modeled after AndroGel. This development spawned patent litigation, in which Actavis and Paddock claimed that their drugs did not violate Solvay's patent. Before those drugs were brought to market, the patent litigation suit was settled in 2006. As part of the settlement, Actavis agreed not to bring its product to market until 2015, which would be sixty-five months before Solvay's patent expired, and to

promote AndroGel to urologists. In exchange, Solvay was to pay Actavis an estimated $19 to $30 million annually for a period of nine years. Paddock agreed to similar terms. The FTC filed a lawsuit against Solvay, Actavis, and Paddock on the grounds that they had violated Section 5 of the FTC Act by agreeing “to share in Solvay’s monopoly profits, abandon their patent challenges, and refrain from launching their low-cost generic products to compete with AndroGel.”69 The FTC lost the case, and on appeal, the Eleventh Circuit affirmed the decision of the district court. The Supreme Court took on the case and ruled in favor of the FTC: The FTC alleges that in substance, the plaintiff agreed to pay the defendants many millions of dollars to stay out of its market, even though the defendants did not have any claim that the plaintiff was liable to them for damages. That form of settlement is unusual. And … there is reason for concern that settlements taking this form tend to have significant adverse effects on competition.70

The FTC argued that reverse-payment settlement agreements should be presumptively unlawful, which meant that the burden was on the defendant to argue for procompetitive effects. Instead, the Court decided that they are to be judged using the rule of reason, which meant that the plaintiff must first argue that a settlement is anticompetitive. If they succeed in doing so, then the defendant can counter by claiming mitigating procompetitive benefits. Soon after this decision, an article written by eminent antitrust scholars—both economists and lawyers— provided guidance to courts for applying the rule of reason in reverse-payment cases.71 The defining features of a reverse-payment case are that the patentee provides a payment to the alleged infringer, and the latter agrees to delay its entry into the market. This situation is relevant not only for patented drugs but also for any patented technology. The basis for their proposed approach is that the amount of payment can inform the court regarding whether the patentee is paying for reduced competition. As the payment settles a patent infringement suit, the patentee is at least willing to pay up to the expected litigation costs that it would avoid. Thus, if the payment is below expected litigation costs, then it does not suggest that the motives are anticompetitive but rather just a desire to minimize litigation expense. The settlement might also have the nonpatentee parties perform some services for the patentee, as was the case with Actavis promoting AndroGel. If the sum of the avoided litigation costs and the estimated value of those benefits to the patentee is more than the reverse payment, then again there is no anticompetitive concern. However, if the payment exceeds the sum of litigation cost and the benefits conferred by the alleged infringers, then the evidence supports an inference that the settlement is anticompetitive. In other words, when procompetitive factors are insufficient to rationalize the magnitude of the payment, then the court should infer that the settlement is intended to reduce competition and, therefore, is unlawful. A simple model will illuminate this point.72 Suppose the remaining patent life is T years. If patent litigation is pursued to its fruition, there is probability ρ that the patent is found to be infringed (so the patent owner maintains a monopoly and earns annual profit of πM), and probability 1− ρ that the patent is found not to be infringed, in which case entry occurs and the patent owner receives annual profit of πD (where D stands for “duopoly”). It is assumed that πM > πD, as competition results in a lower price-cost margin and fewer sales for the patent owner. Hence if the patent owner chooses to litigate, then its expected profit is T(ρπM + (1− ρ)πD) − L, where L is the cost of litigation. Alternatively, the patent owner can settle, which will delay entry by E years, where 0 ≤ E ≤ T, and require a reverse payment R. (If any benefits from services are provided by the other parties as part of the settlement, then R is net of those benefits.) In that case, the patent owner’s profit is EπM + (T − E)πD − R, as

it earns monopoly profit πM for E years and πD for the remaining T − E years. Settlement with reverse payment is then more profitable than litigation when

EπM + (T − E)πD − R > T(ρπM + (1 − ρ)πD) − L.  (8.7)
Inequality 8.7 can be rearranged to yield

(E − ρT)(πM − πD) > R − L.  (8.8)
According to inequality 8.8, settlement is more likely to be the better option when, for example, settlement delays entry more (E is greater) and the likelihood of the court deciding the patent was infringed is lower (ρ is smaller). Next consider the impact of settlement on consumer welfare. For this purpose, let CSM denote consumer surplus when the patent owner has a monopoly, and CSD denote consumer surplus with entry. We have CSD > CSM, as price is lower with entry. Consumers are worse off with a settlement when

ECSM + (T − E)CSD < T(ρCSM + (1 − ρ)CSD),
which can be simplified to E > ρT. Intuitively, a settlement harms consumers when the time until entry under settlement exceeds the (expected) time until entry under litigation. If the reverse payment exceeds litigation costs (so R − L > 0) and if the patent owner chose to make a reverse payment (which means it is the more profitable course of action), it follows from inequality 8.8 that E > ρT, which implies that settlement reduces consumer welfare. This analysis provides an operational approach to applying the rule of reason: If the patent owner made a reverse payment and if the reverse payment (net of any benefits) exceeds litigation costs, then settlement should be deemed unlawful because it results in consumer harm.
Kodak and Monopoly Power in Aftermarkets
Eastman Kodak v. Image Technical Services73 was a landmark decision that had the potential to spawn many antitrust cases and to influence firm behavior in a wide array of markets. The case concerned the aftermarket (in the form of service and repair) for micrographic equipment and high-volume copiers sold by Kodak.74 Originally, the aftermarket was served by both Kodak and independent service organizations (ISOs). To operate in the aftermarket, ISOs needed Kodak's patented replacement parts, which Kodak chose to sell to them. Soon after losing a service contract in a vicious price war with Image Technical Services, Kodak changed its policy regarding the sale of parts. Purchasers of parts now had to provide proof of ownership of the equipment, which meant that ISOs could no longer buy them. Seventeen of these ISOs brought suit on the grounds that Kodak's refusal to deal was intended to leverage its monopoly over parts to the service market. After a series of judicial events, which are summarized in table 8.1, the courts ultimately found Kodak guilty of monopolization practices.
Table 8.1 Timeline for Eastman Kodak Co. v. Image Technical Services, Inc.

1987  Seventeen independent service organizations (ISOs), including Image Technical Services, sue Eastman Kodak in District Court for using its monopoly over parts to monopolize the service market.
1988  The District Court of the Northern District of California grants summary judgment in favor of Kodak.
1990  After the plaintiffs appeal the district court's decision, the Ninth Circuit Court of Appeals overturns the summary judgment and remands the case for trial.
1992  After Kodak appeals the circuit court's decision, the Supreme Court agrees with the circuit court.
1995  Jury trial begins in the District Court of the Northern District of California. Kodak is found guilty of monopolization. Damages of $24 million are assessed, which are then trebled to $72 million.
1996  The court issues a ten-year injunction requiring Kodak to sell parts to ISOs at nondiscriminatory prices.
1997  After Kodak appeals the district court's decision, the Ninth Circuit Court of Appeals upholds liability but requires a new trial for calculation of damages.

Part of the importance of the Kodak decision is that it is relevant to any aftermarket. For the purposes of this decision, a market is an aftermarket if it has three elements: (1) a consumer is purchasing a system composed of multiple components; (2) the components are purchased at different points in time; and (3) to some degree a consumer is locked into buying a company’s system after having bought some of the components.75 In the case of Kodak, a “copier system” is composed of the copier and service, because if it breaks down and is not repaired, then the consumer receives no value. The second criterion is satisfied, because the copier is typically purchased before service is purchased.76 The essence of lock-in is that some of the cost incurred in association with the equipment cannot be recovered if the customer purchases another company’s equipment. For example, if employees are trained on a particular company’s equipment, then additional training costs are associated with purchasing new equipment. Or if the resale market is imperfect, then much of the original expenditure on equipment may not be recoverable. Lock-in is also described as the presence of switching costs, which are the costs associated with a customer’s changing the product being used. Lock-in gives a company an advantage over competitors, in that a customer will not leave unless the aftermarket price premium is sufficiently high so as to exceed switching costs. Aftermarkets include more than markets for service and repair of equipment. By this criteria, computer software is an aftermarket for hardware. The two form a system, are typically bought at different points in time, and there is lock-in (for example, previously purchased software may only work with that hardware). Another example is computer operating systems (primary good) and applications (aftermarket good). The Kodak decision is then relevant to many markets. Several antitrust issues are at work here. If Kodak has market power in the equipment market, to what extent can it leverage that power to the market for service? In this context, refusal to deal is then just like tying, in that Kodak is in essence requiring customers to buy both its equipment and its service. As the economics of tying is covered extensively in chapter 7, we will not review it here.77 Though the plaintiffs originally alleged that Kodak had tied service with its monopoly over parts, they dropped that claim at the end of the district court trial. The antitrust issue that draws our attention is whether a firm that has minimal market power in the primary market (for example, equipment) can have extensive market power in the aftermarket (for example, service). To analyze this question, consider the following simple scenario. Suppose there is one consumer type (or firms can price discriminate and are competing for each individual customer) who attaches value V(s) to the primary good when she anticipates purchasing s units in the aftermarket. For example, V(s) is the value of a copier when a customer anticipates buying s units of service. A consumer decides from which firm to buy the equipment and then how much service to purchase in the aftermarket. Due to lock-in, a consumer buys both goods from the same firm. There are two competitors, firm 1 and firm 2. For firm i, pi denotes the price of its equipment and ri the per unit price of service. Firms have identical primary and aftermarket goods with per unit cost of f for the primary good and c for the aftermarket good. 
Since firms’ products are identical, they do not have much

market power in the equipment market. Given that a consumer has purchased a primary good, let us derive the demand for the aftermarket service. A consumer wants to choose a level of service, s, to maximize her net surplus V(s) − rs. The value to purchasing s units, V(s), and its cost rs are depicted in figure 8.7a. Maximizing net surplus then means maximizing the vertical distance between these two curves. The optimal value is shown as s*(r). Note that it occurs where the slope of V(s) (that is, the marginal value of another unit of service) is equated to the slope of rs (that is, the price of service). If they are not equated (for example, the marginal value of service exceeds its price), then a consumer can increase net surplus by purchasing more, as each additional unit adds to net surplus an amount equal to its marginal value less the price.
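As a concrete illustration of this first-order condition, suppose the consumer's valuation takes the hypothetical quadratic form V(s) = 100s − s^2/2 (chosen only for illustration). Then the marginal value of service is 100 − s, and setting it equal to the service price r gives the downward-sloping demand s*(r) = 100 − r, as in the sketch below.

def service_demand(r):
    # s*(r): the service level at which marginal value (100 - s) equals the price r
    return max(100 - r, 0)

for r in (20, 40, 60):
    print(r, service_demand(r))    # demand for service falls as its price rises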

Figure 8.7 (a) Net Surplus to a Consumer from Equipment and Service (b) Change in Demand for Service in Response to a Change in the Price of Service

The consumer's optimal demand for service is s*(r). As price r rises, the curve rs rotates up, and the point at which the slope of V(s) is equated to the slope of rs falls. This is shown in figure 8.7b, where the price of service rises from r′ to r″, and the amount of service demanded declines from s*(r′) to s*(r″). In other words, the demand for service is downward sloping. Initially, let us assume that firms compete by simultaneously setting prices for equipment and service. These prices are contractually guaranteed. The consumer will purchase one unit of equipment (and some level of service) from the firm offering the highest net surplus. Suppose firm 1 expects firm 2 to offer equipment and service prices that give the consumer net surplus of W2. Firm 1 will then want to choose p1 and r1 so as to maximize profit, subject to getting the customer's business (assuming that it yields at least zero profit). We can then represent firm 1's problem as

choose p1 and r1 to maximize p1 + r1s*(r1) − cs*(r1) − f
subject to V(s*(r1)) − r1s*(r1) − p1 ≥ W2.  (8.10)
The expression p1 + r1s*(r1) − cs*(r1) − f is firm 1's profit from selling the primary good and s*(r1) aftermarket units. As V(s*(r1)) − r1s*(r1) − p1 is the net surplus to the consumer buying from firm 1, the constraint requires it to be at least as great as the net surplus from buying from firm 2.78 In describing the solution to firm 1's problem, the first step is to show that at the profit-maximizing prices, the constraint is binding; that is, the net surplus provided by firm 1 equals W2. Suppose it were not true, so that firm 1 offered net surplus in excess of W2. By raising the price of the primary good, p1, just a little bit, net surplus still exceeds W2, so that the consumer still buys from firm 1. But now firm 1's profit is higher by virtue of selling the primary good at a higher price. We conclude that, at a profit-maximizing solution, the net surplus from buying from firm 1 equals the net surplus offered by its rival:

V(s*(r1)) − r1s*(r1) − p1 = W2.
Solving this expression for p1, it follows that the price of the primary good must be set so that

p1 = V(s*(r1)) − r1s*(r1) − W2.  (8.11)
Substituting the right-hand side of equation 8.11 for p1 in expression 8.10, the firm's problem is now:

choose r1 to maximize V(s*(r1)) − r1s*(r1) − W2 + r1s*(r1) − cs*(r1) − f,
which simplifies to

choose r1 to maximize V(s*(r1)) − cs*(r1) − W2 − f.
As part of a profit-maximizing solution, firm 1 then chooses a service price to maximize the value of the good, V(s*(r1)), less the cost of producing it, cs*(r1). But that just means maximizing total surplus, and we know the solution to that problem is to price at marginal cost. Hence, the profit-maximizing service price is c. Using equation 8.11, the profit-maximizing prices are then

r1 = c and p1 = V(s*(c)) − cs*(c) − W2.  (8.12)
Intuitively, a firm wants to set the service price so as to maximize the total surplus between the customer and the firm. Though it results in no profit from selling service, the firm can extract profit through the equipment price. The final step is to show that competition results in the social optimum of price equaling marginal cost in both the primary market and aftermarket. To prove that marginal cost pricing is an equilibrium, suppose firm 2 prices at cost: r2 = c, and p2 = f. It follows that the net surplus from buying from firm 2 is W2 = V(s* (c)) − cs*(c) − f. Substituting this into equation 8.12, the optimal prices for firm 1 are r1 = c and p1 = f. In other words, if firm 2 prices at cost, then firm 1’s optimal response is to do so as well. By the symmetry of the model, it follows that firm 2 optimally prices at cost given that firm 1 does. Competition drives firms to offer the highest surplus they can while covering their costs. We conclude that if firms offer identical products and they can simultaneously commit to both equipment and service prices, equilibrium has marginal cost pricing. In the above setting, firms did not have market power in the primary market, because they had identical products. In this situation, the lack of market power resulted in no market power in the aftermarket and thus no basis for an antitrust case. In fact, this argument was originally made by Kodak’s attorneys: Very early in the fact-finding process, Kodak moved for “summary judgment.” Kodak argued that there was no allegation that it had market power in the equipment market. Kodak claimed that equipment customers had many alternatives available to them and made purchase decisions based on the total cost of ownership, and thus any attempt by Kodak to extract higher profits from maintenance customers would result in equipment customers’ taking their business elsewhere. Thus, because it could not have service market power, Kodak argued that as a matter of law it could not be found guilty of tying or monopolizing service markets.79
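A small numerical check of the commitment result is sketched below. It reuses the hypothetical valuation V(s) = 100s − s^2/2 and assumes illustrative costs c = 20 and f = 500 (none of these numbers are from the text). With firm 2 pricing at cost, a search over firm 1's service price confirms that its best response is r1 = c and p1 = f, yielding zero profit.

def V(s):
    return 100 * s - 0.5 * s ** 2      # hypothetical valuation of s units of service

def s_star(r):
    return max(100 - r, 0)             # consumer's service demand at price r

c, f = 20.0, 500.0                     # illustrative service and equipment costs
s2 = s_star(c)
W2 = V(s2) - c * s2 - f                # net surplus the consumer gets from firm 2 at cost

best = None
for r1 in [x / 10 for x in range(0, 1001)]:      # grid search over firm 1's service price
    s1 = s_star(r1)
    p1 = V(s1) - r1 * s1 - W2          # highest equipment price that still wins the customer
    profit = p1 + (r1 - c) * s1 - f
    if best is None or profit > best[0]:
        best = (round(profit, 2), r1, round(p1, 2))

print(best)    # (0.0, 20.0, 500.0): firm 1 also prices service at c and equipment at f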

Though the district court judge did grant summary judgment, the circuit court reversed the decision. A key assumption in the above analysis is that a firm commits to its service price at the time the customer is purchasing the equipment. However, it is often difficult to write a complete service contract. When there is no such commitment, which we will now assume, firms will not set the aftermarket price at marginal cost. There may then be a role for the antitrust authorities. To enrich the setting a bit, suppose firms face a series of customers and, for simplicity, there is just one customer each period. A firm sets its primary good price for the new customer and its service price for old customers, who are now in the aftermarket. Furthermore, we assume some lock-in. Let us show that pricing at marginal cost is not an equilibrium by showing that a firm’s optimal response to its rival pricing at cost is to set its service price above cost. Thus, firms pricing at cost is not an equilibrium. If a firm prices both equipment and service at cost, then a firm earns zero profit in each period. Now suppose it instead prices service above cost. As long as service is not so much above cost that an equipment owner would choose to buy new equipment (in other words, there is some lock-in), the firm can earn positive profit from selling service. Because its future profit stream cannot be negative (because the firm can assure itself of nonnegative profit by pricing at or above cost in the future), its total profit is now positive. Thus, it cannot be optimal for it to price at cost.80 The firm is able to generate profit through what is called installed base opportunism. Due to lock-in, the firm has some market power over those consumers who have purchased in the past (the installed base). This market power allows it to price service above cost. In the earlier scenario, the service price was set before lock-in, at which point the firm had no market power. The general point is that unless customers contract the aftermarket good’s price at the time of purchase of

the primary good, and assuming there is lock-in, a firm will have market power in the aftermarket even when it has no market power in the primary good market. Of course, setting a high service price can develop a reputation for “ripping consumers off.” This reputation is costly, because the price that new customers are willing to pay for equipment is reduced by the prospect of a higher service price. While this reputational effect constrains how high a firm will set its service price, it will still set the price above marginal cost. Though firms do have market power in the aftermarket, it is unclear what the role of antitrust law should be here. Simply requiring Kodak to supply ISOs is insufficient, as Kodak can perform a “price squeeze” by pricing its parts so high that ISOs are unable to profitably compete with Kodak. To avoid that outcome requires the judiciary to ensure reasonable pricing of parts, but then the judiciary would be performing a regulatory role, a task for which it is ill suited. Price Discrimination and the Robinson-Patman Act The Clayton Act provision that price discrimination is illegal when it substantially lessens competition or tends to create a monopoly was amended in 1936 by the Robinson-Patman Act. As noted by Judge Richard Posner, “The Robinson-Patman Act … is almost uniformly condemned by professional and academic opinion, legal and economic.”81 In recent years the FTC has brought very few price discrimination complaints, although actions are still brought by private parties. The Robinson-Patman Act was passed during the Great Depression with the intent to protect small, independent retailers from the newly emerging chains. Contrary to the current rationale for antitrust actions, it was designed to protect competitors, not competition. For example, the grocery chain A&P was found to be in violation for buying directly from suppliers and performing its own wholesaling function. The “discrimination” was that A&P was paying lower input prices for supplies than did independent stores, who bought through brokers or wholesalers. That A&P’s lower distribution costs benefited consumers with lower prices was not relevant. The economic definition of price discrimination—charging different customers prices that are not in proportion to marginal costs—is almost completely turned on its head in the enforcement of the RobinsonPatman Act. For example, in cases where some injury to competition has been found, the act has been interpreted as holding simple price differences to be illegal, regardless of cost differences. (Strictly, cost differences are a possible defense in the language of the law, but in practice it is extremely difficult to employ this defense. A second possible defense is that the discrimination was required to “meet competition.”) Theory of Price Discrimination There are three types of price discrimination: first-degree or perfect discrimination (in which the monopolist obtains all consumer surplus), second-degree discrimination (such as tying, which was covered in chapter 7), and third-degree discrimination (in which customers are partitioned into groups and each group is charged a different price, such as discounts for children and senior citizens at movie theaters). 
The distinction between second- and third-degree discrimination is that in second-degree discrimination all customers confront the same price schedule, but they pay different average prices, depending on their preferences;82 while in third-degree discrimination the seller separates customers into different groups based on some external characteristic (such as age) and has the groups face different prices. Crucial to enacting third-degree price discrimination is preventing resale. If a child’s tickets could be used by adults, no adult

would pay the higher price of an adult ticket. It is also necessary for the seller to possess some degree of market power; otherwise, prices cannot differ (inasmuch as price equals marginal cost in competitive markets). For the purpose of explaining the economics of third-degree discrimination, consider duPont's patented superstrength synthetic fiber Kevlar.83 To simplify, assume that Kevlar can only be used in undersea cables and tires. Because tire companies have the option of using low-cost substitutes, such as steel and fiberglass, the demand for Kevlar by tire companies is more elastic (at a given price) compared to the demand of cable companies, where Kevlar's technical characteristics make it far superior to the alternatives. Suppose the demand functions for Kevlar are
qc = 100 − pc (for use in undersea cables)
qt = 60 − pt (for use in tires).
For simplicity, let the marginal cost (MC) be constant at $20. This situation is shown in figure 8.8, where cable demand is shown on the right and tire demand on the left. Tire demand has been "flipped" to the left of the origin, and so we measure its output as increasing from right to left. This reorientation makes for a cleaner diagram.

Figure 8.8 Price Discrimination That Decreases Total Surplus

Consider the profit-maximizing solution when duPont has the option of charging different prices to tire and cable companies. Optimal quantities and prices are determined as follows. Set the marginal revenue (MRc) from cable companies equal to the marginal revenue (MRt) from tire companies, and then set both equal to MC. If the marginal revenues differed then duPont would find it profitable to shift a unit from the

market with the lower marginal revenue to the market with the higher marginal revenue. Doing so would raise revenue without changing cost, in which case profit would go up. The solution (MRc = MC at point N and MRt = MC at point G) is to sell 40 units to the cable market at a price of $60 and 20 units to the tire market at a price of $40. The profits in the two markets are found by computing revenues less costs, which gives us $1,600 from cable users and $400 from tire users for a total profit of $2,000.84 It is instructive to note that the higher price is charged in the market in which the price elasticity of demand is lower. At the equilibrium prices, the elasticity of demand is 1.5 in the cable market and 2 in the tire market.85 If the elasticities were not different, the prices would be the same and discrimination would not be profitable.86 Next, suppose discrimination is not allowed, so equilibrium has one price being charged to all consumers. Here it is necessary to aggregate the two demands by adding them horizontally to get the total demand curve. This curve has a kink at point A (see figure 8.8). Above point A, the total demand curve corresponds to cable demand only, because the tire users will pay no more than $60. The marginal revenue curve associated with the total demand curve, SMR,87 intersects MC at a total output of 60 (at point T). Hence the single price is $50, and, at this price, cable users buy 50 units and tire users buy 10 units. Profit is $1,800, which, as anticipated, is lower than when discrimination is permitted. The following table summarizes our results:

Discrimination               Single Price
pc = $60, qc = 40            pc = pt = $50
pt = $40, qt = 20            qc = 50, qt = 10
qc + qt = 60                 qc + qt = 60
Total profit = $2,000        Total profit = $1,800
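The table's entries, and the welfare comparison discussed next, can be reproduced with a few lines of arithmetic. The sketch below uses only the demands and the $20 marginal cost given in the text.

mc = 20

# Discrimination: set MR = MC in each market separately.
# Cable: 100 - 2q = 20 gives qc = 40, pc = 60.  Tire: 60 - 2q = 20 gives qt = 20, pt = 40.
profit_disc = (60 - mc) * 40 + (40 - mc) * 20          # 1600 + 400 = 2000

# Single price: aggregate demand is q = 160 - 2p below $60; SMR = 80 - q = 20 gives q = 60, p = 50.
p = 50
qc, qt = 100 - p, 60 - p                               # 50 and 10
profit_single = (p - mc) * (qc + qt)                   # 1800

# Moving from the single price to discrimination shifts 10 units from cable users to tire users.
cable_loss = (60 + 50) / 2 * 10 - mc * 10              # lost value less cost saving = 350
tire_gain = (50 + 40) / 2 * 10 - mc * 10               # added value less added cost = 250
print(profit_disc, profit_single, tire_gain - cable_loss)   # 2000 1800 -100.0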

The Kevlar example makes it clear that third-degree discrimination, the target of the Robinson-Patman Act, is profitable. The next question to examine is the efficiency of price discrimination. For our Kevlar example, we can compare total surplus under the two scenarios: discrimination and single price. It turns out that in situations where the demand curves are linear and where both demand groups buy positive amounts in the single-price case, total surplus always falls when discrimination is allowed. Such is the case in our example. In figure 8.8 it is easy to see this result graphically. In the cable market, the area of the shaded trapezoid ANBC gives the loss in total surplus when moving from the single price of $50 to the discriminatory price of $60. The total value88 of Kevlar to cable users is reduced by the area under the demand curve between outputs of 40 and 50. Subtracting the cost saving from reducing output (the area under MC and between these two outputs) yields the trapezoid ANBC, which has area of $350. Similarly, the area of shaded trapezoid HGFE in the tire market gives the gain in total surplus when moving from the single price of $50 to the lower discriminatory price of $40. This area equals $250. Hence the net change in total surplus is a gain in the tire market of $250 minus a loss in the cable market of $350, or a net loss of $100. For cases like this example (with linear demands), it is always true that the output changes in the two markets are exactly equal and opposite in direction—that is, total output is unchanged. Hence the two trapezoids are equal in width, but the loss trapezoid is taller. To help explain this result, note that total output is unchanged; all that happens is a reallocation of output

from high-value users (at the margin) to low-value users. In the single-price case, all users end up with the same marginal valuation; under discrimination, the marginal valuation of cable users ($60) exceeds that of tire users ($40). This "gap" is the inefficiency that produces the net loss. Trading between cable users and tire users, which is not allowed by duPont, would make both groups better off. (It is relevant to note that duPont's enforcement of different prices brought forth a claim from a customer that it was an antitrust violation.) As noted, we have been examining a rather special case. To go beyond it, suppose that the demand for Kevlar by cable companies is unchanged, as is the marginal cost. However, now assume that the demand by tire users is smaller than before. In particular, let

qt = 40 − pt (for use in tires).
Carrying through the same analysis as before, we find that in figure 8.9, the discrimination solution is where MRc = MC and MRt = MC, so pc = $60 and pt = $30. Now, however, the single-price equilibrium yields a price of $60 from SMR = MC.89 Given that the tire users will pay at most $40, the tire users will not buy any Kevlar and instead will use fiberglass and steel in their tires. This situation is therefore different from the case in figure 8.8, where the tire users did buy positive amounts of Kevlar when a single price was used.

Figure 8.9 Price Discrimination That Increases Total Surplus

The following table summarizes these results.

Discrimination               Single Price
pc = $60, qc = 40            pc = pt = $60, qc = 40
pt = $30, qt = 10            qt = 0
qc + qt = 50                 qc + qt = 40
Total profit = $1,700        Total profit = $1,600
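A parallel calculation for this second case, using the smaller tire demand qt = 40 − pt together with the unchanged cable demand and $20 marginal cost, confirms the table and shows why total surplus now rises with discrimination.

mc = 20

# Discrimination: cable is unchanged (pc = 60, qc = 40); tire: 40 - 2q = 20 gives qt = 10, pt = 30.
profit_disc = (60 - mc) * 40 + (30 - mc) * 10          # 1600 + 100 = 1700

# Single price: serving tire users would require a price below $40, which sacrifices too much
# margin on cable sales, so duPont sells only to cable users at $60.
profit_single = (60 - mc) * 40                          # 1600

# Discrimination leaves the cable market unchanged and adds tire sales, so surplus rises.
producer_gain = (30 - mc) * 10                          # 100 gained by duPont
consumer_gain = 0.5 * 10 * (40 - 30)                    # 50 gained by tire users
print(profit_disc, profit_single, producer_gain + consumer_gain)   # 1700 1600 150.0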

As before, discrimination yields higher profits ($1,700 versus $1,600). Now, however, total output increases under discrimination. This increase is the output purchased by tire users who were not in the market when the single price was used. As there has been no change in the cable market, the only effect of price discrimination is that it has permitted purchases of Kevlar by tire users. The interesting result is that welfare is higher with discrimination. Not only is total surplus higher (as indicated by the shaded area in figure 8.9), but also all agents are better off. Notice that no one is harmed—the cable market is unchanged— and both duPont and tire users gain. duPont gains by a profit increase of $100 (the square area of the shaded region), and the tire users gain by $50 (the triangular portion of the shaded area). Finally, we should note that in cases of nonlinear demand curves, price discrimination can either raise or lower total surplus. It is very difficult to provide any general conditions in this situation. It is true, however, that total output must increase if discrimination is to improve welfare.90 Cases Robinson-Patman cases generally involve either primary-line discrimination or secondary-line discrimination. Primary-line discrimination refers to situations in which the seller practicing discrimination injures its own rivals. Predatory pricing is an extreme example, since it requires prices set below costs. Less severe discrimination that harms one’s rivals without being predatory also qualifies as primary-line discrimination. Secondary-line discrimination occurs when injury to competition takes place in the buyers’ market. The idea is that buyers who get preferentially low prices will have an advantage over their rivals. We consider two famous cases, one in each category. The primary-line case is a private case known as Utah Pie.91 A small frozen dessert pie manufacturer in Salt Lake City, Utah Pie Company, brought the suit against three large rivals: Continental Baking, Carnation, and Pet Milk. The three large rivals had manufacturing facilities in California but not in Utah. Hence, when Utah Pie opened its frozen pie plant in Utah, it obtained a significant cost advantage over its larger rivals. Utah Pie had market-share percentages of 66.5 in 1958, 34.3 in 1959, 45.5 in 1960, and 45.3 in 1961. Also, the market was expanding rapidly over this period, and Utah Pie’s actual sales steadily rose as well. The Supreme Court noted in its opinion that Continental, for example, set a price in Salt Lake City that was “less than its direct cost plus an allocation for overhead.” Also, the rivals tended to charge less in Utah than they did in other locations. According to the Court, At times Utah Pie was a leader in moving the general level of prices down, and at other times each of the respondents also bore responsibility for the downward pressure on the price structure. We believe that the Act reaches price discrimination that erodes competition as much as it does price discrimination that is intended to have immediate destructive impact.

Utah Pie won its case, even though the decision has been regarded by most scholars as a judicial mistake. Justice Stewart, in a dissenting opinion, put the argument clearly when he said that the market should be viewed as more competitive in 1961 than it was in 1958, not less competitive, as the Court held. One reason is that the dominant firm, Utah Pie, had a 66.5 percent share of the market in 1958 but only 45.3 percent in 1961. According to Justice Stewart, “the Court has fallen into the error of reading the Robinson-Patman Act as protecting competitors, instead of competition.”

The 1948 Morton Salt case is a landmark case involving secondary-line discrimination.92 Morton Salt sold its Blue Label salt to wholesalers and to chain stores according to the following table of discounts:

Amount                                                      Price per Case
Less-than-carload purchases                                 $1.60
Carload purchases                                           $1.50
5,000-case purchases in any consecutive twelve months       $1.40
50,000-case purchases in any consecutive twelve months      $1.35

Only five customers ever bought sufficient quantities of salt to obtain the $1.35 per case price. These were large chain stores, and the Court was concerned that small independent food stores could not compete with the chains, because they had to pay, say, $1.60 per case. As the Court put it, “Congress was especially concerned with protecting small businesses which were unable to buy in quantities, such as the merchants here who purchased in less-than-carload lots.” Morton Salt defended itself by claiming that the discounts were available to all customers. However, the Court rejected this claim, saying that “theoretically, these discounts are available to all, but functionally they are not.”

In conclusion, although economic analysis reveals that there are cases in which prohibiting price discrimination is socially beneficial, in many cases it should not be prohibited. The Robinson-Patman Act appears to be a poor instrument for distinguishing between the two. As has been claimed for predatory pricing, the attempt to prohibit certain harmful practices under the Robinson-Patman Act may be, on balance, more harmful than doing nothing, because of the chilling effect on socially desirable pricing. Former judge Robert Bork put the point colorfully: “One often hears of the baseball player who, although a weak hitter, was also a poor fielder. Robinson-Patman is a little like that. Although it does not prevent much price discrimination, at least it has stifled a great deal of competition.”93

Summary

The law regarding monopolization is one of the most difficult antitrust laws to apply. In part this difficulty stems from the number of ways (some good, some bad) of winning and maintaining market dominance—superior product, better management, scale economies, predatory tactics, exclusionary contracts, and on and on. In this chapter we have reviewed the evolution of antitrust case law regarding monopolization. Initially the courts required, in addition to a large market share, evidence of abusive or predatory acts to show intent. Standard Oil is a case in point. Then, roughly from 1945 to 1970, the courts did not require evidence of abusive acts to infer intent, with United Shoe Machinery being a representative case. Since that time, the courts have been willing to allow more aggressive practices by dominant firms without inferring intent to monopolize. The 1993 Brooke decision meant more stringent criteria for arguing that a firm has engaged in predatory pricing. However, an aberration from this trend is the 1992 Kodak decision by the Supreme Court, which could have opened many aftermarkets up to antitrust litigation.

The case law over the past century has resulted in the following principles regarding the establishment of a violation of Section 2 of the Sherman Act. To begin, the firm must have monopoly power, which typically involves defining the market and arguing that entry barriers allow it to maintain that monopoly.

With that prerequisite satisfied, a dominant firm must have acted to harm the competitive process as opposed to harming competitors. If the plaintiffs have successfully shown that anticompetitive harm occurred, the defendants can offer a procompetitive justification for their actions. If that justification withstands rebuttal by the plaintiffs, then the plaintiffs must argue that the anticompetitive effects outweigh the procompetitive effects. As dictated by the rule of reason, only then will a firm be found guilty of monopolization.

Questions and Problems

1. If a large firm is found to possess monopoly power, what else is needed to find the firm guilty of monopolization? Why is possessing monopoly power insufficient for illegality?

2. In chapter 6 the merger guidelines were described. In particular, the relevant market was defined as consisting of all products and firms such that a hypothetical cartel could raise its price by 5 percent and not have to rescind it because of unprofitability. In light of the duPont Cellophane case, how might this rule be modified to avoid the “error” of defining the market to be too broad?

3. Which of the three market definitions considered by Judge Hand in the Alcoa case do you think is most defensible?

4. IBM redesigned its disk drive 2319A to make it more difficult for rivals to interface with IBM computer systems. The Antitrust Division of the DOJ regarded this as anticompetitive. Comment.

5. A three-firm Cournot industry has a demand curve of Q = 20 − P. Each firm has an annual total cost of 4q + 12. Find the equilibrium price, output of each firm, and profit per firm.
a. The management of firm 1 is considering a strategy of predatory pricing, since management believes monopoly profits would far exceed current profits. What are the potential monopoly profits in this industry?
b. Ignoring any antitrust concerns, management wants at least to do an investment analysis of predatory pricing. They decide that to drive the other two firms out of the market, a price of $2 per unit is needed, and that it must be maintained for at least three years. Other assumptions were put forth, but the consensus was the $2 price, three-year assumption. Given that a six-year time horizon and a 14 percent cost of capital are standard assumptions for firm 1 in its investment decisions, is predatory pricing profitable? Present value formulas show that at 14 percent interest, a stream of $1 receipts for three years has a present value of $2.322 and a stream of $1 receipts from year 4 to year 6 has a present value of $1.567.
c. Assume that whatever the given numbers show, the management could choose numbers that make the investment profitable. If it did appear profitable (say, net present value is $10), should predation be pursued? What other considerations should the management want further information about?

6. A new chemical has been discovered that can be produced at a constant marginal cost of $10 by its patent holder, Johnson, Inc. Two industries, A and B, find the chemical, Cloreen, to be useful in their production processes. Industry A has a demand for Cloreen of qa = 100 − pa. Industry B’s demand is qb = 60 − pb.
a. If Johnson can prevent resales between industries A and B, what prices will it charge A and B? It can be assumed that the patent gives Johnson monopoly power. What quantities will be sold to the two industries, and what will be Johnson’s profit?
b. Assume now that it is illegal for Johnson to charge A and B different prices. What price will Johnson now charge, and what will its profit be? What is Johnson’s quantity sold?
c. Is total economic surplus higher in part a or in part b? What is the difference in total surplus in the two cases?
d. Assume now that the demand for Cloreen by industry B is less than before: qb = 40 − pb. Aside from this change, the facts are as given previously. Answer parts a, b, and c, given the changed demand by industry B.

7. Assume that you are to decide whether a third-degree discrimination situation is in society’s interest. Assume that under the “no discrimination allowed” case, one group of users (group B) of a new product finds its price too high to buy any of the product. But under discrimination, they would buy a positive amount at the lower price offered to them. Further, assume that the price charged to the original group of buyers (group A) remains unchanged after discrimination is permitted.
a. The legal permission to price discriminate would benefit the monopolist, but the original group of buyers (group A) would be harmed. Do you agree or disagree? Why?
b. The legal permission to discriminate would benefit group B buyers and the monopolist, and group A buyers would be unaffected, so the permission to discriminate is a Pareto superior move compared with the no-discrimination situation. Do you agree or disagree? Why?
c. A Pareto superior move will always pass the “increase in total economic surplus” test, but the reverse is not true. True or false? Why?

8. Justice Antonin Scalia, in his dissent to the 1992 decision Eastman Kodak Co. v. Image Technical Services, Inc., made the observation that if Kodak had required consumers to purchase a lifetime parts and service contract with each machine, the tie-in between service and parts would not have been a violation of Section 2 of the Sherman Act. What is the argument behind this claim?

9. Firms A and B are currently embroiled in litigation regarding whether firm B’s newly developed product violates the patent owned by firm A. Firm A currently has ten years left on its patent. Firm A’s attorneys have projected litigation expenses to be $10 million and believe there is a 60 percent chance of losing the case. In that event, firm B would immediately enter the market, which would cause firm A’s annual profit to fall from $1.5 million (when firm A is a monopolist) to $500,000 (when firms A and B are a duopoly). Firm B proposes the following deal to firm A: If firm A pays firm B an amount of $12 million, firm B will delay entering the market for eight years. Should firm A accept the deal or continue with litigation? Should the competition authority prohibit the deal on the grounds that it will harm consumers?

Notes

1. United States v. Aluminum Co. of America, 148 F. 2d 416 (2d Cir. 1945).
2. Berkey Photo v. Eastman Kodak Co., 603 F. 2d 263 (2d Cir. 1979).
3. United States v. Grinnell Corp., 384 U.S. 563 (1966).
4. See Abba Lerner, “The Concept of Monopoly and the Measurement of Monopoly Power,” Review of Economic Studies 1 (June 1934): 157–175.
5. This result was developed in chapter 4, equation 4.9, for the case of n firms. By setting n = 1, the monopoly result here is obtained.
6. It is useful to observe that L might be large but involve very little economic activity. For the purpose of deciding whether the government should bring antitrust charges, a better measure of monopoly power is probably the deadweight loss that it causes. It can be shown, for example, that for a monopolist with constant marginal cost and linear demand, the deadweight loss equals one-half of L times the monopolist’s revenue. Hence it is clear that the deadweight loss would be small in absolute magnitude when the revenue involved is small, regardless of L. See Richard Schmalensee, “Another Look at Market Power,” Harvard Law Review 95 (June 1982): 1789–1816.
7. Richard Schmalensee, “On the Use of Economic Models in Antitrust: The ReaLemon Case,” University of Pennsylvania Law Review (April 1979): 1009.
8. United States v. E. I. duPont de Nemours and Co., 351 U.S. 377 (1956).
9. See George W. Stocking and Willard F. Mueller, “The Cellophane Case and the New Competition,” American Economic Review 45 (March 1955): 29–63.
10. Franklin M. Fisher, John J. McGowan, and Joen E. Greenwood, Folded, Spindled, and Mutilated: Economic Analysis and U.S. v. IBM (Cambridge, MA: MIT Press, 1983), p. 99.
11. United States v. Aluminum Co. of America, 148 F. 2d 416 (2d Cir. 1945).
12. Douglas F. Greer, Business, Government, and Society (New York: Macmillan, 1983).
13. United States v. United Shoe Machinery Corp., 110 F. Supp. 295 (D. Mass. 1953).
14. Standard Oil Co. of New Jersey v. United States, 221 U.S. 1 (1911).

15. United States v. American Tobacco Co., 221 U.S. 106 (1911).
16. United States v. United States Steel Corp., 251 U.S. 417 (1920).
17. United States v. Aluminum Co. of America, 148 F. 2d 416 (2d Cir. 1945).
18. A quorum of the Supreme Court was not available, because several justices disqualified themselves due to previous connection with the litigation.
19. Quoted in Leonard W. Weiss, Economics and American Industry (New York: John Wiley & Sons, 1961), p. 203.
20. Chapter 5 discusses strategic investment in capacity so as to deter entry.
21. United States v. United Shoe Machinery Corp., 110 F. Supp. 295 (D. Mass. 1953).
22. The breakfast cereals case was discussed in chapter 5.
23. Berkey Photo, Inc. v. Eastman Kodak Co., 603 F. 2d 263 (2d Cir. 1979).
24. For the details of this case, see Franklin M. Fisher, John J. McGowan, and Joen E. Greenwood, Folded, Spindled, and Mutilated (Cambridge, MA: MIT Press, 1983). For an opposing view, see Leonard W. Weiss, “The Structure-Conduct-Performance Paradigm and Antitrust,” University of Pennsylvania Law Review 127 (April 1979): 1104–1140.
25. A reference for this section is Joseph F. Brodley, Patrick Bolton, and Michael H. Riordan, “Predatory Pricing: Strategic Theory and Legal Policy,” Georgetown Law Journal 88 (August 2000): 2239–2330.
26. Randal C. Picker, “Twombly, Leegin, and the Reshaping of Antitrust,” Supreme Court Review 2007 (2007): 161–203, p. 162.
27. One study examined forty litigated cases of predatory pricing during 1940–1982 and concluded that predatory pricing was present in twenty-seven of them; see Richard O. Zerbe Jr. and Donald S. Cooper, “An Empirical and Theoretical Comparison of Alternative Predation Rules,” Texas Law Review 61 (December 1982): 655–715.
28. John S. McGee, “Predatory Price Cutting: The Standard Oil (N.J.) Case,” Journal of Law and Economics 1 (October 1958): 137–169.
29. See Patrick Bolton and David Scharfstein, “A Theory of Predation Based on Agency Problems in Financial Contracting,” American Economic Review 80 (March 1990): 93–106.
30. Garth Saloner, “Predation, Mergers, and Incomplete Information,” RAND Journal of Economics 18 (Summer 1987): 165–186. The claim was made, but not rigorously validated, in Bernard S. Yamey, “Predatory Price-Cutting: Notes and Comments,” Journal of Law and Economics 15 (April 1972): 129–142.
31. Malcolm R. Burns, “Predatory Pricing and the Acquisition Cost of Competitors,” Journal of Political Economy 94 (April 1986): 266–296.
32. For an argument along these lines, see Drew Fudenberg and Jean Tirole, “A ‘Signal-Jamming’ Theory of Predation,” RAND Journal of Economics 17 (Autumn 1986): 366–376.
33. In the parlance of game theory, price L strictly dominates M when the incumbent firm is low cost, and price M strictly dominates L when the incumbent firm is moderate or high cost.
34. 1 − ρl − ρm is the probability that it is high cost, since probabilities must sum to one.
35. If you are familiar with Bayes’s rule, then β = ρl/(ρl + ρm). For example, if all three cost types are initially equally likely, ρl = 1/3 and ρm = 1/3, then β = 1/2.
36. Note that both inequalities 8.3 and 8.4 can hold simultaneously if ρl + ρm is sufficiently small and ρl is sufficiently large relative to ρm.
37. Whether entry deterrence reduces welfare depends on whether entry would have been welfare-enhancing, and that depends on whether the rise in consumer surplus from a lower price compensates for the additional entry cost.
38. Joseph F. Brodley and George A. Hay, “Predatory Pricing: Competing Economic Theories and the Evolution of Legal Standards,” Cornell Law Review 66 (1981): 738–803, pp. 755–756.
39. Phillip Areeda and Donald F. Turner, “Predatory Pricing and Related Practices under Section 2 of the Sherman Act,” Harvard Law Review 88 (February 1975): 697–733.
40. For a strong economic argument favoring the Areeda-Turner rule, see Barry Wright Corporation v. ITT Grinnell Corporation et al., 724 F. 2d 227 (1st Cir. 1983), written by Judge Stephen Breyer.

41. These facts were cited in Nicola Giocoli, “Games Judges Don’t Play: Predatory Pricing and Strategic Reasoning, U.S. Antitrust,” Supreme Court Economic Review 21 (2013): 271–330.
42. Greer, Business, Government, and Society, p. 166.
43. Oliver E. Williamson, “Predatory Pricing: A Strategic and Welfare Analysis,” Yale Law Journal 87 (December 1977): 284–334.
44. Paul L. Joskow and Alvin K. Klevorick, “A Framework for Analyzing Predatory Pricing Policy,” Yale Law Journal 89 (December 1979): 213–270.
45. Matsushita Electric Industrial Co., Ltd. v. Zenith Radio Corp., 475 U.S. 574 (1986).
46. Kenneth G. Elzinga, “Collusive Predation: Matsushita v. Zenith (1986),” in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution, 3rd ed. (New York: Oxford University Press, 1999), pp. 220–238.
47. Matsushita Electric Industrial Co., Ltd. v. Zenith Radio Corp., 475 U.S. 574 (1986).
48. Brooke Group v. Brown and Williamson Tobacco, 113 S.Ct. 2578 (1993).
49. AA Poultry Farms, Inc. v. Rose Acres Farms, Inc., 881 F.2d 1396, 1403 (7th Cir. 1989).
50. Brodley et al., “Predatory Pricing.” Summary judgment is granted when, even if all disputed facts are in favor of one party, that party cannot, as a matter of law, win the case.
51. Ibid., p. 2259.
52. For a review of this case, see Aaron E. Edlin and Joseph Farrell, “The American Airlines Case: A Chance to Clarify Predation Policy (2001),” in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution: Economics, Competition, and Policy, 4th ed. (New York: Oxford University Press, 2004), pp. 502–527.
53. “Response of U.S. Federal Trade Commission and U.S. Department of Justice to Questionnaire—Annex to the Report on Predatory Pricing Prepared by The Unilateral Conduct Working Group,” paper presented at the 7th Annual Conference of the International Competition Network, Kyoto, April 14–16, 2008. Available at www.internationalcompetitionnetwork.org/uploads/questionnaires/uc%20pp/us%20response%20predatory%20pricing.pdf (accessed November 9, 2016).
54. Advo, Inc. v. Philadelphia Newspapers, Inc., 51 F.3d 1191, 1196 n.4 (3d Cir. 1995).
55. Spirit Airlines, Inc. v. Northwest Airlines, Inc., 431 F.3d 917 (6th Cir. 2005).
56. Weyerhaeuser Company v. Ross-Simmons Hardwood Lumber Company, 549 U.S. 312 (2007).
57. For a more complete discussion, see Dennis Carlton, “A General Analysis of Exclusionary Conduct and Refusal to Deal: Why Aspen and Kodak are Misguided,” Antitrust Law Journal 68 (2001): 659–683.
58. Aspen Skiing Company v. Aspen Highlands Skiing Corporation, 472 U.S. 585 (1985).
59. Verizon Communications, Inc. v. Law Offices of Curtis V. Trinko LLP, 02-682 540 U.S. 398 (2004).
60. Daniel F. Spulber and Christopher S. Yoo, “Mandating Access to Telecom and the Internet: The Hidden Side of Trinko,” Columbia Law Review 107 (2007): 1822–1907, p. 1829. Another useful reference is Robert Pitofsky, Donna Patterson, and Jonathan Hooks, “The Essential Facilities Doctrine under U.S. Antitrust Law,” Antitrust Law Journal 70 (2002): 443–462.
61. United States v. Terminal Railroad Ass’n, 224 U.S. 383 (1912).
62. MCI Communications Co. v. AT&T, 708 F.2d 1081 (7th Cir. 1982).
63. Ibid.
64. Otter Tail Power Co. v. United States, 410 U.S. 366 (1973).
65. Verizon Communications, Inc. v. Law Offices of Curtis V. Trinko LLP, 02-682 540 U.S. 398 (2004).
66. Eastman Kodak Co. v. Image Technical Services, Inc., 504 U.S. 451 (1992).
67. Intergraph Corp. v. Intel Corp., 195 F. 3d (Fed. Cir. 1999).
68. Federal Trade Commission v. Actavis, Inc., 133 S. Ct. 2223 (2013).
69. Watson Pharmaceuticals, Inc., et al. (FTC v. Actavis), FTC Matter/File Number: 071 0060, February 2, 2009.
70. FTC v. Actavis, Inc., 570 U.S. 133 (2013).
71. Aaron Edlin, Scott Hemphill, Herbert Hovenkamp, and Carl Shapiro, “Activating Actavis,” Antitrust 28 (Fall 2013): 16–23.

72. The ensuing analysis is from Edlin et al., “Activating Actavis.”
73. Eastman Kodak Co. v. Image Technical Services, Inc., 504 U.S. 451 (1992).
74. References for this discussion include Carl Shapiro, “Aftermarkets and Consumer Welfare: Making Sense of Kodak,” Antitrust Law Journal 63 (1995): 483–512; and Jeffrey K. MacKie-Mason and John Metzler, “Links between Markets and Aftermarkets: Kodak (1997),” in Kwoka and White, eds., The Antitrust Revolution, 4th ed., pp. 428–452.
75. Shapiro, “Aftermarkets.”
76. Although service comes after the purchase of the equipment, equipment and service could, in principle, be purchased simultaneously if the company offers, at the time of purchase, a completely specified service contract, such as an unconditional warranty.
77. The price discrimination rationale for tying for Kodak-type aftermarkets is explored in Zhiqi Chen and Thomas W. Ross, “Refusal to Deal, Price Discrimination, and Independent Service Organizations,” Journal of Economics and Management Strategy 2 (Winter 1993): 593–614.
78. If the net surpluses are equal, then the consumer is indifferent. To ensure the sale, a firm would then want to provide a little more net surplus than its rival. Making this change would only marginally change the results and would not affect any of our conclusions.
79. MacKie-Mason and Metzler, “Links between Markets and Aftermarkets,” p. 432.
80. This argument is from Severin Borenstein, Jeffrey K. MacKie-Mason, and Janet S. Netz, “Exercising Market Power in Proprietary Aftermarkets,” Journal of Economics and Management Strategy 9 (Summer 2000): 157–188.
81. Richard A. Posner, The Robinson-Patman Act: Federal Regulation of Price Differences (Washington, DC: American Enterprise Institute, 1976), p. 57.
82. For example, in the tying illustration discussed in chapter 7, the two customers faced the same price schedule (a copying machine price of $2,812.50 and a price per package of copying paper of $25); however, one customer paid an average price per “unit of copying services” of $62.50, compared with only $43.75 for the other customer. The amounts differed because the two customers’ demands for copying services differed.
83. This example is based on Jerry A. Hausman and Jeffrey K. MacKie-Mason, “Price Discrimination and Patent Policy,” RAND Journal of Economics 19 (Summer 1988): 253–265.
84. Profit from cable users = pc qc − (MC)qc = (60)(40) − (20)(40) = $1,600; profit from tire users = pt qt − (MC)qt = (40)(20) − (20)(20) = $400.
85. The elasticity of demand is −(dq/dp)(p/q), so substituting pc = 60, qc = 40, and dq/dp = −1, we get 1.5. Similarly, in the tire market the elasticity is 2.
86. This point can be understood by recalling the standard formula MR = p(1 − 1/η), where η is the absolute value of the elasticity. Hence, if MRc = MRt, then pc(1 − 1/ηc) = pt(1 − 1/ηt), and if ηc = ηt, then pc = pt.
87. The curve SMR is actually MNPTR, not just PTR in figure 8.8. The curve has a discontinuous jump upward at point N where the kink in total demand occurs. We denote this by SMR for “simple marginal revenue,” following the exposition in Joan Robinson, The Economics of Imperfect Competition (London: Macmillan, 1933), chapter 15. The SMR should not be confused with the aggregate MR schedule, which is the horizontal sum of MRc and MRt. We do not need that schedule here because of the assumption that MC is constant. The aggregate MR schedule is used in the discrimination case for determining total output when MC is not constant.
88. Because the demanders are firms rather than individuals, the demand curves are marginal revenue product schedules, and the area under these curves represents total revenue to the buying firms.
89. Similar to the point made in note 87, the SMR coincides with MRc up to output 60; it then jumps vertically from a negative value to point P and includes segment PR.
90. See Richard Schmalensee, “Output and Welfare Implications of Monopolistic Third-Degree Discrimination,” American Economic Review 71 (March 1981): 242–247.
91. Utah Pie v. Continental Baking, 386 U.S. 685 (1967). An interesting analysis of this case is K. G. Elzinga and T. F. Hogarty, “Utah Pie and the Consequences of Robinson-Patman,” Journal of Law and Economics 21 (October 1978): 427–434.

92. Federal Trade Commission v. Morton Salt Co., 334 U.S. 37 (1948).
93. Robert Bork, The Antitrust Paradox (New York: The Free Press, 1978), p. 382.

9 Antitrust in the New Economy

This chapter examines antitrust in the context of New Economy industries. What is a New Economy industry? Most people would agree that search engines, software, and online social networks are part of the New Economy, while automobiles, fast food restaurants, and life insurance are not. One useful definition of the New Economy is that it revolves around the creation of intellectual property in the form of computer code and includes (1) computer software, (2) Internet-based businesses (such as Internet service providers and content providers), and (3) communication services and equipment designed to support the first two markets.1 While we will indeed be considering such industries—and will focus on some cases involving New Economy giants Microsoft and Google—the chapter is rooted in the economic fundamentals that underlie the New Economy and are pertinent to antitrust. As a result, the lessons to be learned will pertain to any industry with those fundamentals, whether part of the New Economy or not. But what are those fundamentals?

Economic Fundamentals of the New Economy

Intellectual property is a critical feature of a New Economy industry. The creation and application of knowledge typically involves a high fixed cost and low marginal cost. The fixed cost is the expenditure incurred in discovering new knowledge. In the case of Microsoft’s Windows, it took years of development involving a large number of software engineers and vast amounts of computing equipment to produce the original computer code. The marginal cost of then selling that intellectual property is the cost of distributing it to a consumer, which is quite low. In the case of Windows and other software programs, it was originally the cost of manufacturing and distributing a disk with the code recorded on it, which is indeed very small. Now it is even lower, as most software programs are downloaded off the Internet. Of particular importance for the relevance of antitrust in New Economy industries, the scale economies coming from a high fixed cost and low marginal cost can produce highly concentrated markets. Thus, one economic fundamental of a New Economy industry is that there are scale economies associated with the creation and distribution of intellectual property.

At the same time that fixed costs can be high relative to variable costs, fixed costs can be low in absolute terms in some cases. The fixed cost of Facebook supplying the entire market with social networking services is far lower than, say, the fixed cost of AT&T supplying local telephone service to the entire market in the pre-cellular days of wireline systems. Or compare the fixed costs of Walmart and Amazon in providing retail services in the United States. When Walmart entered the retail market, its geographic coverage was limited by the locations of its stores. It took a massive investment over several decades for Walmart to serve all of the well-populated areas of the United States. In contrast, when Amazon entered the book market, it covered the entire United States at a very modest fixed cost.2 When the fixed cost is largely composed of the cost of creating knowledge as opposed to the construction of plants and facilities, potential competition may be a more viable threat in New Economy industries than in traditional ones, in spite of scale economies.

A second trait of many New Economy industries is network effects. A product or service has network effects when its value to a consumer increases with how many other consumers use it. Communication networks, such as telephone systems, email, text messaging, and social networking sites, have powerful network effects: The more people who are connected to a communication network, the more valuable it is to be part of that network, because there are more people with whom to communicate. Another example is software programs for which users want to share files. A program with many users is more appealing, because there are more people with whom one can exchange files. Early in the days of personal computing, an important determinant of a consumer’s decision regarding whether to use Microsoft Word or WordPerfect as their word processing package was how many other people used those programs; the more people who used the same program, the more people with whom one could exchange files.

A second class of products and services has network effects that operate indirectly. An example we will explore in the context of the Microsoft antitrust case is a computer operating system (OS). The value of an OS to a consumer depends on the supply of software applications written for that OS. The more apps that are written for it, the more value a consumer will attach to the OS and the more likely it will be bought. Of course, software developers are more inclined to write an app for an OS with more users, as then there are more potential buyers of that app. We then have a network effect: The more consumers who buy an OS, the more apps will be written for the OS, which then makes the OS of greater value to consumers. At work with this indirect network effect is that the more consumers who buy a product, the more complementary products are produced for that product, which then enhances the value of the original product. In the preceding example, the product is the OS, and the complementary products are software applications. Another example is a smartphone operating system platform—such as Apple’s iOS and Google’s Android—and apps written for the platform. Or consider video game platforms—such as Microsoft’s Xbox, Sony’s PlayStation, and Nintendo’s Wii—and the games written for those platforms. The more consumers who use a PlayStation, the more games are written for the PlayStation, which enhances the value of owning a PlayStation.

Thus far, two fundamentals of the New Economy have been identified: high fixed cost–low marginal cost and network effects. These traits are not, however, the exclusive domain of New Economy industries. Consider a “new economy” industry at the turn of the twentieth century: local telephone service. The construction of a physical network of wires was a very large fixed cost, while the marginal cost of adding a consumer to the network was small (at least in urban areas where population density was high). That an American Telephone & Telegraph (AT&T) customer could only connect with someone who was also an AT&T customer created strong network effects.
The more customers that AT&T had, the more attractive it was to join AT&T. While initially there were competing local telephone systems, ultimately one company, AT&T, prevailed (though admittedly with the assistance of some government regulation).

Both high fixed cost–low marginal cost and network effects will contribute to a few firms (perhaps just one) operating in a market. Furthermore, these two forces can reinforce each other to produce and sustain market dominance through the process of innovation. Consider an innovation that produces a better product or service for consumers. The profits earned from that innovation will depend on two factors. The first is the firm’s share of the additional value that the innovation creates. If a firm anticipates that consumers will capture all gains from the innovation, then it would not invest in research and development (R&D). It is then critical that the firm be able to appropriate a sufficient share of the new value created (which is an issue discussed in the section on innovation in monopoly and competition in chapter 3). The second factor is how many consumers will benefit from this innovation.

The total profit generated by the innovation can be expressed as (αv − c)Q − F, where v is the economic surplus created by the innovation, α is the share of the surplus captured by the firm (in which case αv is the revenue earned for each unit sold), c is the marginal cost of each unit sold (which for intellectual property is typically quite small), Q is the number of units sold, and F is the fixed cost of the innovation. The point is that the profitability of the innovation is increasing with Q; the more consumers who can benefit from the innovation, the larger the revenue generated will be. Another way to think about it is that investing in R&D is profitable if and only if

(αv − c)Q ≥ F, or equivalently αv − c ≥ F/Q.

That condition is more likely to hold when the fixed cost can be spread out over more consumers.

Now we come to the main point regarding how network effects and innovation are mutually reinforcing. Network effects will tend to result in market dominance as reflected in a high market share. That high market share means a large customer base (large Q) that would benefit from an innovation that improved the product or service being offered. Hence, the dominant firm is more likely to find investing in innovation to be profitable, which will serve to perpetuate its dominance. By way of example, consider social networking sites, such as Facebook and LinkedIn; each site dominates its market because of network effects. With Facebook’s large customer base, incurring a fixed cost to develop a better service has a large financial return. That better service, even if quite small (such as Facebook’s introduction of the Like button in 2009), will cause each user to spend more time on the site, which, when aggregated across its immense customer base, will attract many more advertising dollars to Facebook. Thus, network effects and innovation go hand in hand, which reinforces initial market dominance. This complementarity between a large customer base and the incentive to invest was originally identified to explain market concentration in the offline retail sector3 (in particular, companies like Walmart), which again highlights that the forces being discussed are not just relevant for New Economy industries. For a retailer, low prices and high investment in cost reduction are complementary strategies: Low prices will result in a large customer base and, given a large customer base, the return from lowering cost is greater because the fixed cost of reducing marginal cost can be spread out over many customers.

The third fundamental economic trait of the New Economy is rapid and disruptive innovation. Disruptive innovation must be distinguished from the incremental innovation just discussed. The latter refers to modest changes in the services provided, such as eBay’s “Buy It Now” option or Amazon’s “1-Click Ordering.” Continual incremental improvements of that sort are an integral part of online markets, because it is easy to change a website (especially compared to changing a manufacturing process). But what we are focusing on here is innovation that has the potential to displace what is considered to be the best product or service in the market. An example is Google’s PageRank algorithm. The superior search results it produced caused Google to surge past existing leaders in the search engine market, such as AltaVista, Lycos, and Yahoo!. The history of the video game industry is a series of disruptive innovations as market leaders are regularly supplanted. Originally dominated by pioneer Atari, the market saw Nintendo become the new leader after it launched the 8-bit Nintendo Entertainment System in 1985. Later, Sony took over leadership with the arrival of the 32-bit PlayStation in 1995. The market continues to be disrupted by the next best technology, which may come from an existing or new firm. While network effects contribute to sustained market dominance, a high rate of disruptive innovation can mean the regular supplanting of dominant firms. Competition from firms with new technologies—rather than competition from existing rival firms—is often the more important force in New Economy industries.

This trait is to be contrasted with many standard industries, where competition typically focuses on prices and product traits among existing suppliers, and where innovation is incremental and distinctly not drastic. Rapid and disruptive innovation also has significant welfare implications, as it can mean that improvements in products and services, rather than lower prices, are the primary source of consumer benefits.

Though not a New Economy industry, the pharmaceutical industry also involves intellectual property and is characterized by high fixed cost–low marginal cost and a high rate of disruptive innovation. The intellectual property is the knowledge that a chemical compound is effective in treating some illness. Obtaining that knowledge (and permission to sell the associated drug in a country such as the United States) is extremely expensive, as it involves R&D and clinical trials to prove safety and efficacy that can cost hundreds of millions of dollars. The marginal cost of manufacturing and distributing the product is often quite low (though marketing to promote the drug can be expensive). While protected by a patent, a drug that dominates its market could be overthrown at any moment by the development of a superior drug, and therein lies the source of disruptive innovation.

To summarize, the key properties of New Economy industries pertinent to antitrust analysis are (1) high fixed cost and low marginal cost of developing and selling intellectual property, (2) network effects, and (3) rapid and disruptive innovation. Not all New Economy industries have all of these features, and some traditional industries also have them. But these features are sufficiently common in the New Economy, and pose a sufficiently serious challenge to the enforcement of competition law, that they will be the focus of our attention.

Antitrust Issues in the New Economy

When assessing the impact of potentially anticompetitive actions on consumers, the focus of the preceding chapters was largely on price. Will a horizontal merger raise price? Will exclusive dealing allow an upstream manufacturer to raise the input price for downstream firms and thereby cause final consumers to pay more? It was also considered how the array of products offered and the extent of services provided might be impacted. Will a higher price due to minimum resale price maintenance be offset by more services provided by retailers? Will a vertical restraint foreclose an upstream manufacturer from the downstream market and cause it to exit, thereby reducing product variety and raising price?

In contrast, the focus of antitrust actions for New Economy industries is elsewhere. It is less about how firm behavior and antitrust intervention affect static efficiency through their impact on prices, products, services, and cost, and more about their effect on dynamic efficiency in creating novel products and services and producing major technological improvements that drastically lower cost. Some of the distinctive elements of the New Economy pertinent to antitrust analysis are succinctly stated by David Evans and Richard Schmalensee:

There are three important implications for antitrust economic analysis. First, the rational expectation of significant market power for some period of time is a necessary condition for dynamic competition to exist.… Thus if dynamic competition is healthy, the presence of short-run market power is not a symptom of a market failure that will harm consumers. Second, one expects leaders in new-economy industries to charge prices well above marginal cost and to earn high profits. It is natural in dynamic competition, not an indicator of market failure, for successful firms to have high rates of return, even adjusting for risks they have borne.… Third, although static competition is rarely vigorous in new-economy industries, the key determinant of the performance of these industries is the vigor of dynamic competition.4

These properties largely emanate from the fact that, in many New Economy industries, the primary force is competition for a market rather than competition in a market.

As a result, traditional measures of market competition—such as market concentration, price-cost margins, and Lerner indices—are less meaningful. As we know from chapter 6, horizontal merger analysis has been moving away from defining the market, given the difficult and contentious challenges in doing so. Market definition is even less useful for analyzing New Economy industries, because competition can come from anywhere (and, therefore, from outside any conventional definition of a market) and could ultimately change what exactly is the market.

Due to the central role of “competing for a market,” the relationship between market concentration and price is rarely a primary consideration in antitrust matters concerning the New Economy. More important is the role of potential competition and, in particular, the ease with which a firm with a superior technology could succeed. Can a better search engine easily supplant Google? Can a superior operating system take the market from Microsoft? Can a more attractive and efficient auction site induce buyers and sellers to leave eBay? The heightened importance of potential competition above actual competition means that a competition authority may be less concerned with the acquisition by a dominant firm of an existing rival and more concerned with its acquisition of a nascent technology owned by a noncompetitor that could prove to be a disruptive innovation. Of course, it can be a highly speculative exercise to predict whether a new technology will be a disruptive innovation, which highlights the significant challenge of effectively enforcing competition law in the New Economy.

While the relationship between price and cost is typically relevant to assessing market power and monopolization attempts, that yardstick can be very misleading in the New Economy. When discussing monopolization practices in chapter 8, it was noted that a dominant firm pricing below marginal cost is evidence consistent with predatory pricing. While it was also mentioned that there are legitimate competitive reasons for below-cost prices, those instances are generally rare and can often be easily identified. With many New Economy industries, price below marginal cost may be a very weak standard for assessing attempted monopolization, because marginal cost is so low. The marginal cost of distributing software is close to zero, but an effective predatory pricing strategy could involve a price below average variable cost, which is well above marginal cost. In addition, the nature of markets in the New Economy often results in equilibrium prices being largely unrelated to cost. In the case of products with network effects, price may be set very low to attract consumers, build the customer base, and thereby enhance the value of the product. (This dynamic pricing strategy will be explored in the next section.) For some products and services, price is set at zero (and thus below marginal cost) even in the long run. The New Economy is loaded with examples: It costs nothing to use the Google search engine and to be listed under the organic results (though advertisers pay for sponsored listings); it costs nothing to join, post, and message on Facebook (though again advertisers pay); it costs nothing for a buyer to use the services of eBay (though sellers pay a fee to list a good and a commission when the good is sold). As explained later in this chapter, these are all examples of two-sided platforms, and they share the common feature that price is largely unrelated to cost, only partially related to how much a person values the service, and largely determined by how much value that person creates for the platform. For all of these reasons, the relationship between price and cost is less informative of market power and efficiency than in the traditional economy.

As we know from the preceding chapters, anticompetitive actions pertain to collusion, horizontal mergers, and the exclusion of rival firms (for example, due to vertical mergers, vertical restraints, or predatory pricing). Let us take stock of what has generally drawn the attention of competition authorities in the United States during the early years of the twenty-first century. Collusion has clearly been a primary focus of the Antitrust Division of the Department of Justice (DOJ) as it has aggressively prosecuted and penalized cartels.

With regard to horizontal mergers, the DOJ and Federal Trade Commission (FTC) have overall implemented a fairly lenient policy; they are more inclined to approve anticompetitive mergers with structural changes or conduct conditions (“remedies”) than to prohibit them. With regard to vertical mergers and restraints, there has been very little activity on the enforcement front unless one of the firms is clearly dominant, and even then, cases are few. (In comparison, the European Commission [EC] has been more active in pursuing vertical restraints under the legal rubric of abuse of dominance.) Finally, predatory pricing cases are largely nonexistent.

What does that status report look like for markets in the New Economy? As discussed above, market dominance is a common feature of the New Economy due to network effects and scale economies. In light of the frequent presence of a dominant firm and the critical role of entry for dynamic efficiency, vertical restraints have proven to be the most substantive antitrust issue in the New Economy. In this chapter, we consider antitrust cases against Microsoft involving a variety of monopolization practices, including exclusive dealing and tying. We will examine a case brought by the EC against Google that charges the company with abuse of dominance on the grounds that it used its monopoly position in the general search market to acquire market power in other markets. The FTC also investigated that matter and, while it chose not to bring a case, Google did agree to alter some of its practices.

Turning to horizontal and vertical mergers and acquisitions, there has certainly been no shortage of them in the New Economy.5 Some of the acquisitions include PayPal ($1.5 billion in 2002) and StubHub ($310 million in 2007) by eBay; YouTube ($1.65 billion in 2006) and Motorola Mobility ($12.5 billion in 2011) by Alphabet (the parent company of Google); Skype ($8.5 billion in 2011) and LinkedIn ($26.2 billion in 2016) by Microsoft; and Instagram ($1 billion in 2012) and WhatsApp ($19 billion in 2014) by Facebook. But very few proposed transactions were considered to have the potential for anticompetitive effects. Google’s acquisition of DoubleClick ($3.1 billion in 2007) was criticized by competitors on the grounds that DoubleClick’s market power in the ad serving market would enhance Google’s position in the search ad market, but the FTC found little basis for that complaint. More problematic was Google’s acquisition of ITA Software ($676 million in 2011), which did raise some concerns at the DOJ that Google might withhold, degrade, or raise the price of the travel data that ITA provided to other companies serving the flight search market. The acquisition was approved with a conduct remedy. The one notable transaction that did not occur was a proposed 2008 advertising agreement under which Google would have supplied search ads to Yahoo!; at the time, their combined share of the general search engine market was over 80 percent. The companies abandoned the plan after the DOJ indicated that it was likely to challenge it.6 Overall, merger activity in the New Economy has not been constrained by competition authorities, due to the lack of identified anticompetitive effects.

In the general economy, collusion is most common in markets with homogeneous goods or services, such as cement, chemicals, shipping, and vitamins. Though a lack of differentiation among products and services is not typically a feature of New Economy industries, the desire to coordinate for the purpose of constraining competition is ever present. In the e-books case, Apple was found guilty of organizing a conspiracy among book publishers with respect to how they are compensated (and which would have impacted the prices paid by consumers).7 Interestingly, a case can be made that this arrangement would have constrained the potential abuse of market dominance by Amazon in that market. There is an increased opportunity for collusion in online retail markets due to enhanced price transparency and the use of pricing algorithms. Only one recent case has been pursued along those lines, but more cases in the future are quite possible.8

One of the most significant challenges associated with the implementation of effective antitrust law and policy in the New Economy is the high rate of innovation, especially of the disruptive type. The challenges manifest themselves in two ways. First, it is difficult to predict when and from where the next disruptive innovation will come, and thus to what extent any current abuse of market dominance will soon be constrained or made irrelevant by the arrival of a new competitor.

Second, the pace of innovation, when juxtaposed with the often slow pace with which an antitrust case develops, can make an antitrust remedy no longer appropriate or effective.

When it comes to assessing possible antitrust violations and weighing the impact of government intervention, the speed and disruptiveness of technological change do not necessarily imply a laissez-faire approach. Innovation may be harmed either by antitrust actions that are too aggressive (for example, penalizing a firm that is dominant only because of its success in the market) or too lenient (for example, allowing a dominant firm to exclude from the market an entrant that would have introduced a disruptive innovation of significant value to consumers). The challenge for antitrust authorities is finding the right balance in the New Economy. How does one effectively evaluate the impact of a vertical restraint or a merger on dynamic efficiency? And how does one perform that evaluation in a timely manner? Antitrust law is, in principle, appropriate for dealing with anticompetitive conduct in the New Economy. The question is whether the legal and administrative processes, along with economic analysis, are up to the task.

Markets with Network Effects

Economics of Markets with Network Effects

Before we tackle the series of antitrust cases involving Microsoft in the 1990s, it is important to understand the economic force—network effects—that was the basis for the near-monopoly position of Microsoft in the market for operating systems. It is best to start with basics by constructing a demand curve for a product with network effects. To simplify matters, suppose that there are two types of consumers, denoted L and H, and that type H consumers value the good more. When a total of Q consumers use the product, VH(Q) denotes the value that a type H consumer attaches to the product, and VL(Q) denotes the value of a type L consumer. As the product has network effects, VH(Q) and VL(Q) are both increasing with Q, as depicted in figure 9.1. Assume a population of 1,000 consumers, of which 250 are type H consumers and 750 are type L consumers.

Figure 9.1 Network Externalities

To derive the demand curve, first note that it can take three possible values—0, 250, and 1,000—which correspond to no one buying, only type H consumers buying, and everyone buying, respectively (note that if a type L buys, then a type H buys, too). As usual with demand curves, demand is zero if price is sufficiently high. More specifically, if price exceeds VL(1,000), then no type L consumer will buy, as their value is below the price even when network effects are maximal. Furthermore, if only type H consumers buy, then a type H consumer is only willing to pay VH(250), which, as can be seen in figure 9.1, is less than VL(1,000). Thus, if price exceeds VL(1,000), then demand is zero.

What if price is less than VL(1,000)? Demand can still be zero, but it can also be positive; demand is not uniquely defined! If consumers expect no one to buy, then this belief will be self-fulfilling in that no one will buy. That is, if each consumer expects Q = 0, then, since VH(0) = 0 and VL(0) = 0, they are not willing to buy at any positive price.9 Thus, there is a vertical segment at Q = 0 in figure 9.2 that indicates that demand can be zero regardless of price. Another possibility, however, is that demand is 1,000. If all consumers expect Q = 1,000, then type L (and therefore type H) consumers will buy as long as price does not exceed VL(1,000). The remaining possibility is for demand to be 250. For that to occur, price must satisfy VH(250) ≥ p ≥ VL(250), so when Q = 250 is expected, type H consumers want to buy but type L consumers do not. As shown in figure 9.2, this holds when price lies between VL(250) and VH(250).

Figure 9.2 Demand for a Product with Network Externalities
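To see how expectations pin down which of these demand levels is realized, the following sketch enumerates the self-fulfilling outcomes for a given price. The particular value functions VH and VL are hypothetical; they are chosen only to be increasing in Q and to satisfy VH(250) < VL(1,000), as in figure 9.1.

```python
# Sketch: demand for a product with network effects depends on expected usage.
# 1,000 consumers: 250 of type H and 750 of type L. Values rise with expected users Q.
# These value functions are illustrative, not from the text.

def v_H(Q):
    return 0.10 * Q   # e.g., v_H(250) = 25, v_H(1000) = 100

def v_L(Q):
    return 0.04 * Q   # e.g., v_L(250) = 10, v_L(1000) = 40

def demand(price, expected_Q):
    """Number of buyers at `price` when every consumer expects `expected_Q` users."""
    buyers = 0
    if v_H(expected_Q) >= price:
        buyers += 250
    if v_L(expected_Q) >= price:
        buyers += 750
    return buyers

def fulfilled_expectations(price):
    """Demand levels that are self-fulfilling: expecting Q leads exactly Q consumers to buy."""
    return [Q for Q in (0, 250, 1000) if demand(price, Q) == Q]

# With these numbers, v_L(250) = 10, v_H(250) = 25, and v_L(1000) = 40, so:
print(fulfilled_expectations(50))   # [0]             price above v_L(1000): no one buys
print(fulfilled_expectations(30))   # [0, 1000]       price between v_H(250) and v_L(1000)
print(fulfilled_expectations(15))   # [0, 250, 1000]  price between v_L(250) and v_H(250)
print(fulfilled_expectations(5))    # [0, 1000]       price below v_L(250): 250 is no longer self-fulfilling
```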

Summing up, we have the demand curve in figure 9.2. When price exceeds VL(1,000), demand is zero. When price is between VH(250) and VL(1,000), demand is 0 or 1,000. When price is between VL(250) and VH(250), demand is 0, 250, or 1,000. And when price is less than VL(250), demand is once again 0 or 1,000. An implication worth noting is that demand need not be higher when price is lower. For example, if price lies between VL(250) and VH(250) and consumers expect only type H consumers to buy, then demand is 250. If price exceeds VH(250) (but is below VL(1,000)) and consumers expect everyone to buy, then demand is 1,000. Of course, raising price also changes consumers’ expectations (and thus their belief as to the magnitude of network effects), and that is what leads to higher demand at a higher price.

Let us now note four fundamental properties of markets with network effects. First, as just described, consumer expectations about the popularity of a product matter in determining demand.

Second, a critical mass of consumer support can be instrumental in the success of a product with network effects. In the context of figure 9.2, suppose a firm must get all 1,000 consumers to buy in order to achieve profitability. Further suppose that consumers base their decisions on the size of the installed base, that is, the number of consumers who previously purchased and thereby are using the product. If the firm is able to induce the type H consumers to buy, so that initially Q = 250, it can induce the type L consumers to buy by setting price below VL(250). However, if it cannot get that initial mass of consumers to buy, there is no price that will induce any consumer to buy. At work is a mechanism known as positive feedback: The more consumers that buy, the easier it is to induce additional consumers to buy. The trick is getting the critical mass to jumpstart the positive feedback process.

Third, markets with network effects naturally lead to market dominance. Due to positive feedback, there can be tipping, whereby once a firm gets enough of a lead in its installed base, it goes on to dominate. The firm with a higher installed base will offer a more attractive product because of network effects, and this will attract consumers at a higher rate than rival firms. Hence, an initial advantage in installed base steadily grows: The best get better.

Fourth, markets with network effects are also prone to sustained dominance. The initial firm in an industry can persist in being dominant, even when superior products come along, because it has a large installed base to offset a rival’s superiority. Returning to our example, suppose all 1,000 consumers are using the current technology, which means the net surplus to type H consumers is VH(1,000) − p and to type L consumers is VL(1,000) − p, where we have netted out the price p. Now suppose a superior technology comes along with values V̂H(Q) and V̂L(Q), where V̂H(Q) > VH(Q) and V̂L(Q) > VL(Q) for all Q. Hence, for the same number of users (and price), all consumers prefer the new technology. However, if V̂H(0) < VH(1,000) and V̂L(0) < VL(1,000), then (at the same price) no consumer would want to switch technologies if he or she thought the other consumers would remain with the old technology. Unless the new technology is priced sufficiently lower than the current technology (which may make it unprofitable, especially since the existing technology can also lower its price) or consumers manage to somehow coordinate a shift, this new and superior technology will fail.

Exemplifying some of these properties, consider when IBM introduced OS/2 in 1987 as an OS for the personal computer. At the time, the dominant OS was sold by Microsoft. IBM spent about $2 billion developing OS/2, which was generally considered to be superior to Microsoft’s OS. In spite of its technological appeal and the fact that IBM was the largest computer manufacturer in the world, OS/2 failed miserably. IBM never convinced personal computer manufacturers and consumers that it would become popular. Consumers did not buy it because they did not expect others to do so, and if not enough people bought it, then there would not be much software written for it.

To gain a better understanding of the dynamic competition in a market with network effects, let us delve into the optimal pricing decision of a firm.10 Consider two firms competing in a new market with network effects. The value that a consumer attaches to the product of firm 1 (2) depends on the installed base of firm 1 (2), where the installed base is the set of consumers who bought in the past and are currently using the product. New consumers are flowing into the market each period, and their purchasing decisions depend on firms’ prices, the characteristics of firms’ products, and firms’ installed bases. A consumer is more likely to buy a firm’s product when its price is lower and network effects are stronger. These effects are captured in firm 1’s demand function D1(p1, p2, B1, B2) and firm 2’s demand function D2(p1, p2, B1, B2), where pi is the price of firm i and Bi is the (current) installed base of firm i. The demand function D1(p1, p2, B1, B2) is decreasing in its own price p1, increasing in its rival’s price p2, increasing in its own installed base B1 (so network effects are stronger), and decreasing in its rival’s installed base B2. Analogous properties hold for firm 2’s demand function. If there were no network effects, then firm 1 would choose its price to maximize its profit (p1 − c1)q1, where c1 is its unit cost and q1 is how many units it sells (and depends only on p1 and p2 when there are no network effects).
The presence of network effects inserts a dynamic consideration into a firm’s pricing decision: The price it sets today influences how many consumers buy today, which affects the installed based it will have tomorrow and thereby affects its demand tomorrow. Selling more in the current period will then raise demand and profit in the next period and, in fact, every period thereafter, as all those periods will have a higher installed base. We summarize the effect of the installed base on a firm’s future profits with the expression W1(B1, B2), which is the sum of discounted profits starting tomorrow when the initial installed bases are B1 and B2. By the preceding argument, W1(B1, B2) is increasing in B1 and decreasing in B2. The firm’s pricing problem can then be posed as choosing a price to maximize the sum of current profit and the discounted sum of future profits:
(p1 − c1)q1 + W1(B1 + q1, B2 + q2),

where q1 = D1(p1, p2, B1, B2) and q2 = D2(p1, p2, B1, B2) are the quantities sold in the current period.
A firm takes into account how its price affects not only current profit (p1 − c1)q1 but also its future profit stream W1(B1 + q1, B2 + q2), since price influences the future installed base through its effect on the current amount sold q1. Figure 9.3 plots current profit (p1 − c1)q1 as a function of price, which, as usual, is hill shaped. In the absence of network effects, a firm would choose the price at the peak of that hill, the price that maximizes current profit. With network effects, the current price also affects the future profit stream W1. That profit stream is decreasing in the current price: the lower the price, the more consumers buy today, the higher the installed base next period, and thus the stronger future demand and the higher future profits. A lower price therefore raises not only current demand but also future demand. The implication is that optimal dynamic pricing has the firm set a price below the one that maximizes current profit, as depicted in figure 9.3. While pricing below the current-profit-maximizing level forgoes some current profit, the loss is more than compensated by the higher future profits that come from building the installed base.

Figure 9.3 Firm 1’s Current and Future Profits
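The verbal argument can be summarized with the first-order condition for the pricing problem stated above. The derivation below is a minimal sketch in the text's notation, treating q1 and q2 as the demand functions D1 and D2 of current prices and installed bases.

\[
\max_{p_1}\;(p_1-c_1)\,q_1+W_1(B_1+q_1,\;B_2+q_2),\qquad q_i=D_i(p_1,p_2,B_1,B_2),
\]
\[
\underbrace{q_1+(p_1-c_1)\frac{\partial q_1}{\partial p_1}}_{\text{effect on current profit}}
\;+\;
\underbrace{\frac{\partial W_1}{\partial B_1}\frac{\partial q_1}{\partial p_1}+\frac{\partial W_1}{\partial B_2}\frac{\partial q_2}{\partial p_1}}_{\text{effect on future profit}}
\;=\;0.
\]

Because ∂W1/∂B1 > 0 and ∂q1/∂p1 < 0 (and ∂W1/∂B2 < 0 while ∂q2/∂p1 > 0), the future-profit term is negative, so the condition holds only where the current-profit term is positive, that is, at a price below the one that maximizes current profit alone.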

This dynamic pricing incentive can have some powerful consequences. Making some assumptions on the exact form of demand, one can solve for a firm's equilibrium price function. (It is an equilibrium price function, because each firm's price function is optimal, given the other firm's price function.) Figure 9.4 depicts the optimal price for firm 1 (given by the height of the plotted surface) depending on both firms' current installed bases. The "trench" around the diagonal indicates that firm 1 prices very low when firms have comparable installed bases (and firm 2 will price similarly). Referred to as penetration pricing, this low pricing builds a firm's installed base and thus is a form of investment that, in expectation, pays off in future profit; it is intended to spark the positive feedback associated with network effects. Thus, a firm that gets a slight edge in its installed base has a significantly higher chance of dominating the market and earning high profits. Though price could even be below marginal cost, it is not predatory pricing, because the intent is to build a firm's installed base rather than drive its rival from the market.

Figure 9.4 Optimal Price of Firm 1 Depending on Firms’ Installed Bases Source: Figure 2 from Jiawei Chen, Ulrich Doraszelski, and Joseph E. Harrington Jr., “Avoiding Market Dominance: Product Compatibility in a Market with Network Effects,” RAND Journal of Economics 40 (Autumn 2009): 455–485.

When installed bases are far enough away from the diagonal, prices are much higher because the intense phase of competition is over. The firm with a smaller installed base is resigned to having a smaller share of the market because supplanting the current market leader would take a sustained policy of pricing low, which is too costly. Thus, firms settle down to more standard competition, though the firm with the larger installed base has higher demand, a higher price, and higher profits.

The tendency for market dominance to emerge is portrayed in figure 9.5, which plots the average movement in firms' installed bases based on their current installed bases (and given equilibrium prices). When installed bases are equal (so they are on the diagonal), the average tendency is for them to remain equal, rising together when their initial bases are low and falling together when they are high. (They can fall over time because some consumers who bought in the past stop using the product.) With identical installed bases and identical prices in equilibrium, no firm has an advantage in expectation. However, this only shows what happens on average, and the actual realization can be different, so that even if installed bases start on the diagonal, they can move away from it. Once firms' installed bases are different, the tendency is for them to move in the direction of expanding the advantage of the firm with the larger installed base. For example, if installed bases are to the right of the diagonal, so that firm 1 has the larger base, the arrows point to the right, which means the average tendency is for installed bases to move further in that direction and for firm 1's advantage to grow.

Figure 9.5 Average Direction of Future Installed Bases Depending on Firms’ Current Installed Bases Source: Figure 5 from Jiawei Chen, Ulrich Doraszelski, and Joseph E. Harrington Jr., “Avoiding Market Dominance: Product Compatibility in a Market with Network Effects,” RAND Journal of Economics 40 (Autumn 2009): 455–485.
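The tipping dynamic in figures 9.4 and 9.5 can be illustrated with a simple simulation. The sketch below is not the model estimated by Chen, Doraszelski, and Harrington; it is a minimal toy version in which each period's sales are split according to installed bases (a stand-in for equilibrium pricing and network effects), some users drop out, and a small random shock is added. The functional forms and parameter values are assumptions chosen only for illustration.

```python
import random

def simulate(b1=5.0, b2=5.0, periods=60, gamma=2.0, seed=1):
    """Toy installed-base dynamics with positive feedback.

    Each period a fixed flow of new consumers arrives. A firm's share of them
    rises more than proportionally with its installed base (gamma > 1 stands in
    for network effects plus equilibrium pricing), a fraction of existing users
    stops using the product, and a small random shock perturbs sales.
    """
    random.seed(seed)
    inflow, depreciation, noise = 2.0, 0.1, 0.3
    for _ in range(periods):
        share1 = b1 ** gamma / (b1 ** gamma + b2 ** gamma)
        q1 = max(0.0, min(inflow, inflow * share1 + random.uniform(-noise, noise)))
        q2 = inflow - q1
        b1 = (1 - depreciation) * b1 + q1
        b2 = (1 - depreciation) * b2 + q2
    return round(b1, 1), round(b2, 1)

# Starting from identical installed bases, small random shocks typically tip
# the market toward one firm or the other (compare different seeds):
for s in range(3):
    print(simulate(seed=s))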

Figure 9.5 shows two absorbing points that are, approximately, (B1, B2) = (16, 2) and (B1, B2) = (2, 16). If firm 1 has a higher installed base early on, the market is more likely to end up around (B1, B2) = (16, 2), so firm 1 is the dominant firm. If instead firm 2 starts with an advantage, then the endpoint is more likely to be (B1, B2) = (2, 16). Furthermore, note that a firm's dominance will tend to persist. To see why, suppose (B1, B2) = (16, 2) but then, in spite of the network effects, firm 2 sells more over a few periods (perhaps because firm 2 significantly lowered price) and firm 1 loses some of its base, so now (B1, B2) = (12, 5). The average direction is still toward (B1, B2) = (16, 2). Only if the change were quite large, moving installed bases to, say, (B1, B2) = (10, 12), would firm 2 be expected to supplant firm 1's dominance.

Market dominance is then a natural competitive outcome when there are strong network effects, and with it come two redeeming features. First, competition to become the dominant firm ("competition for the market") is intense, which benefits consumers by ensuring low prices. Second, consumers benefit from having a dominant firm, because that results in a higher-valued product due to stronger network effects (for example, more software being written for an OS). Of course, the dominant firm will eventually take advantage of its position by charging a high price, but consumers are still likely to fare well. The antitrust problem resides not in the dominance that prevails but rather in the potential abuse of that dominance.

In chapter 7, we argued that exclusionary contracts, such as exclusive dealing and tying, can deter entry by a more efficient firm by preventing it from gaining the critical mass necessary to achieve profitability when there are scale economies. For example, if there is a large fixed entry cost, then the newcomer must earn variable profit on enough consumers to cover that fixed cost. Though it is generally unprofitable for a dominant firm to offer terms sufficient to induce all consumers to commit to buying exclusively from it, entry can be deterred by having enough consumers agree to buy only from the dominant firm. A similar logic applies to markets with network effects. Exclusionary contracts can keep a new firm's installed base from reaching the critical mass required for the product to be attractive to consumers. This can drive out a more efficient firm or deter one from entering.11

Let us flesh out this issue with a simple example. Suppose an upstream monopolist, firm 1, provides input A to a downstream firm, which combines it with other inputs and sells the result to final consumers. Upstream firm 2 has developed product B as a substitute for A. These upstream products are subject to network effects. Anticipating our later discussion of the first antitrust case against Microsoft, one can think of the upstream product as a computer OS. Firm 1 is Microsoft, and firm 2 has a competing operating system (for example, IBM and OS/2). The downstream firms are original equipment manufacturers (OEMs) of personal computers, such as Dell and Hewlett-Packard, which install an OS and then sell the computer to consumers. While we imagine many downstream firms, for simplicity assume there is just one. Suppose that the network effects for the product of firm 1 have largely been realized, so that additional customers will not add any more value. The value of input A to the downstream firm is assumed to be 150, and firm 1 charges a price of 100. Hence, the downstream firm receives a surplus of 50 on each of its M customers.12

Now a new input B appears, which is supplied by firm 2. The downstream firm's M customers differ in how they value B. A fraction 1 − θ attach zero value to B, because there are no network effects (for example, no software has been developed), while a fraction θ attach a value of 200 (for example, they prize the OS's stability and develop their own software). Assume marginal cost is 0 for both inputs A and B, and firm 2 must earn revenue of at least F to profitably supply its input. The surplus-maximizing solution is to have the (1 − θ)M customers buy A, which they value at 150 compared to 0 from B, and the θM customers buy B, which they value at 200 compared to 150 for A. If firm 1 continues to price A at 100, that outcome will occur if, for example, firm 2 prices B at 100. The resulting revenue for firm 2 is 100θM, which we assume exceeds F. Firm 2 and consumers are better off, but firm 1 is harmed in two ways. First, its profit is lower by 100θM because of weaker demand. Second, firm 2 may eventually attract the other (1 − θ)M consumers as it builds up network effects for its product.

Here is a contract that will exclude firm 2 and cost firm 1 nothing in terms of profit. Instead of requiring the downstream firm to pay 100 for each unit of input A it buys, it must pay 100 for each unit of output that it sells. Let PB denote the price charged by firm 2 for input B. The downstream firm has three options. First, it can agree to this contract from firm 1 and buy all its input from firm 1, which yields a profit of 50M. Second, it can agree to firm 1's contract and buy (1 − θ)M units from firm 1 and θM units from firm 2. In that case, its profit is (1 − θ)M(150 − 100) + θM(200 − PB − 100). It buys (1 − θ)M units of A at a price of 100, each of which delivers value of 150, so the resulting per unit profit is 50.
It buys θM units of B, which delivers per unit value of 200 and which requires the downstream firm to pay PB to firm 2 and 100 to firm 1. The third option is to decline the contract from firm 1 and buy θM units from firm 2, which delivers profit of θM(200 − PB). Let us compare the profitability of these three options for the downstream firm. The first option is more profitable than the second option when
50M > (1 − θ)M(150 − 100) + θM(200 − PB − 100).
This condition can be simplified to PB > 50. Hence, if firm 2’s price for B exceeds 50, then the downstream firm prefers buying all its inputs from firm 1 than buying inputs from both firms 1 and 2.
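A quick numerical check of these comparisons takes only a few lines. The sketch below evaluates the downstream firm's three options using the payoffs specified in the example (value 150 for A, value 200 for B among the fraction θ, prices of 100 and PB, and M customers); the particular values of θ, M, and PB are assumptions chosen for illustration, and the third-option comparison anticipates the discussion that follows.

```python
def downstream_profits(theta, M, p_B):
    """Profit of the downstream firm under its three options."""
    # Option 1: accept firm 1's per-unit-of-output contract and buy only A.
    opt1 = 50 * M
    # Option 2: accept the contract but buy B for the fraction theta;
    # those units still owe the 100 fee to firm 1 per unit of output sold.
    opt2 = (1 - theta) * M * (150 - 100) + theta * M * (200 - p_B - 100)
    # Option 3: decline the contract and buy only B for the fraction theta.
    opt3 = theta * M * (200 - p_B)
    return opt1, opt2, opt3

theta, M = 0.2, 1000          # assumed illustrative values (theta < 1/4)
for p_B in (40, 50, 60):
    o1, o2, o3 = downstream_profits(theta, M, p_B)
    exclusive = o1 >= max(o2, o3)
    print(p_B, o1, o2, o3, "buy only from firm 1" if exclusive else "use firm 2")
```

Running the comparison confirms the threshold: at PB above 50 the downstream firm buys only from firm 1, and with θ < 1/4 even PB = 0 does not make the third option more attractive, which is the comparison taken up next.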

Comparing the first option with the third option, the former is more profitable when 50M > θM(200 − PB). Even if PB = 0, so that firm 2 gives its product away, it is more profitable to have firm 1 as the exclusive supplier than firm 2 when θ < 1/4. Assume that is the case; that is, the fraction of consumers who value B (when there are no network effects) is less than 25 percent. Summarizing, we find that the downstream firm will agree to the contract from firm 1 and buy all its inputs from firm 1 when PB > 50. Note that if firm 2 were to charge a price of 50, then its revenue is 50θM. Recalling that it must earn revenue of at least F to be profitable, if 50θM < F, then the highest price that firm 2 can charge while still having positive demand is not sufficient for it to be profitable. Thus, if 50θM < F, then firm 2 will not supply. We then find that if θ < 1/4 and 50θM < F, then the downstream firm will agree to the contract from firm 1 and buy all its input from firm 1. In spite of the arrival of firm 2 as a competitor with a better product for some consumers, this exclusionary contract allows firm 1 to continue to be the exclusive supplier at its original price of 100. Furthermore, it is able to prevent firm 2 from getting a foothold in the market, which, after building its installed base and generating network effects, might have allowed firm 2 to effectively compete for the remainder of firm 1's customers. While firm 1 initially earned its dominant position through network effects, the maintenance of its monopoly with this contract is anticompetitive, because it harms consumers.13

Microsoft

Antitrust in the twentieth century began with a monopolization case against Standard Oil and ended with a monopolization case against Microsoft. That Section 2 of the Sherman Act could be effectively applied in such diverse markets—refined oil and computer OSs—separated by almost a century is testimony to its relevance and robustness. Microsoft supplies an OS—initially MS-DOS and then Windows—for the PC as well as many software applications for that OS, such as Word, Excel, and Internet Explorer. The FTC opened an antitrust investigation of Microsoft in 1990. Though the FTC eventually dropped its case three years later, any thoughts that the authorities were through with Microsoft were quickly dispelled as the DOJ immediately took over the case. It was the beginning of many years of litigious battles.14

Before getting to the economic and legal details of these cases, an overview will be useful. In its first case, the DOJ claimed Microsoft used anticompetitive terms in its licensing and software agreements to perpetuate its monopoly in the OS market. In July 1994, the DOJ and Microsoft resolved matters with a consent decree that put restrictions on the types of contracts Microsoft could use.15 It also prohibited Microsoft from tying the sale of products to its OS. This decision became known as Microsoft I.16 Three years later, the DOJ filed another suit, claiming that Microsoft had violated the provision of the settlement that prohibited tying. The claim was that Microsoft required OEMs to install the Internet Explorer (IE) browser along with Windows 95. In what became known as the Microsoft II decision, the circuit court concluded that technological bundling did not violate the consent decree.17 Just prior to the Microsoft II decision being delivered, the DOJ and numerous states filed suit alleging that Microsoft had violated the Sherman Act with its recent behavior in the browser market.
As later summarized by the circuit court: Relying almost exclusively on Microsoft’s varied efforts to unseat Netscape Navigator as the preeminent internet browser, plaintiffs charged four distinct violations of the Sherman Act: (1) unlawful exclusive dealing arrangements in violation of Section 1; (2) unlawful tying of IE to Windows 95 and 98 in violation of Section 1; (3) unlawful maintenance of a monopoly in the PC operating systems market in violation of Section 2; and (4) unlawful attempted monopolization of the internet browser market in violation of Section 2.18

The trial began in October 1998, and a verdict was delivered in April 2000. Microsoft was found not guilty on the charge of exclusive dealing but guilty on the other three charges. Microsoft appealed the decision and, in June 2001, the circuit court upheld the third charge regarding “maintenance of monopoly” but reversed the decision on monopolization and remanded the decision on tying back to the district court. The DOJ and Microsoft reached a settlement agreement in November 2001. Microsoft I: Exclusionary contracts in the operating systems market When it first sold MS-DOS to OEMs, Microsoft initially set a flat fee of $95,000, which allowed an OEM to install an unlimited number of copies.19 The contractual terms changed over time to where the following terms were standard by 1992: An OEM was required to pay a fee to Microsoft for each computer that it shipped, whether or not it installed Microsoft’s MS-DOS or Windows. This arrangement made it less attractive to install a competing OS, since an OEM would have to pay for that OS and still pay a fee to Microsoft. Such a contractual feature can exclude rivals and harm those consumers who would prefer a rival’s OS, and thus can be a violation of Section 2 of the Sherman Act (as well as the FTC Act). While exclusion is one explanation for this contractual feature, are there any efficiency rationales for charging for each PC shipped rather than each OS installed? One possibility is that it was intended to deal with fraud, because an OEM may underreport how many PCs it shipped with MS-DOS installed. Or it could be used to deal with software piracy, whereby an OEM installed a pirated version of MS-DOS. It would have no incentive to do so if it had to pay for each PC shipped. These procompetitive rationales were not found to be credible. The contracts were typically two years in length and often required some minimum number of units per year. Hence, the contract effectively had a fixed fee—equal to that minimum number multiplied by the fee per unit—and then charged a fee for each unit above the minimum. In the event that the total number of shipments was less than the minimum, Microsoft could allow those unused licenses to carry forward to the next year (though it was not contractually bound to do so). There were reports that Microsoft did not allow the carry forward if the OEM had installed a competing OS in some of its units. Other forms of penalties imposed for installing non-Microsoft OSs included withholding technical service and support and raising prices. This behavior is anticompetitive, as Microsoft is using its dominant position to disadvantage rivals in a way that harms consumers. Though not admitting guilt, Microsoft agreed to stop using contracts that charged a fee for each PC sold and instead only charged for each Microsoft OS installed. Microsoft II: Tying and monopolization of the browser market Most of our attention will be focused on the monopolization charge upheld by the circuit court. However, let us briefly deal with the other two guilty charges found by the district court. As regards the tying charge, the plaintiffs claimed that Microsoft’s technological integration of Windows and IE, along with certain features of contractual arrangements between Microsoft and intermediate suppliers of browsers (such as OEMs), were anticompetitive and a per se violation of Section 1.20 Microsoft disputed this claim and contended that the integration was done to enhance quality by making Windows a better applications platform. 
In light of the novel role of technology associated with this tying arrangement, the circuit court remanded the guilty charge on the grounds that a rule of reason should be applied. It noted that "applying per se analysis to such an amalgamation creates undue risks of error and of deterring welfare-enhancing innovation."21 Because many of the facts pertaining to this charge are similar to those for the

maintenance of monopoly charge, we will not discuss the tying claim any further. For the fourth charge, regarding monopolization, the government claimed and the district court concluded that Microsoft had tried to leverage its monopoly in the OS market so as to monopolize the browser market. Microsoft engaged in a number of contractual arrangements with OEMs and Internet service providers (ISPs) to promote IE at the expense of Netscape Navigator. These arrangements will be reviewed shortly (in the context of maintaining monopoly power in the OS market). The government also accused Microsoft of pricing predatorily by distributing IE free (even at a negative price in some instances). The circuit court reversed this decision, because, to establish such a monopolization claim, the plaintiffs must show that the browser market could be monopolized. However, the court found that the plaintiffs had neither defined the relevant market nor argued that the necessary entry barriers were present to protect a monopoly once it was established. Microsoft III: Maintenance of monopoly in the operating systems market Arguably the most serious charge—and the one that the circuit court upheld—was that Microsoft engaged in anticompetitive practices to maintain its near-monopoly with Windows in the OS market. Before diving into the economic arguments, some knowledge of computer technology will prove useful.22 An OS runs software applications by using application programming interfaces (APIs), which allow the application to interact with the OS. A platform is a collection of such APIs. When writing software, a developer needs to write it so that it works with an OS’s APIs. If the software is also to work on a different OS, then the code must be rewritten for its APIs. Such a process is known as “porting” and can be costly. Given the large number of Windows users, software developers generally write their programs for the Windows platform; such is the advantage of having the largest installed base, as we know from our analysis of network effects. Given fewer Mac users, not all software written for Windows’ APIs would get ported to the Mac OS. A threat to Windows’ dominant position would be a technology that allows the same programs to run on all OSs. In that case, if a superior OS came along, it would not be at a disadvantage to Microsoft, because the existing software applications could run on it. This was the threat that Netscape Navigator and Java posed. Written by Sun Microsystems, Java was a cross-platform language that allowed a program to run on many different OSs. Navigator relied on Java and could run software applications independently of Windows. For example, it ran on seventeen different OSs, including Windows, Mac, and various versions of the UNIX operating system. Referred to as middleware, Navigator with Java was a potential challenge to Windows’ position as the dominant platform. To establish a monopolization claim, the plaintiffs must argue that (1) the accused firm has monopoly power in a relevant market, and (2) it has sought to maintain that monopoly through anticompetitive behavior (for example, harming a rival’s product as opposed to making one’s product better). In addressing the first part, the district court concluded, and the circuit court affirmed, that the relevant market is the global Intel-compatible PC OS market. In that market, Windows’ market share exceeded 95 percent. Microsoft argued for a more encompassing notion of a market. 
For example, it wanted to include Mac OS as a competitor (in which case Microsoft’s market share would be around 80 percent). However, the district court concluded that even if the price of Windows was raised substantially, few Windows users would switch to the Mac OS, because fewer software applications were available for it and the cost of new hardware would be prohibitive. Microsoft also sought to include hand-held devices and middleware, a point we will return to shortly.

Of course, high market share is necessary but not sufficient for monopoly power. An airline may be the only carrier to fly from Des Moines to Omaha, but it does not need to raise price by very much to induce another airline to enter and compete. But Windows did have a barrier to entry in the form of network effects and a large installed base. A new OS would lack such an installed base, which meant that not much software might be written for it, and if there is not much software, consumers will not be inclined to buy it, even if the OS is superior. Having established that Microsoft did have monopoly power, the next issue was anticompetitive conduct. Here, the government rattled off a long litany of practices designed to prevent rival browsers from becoming a viable alternative platform for software development. One cannot do justice to the many offenses put forward by the government, but we can give a flavor of their arguments. The two primary avenues for distributing browsers are OEMs, which install programs on the computers they sell, and ISPs (such as America Online), which offer browsers when someone signs up for Internet access. In both cases, Microsoft was found to have used contracts that severely disadvantaged competing browsers.23 Microsoft prohibited OEMs from altering the Windows desktop and the initial boot sequence. So, for example, an OEM could not take IE off the desktop and replace it with Navigator, even if the customer so desired it. As a case in point, in 1996 Compaq wanted to load the more popular Netscape browser on its machines and remove the icon for IE from Windows 95. Microsoft informed Compaq that if it removed IE, Compaq would lose its license for Windows. Compaq complied with Microsoft’s wishes. With regard to ISPs, Microsoft agreed to provide easy access to an ISP’s services on the Windows desktop in exchange for them exclusively promoting IE and keeping shipments of Navigator under 25 percent. Figure 9.6 gives some indication of the impact of these exclusionary agreements. AOL and CompuServe had contracts with Microsoft that restricted them in their ability to promote non-IE browsers. The market share of IE for those two ISPs rose from 20 percent to 87 percent in less than two years. By comparison, “IE Parity” ISPs refer to those ISPs in the top eighty whose browser choice was not known to be restricted and that had 10,000 or more subscribers. IE’s market share among them rose only to 30 percent, while the market share among all ISPs rose to 49 percent. In other words, those ISPs that had a choice chose IE much less often than those who were restricted. The contractual restrictions appear to have made a difference.

Figure 9.6 Microsoft’s Share of the Browser Market (Three-Month Moving Average of Usage by ISP Category) Source: Figure 19-1 from Daniel L. Rubinfeld, “Maintenance of Monopoly: U.S. v. Microsoft (2001),” in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution: Economics, Competition, and Policy, 4th ed. (New York: Oxford University Press, 2004), pp. 476–501.

The district court concluded that these restrictions did not improve the performance of IE but did reduce the use of rival browsers. Microsoft argued that it was “exercising its right as the holder of valid copyrights” when imposing these restrictions on those to whom it licensed Windows. As covered in chapter 8, the courts have noted that intellectual property rights do not provide an exemption to antitrust laws. The government also argued that the integration of Windows and IE was anticompetitive because it involved excluding IE from the “Add/Remove Programs” utility, commingling code so that deletion of IE would harm the OS, and that Windows could override the user’s default browser when it was not IE. Here matters are not as clear, as Microsoft argued that there were technical reasons for these features. The circuit court did not find them liable, because the plaintiffs failed to argue that any anticompetitive effects outweighed the benefits identified by Microsoft. Quite distinct from the previous practices (but arguably the most serious in terms of its potential impact

on the incentives to innovate) is what Microsoft did to Java. We have already explained that Java was a potentially serious threat to the Windows platform. Though Microsoft announced its intent to promote Java, it actually crippled it as a cross-platform threat by designing a version of Java that ran on Windows but was incompatible with the one written by Sun Microsystems, and then using its Windows monopoly to induce parties (including Intel) to adopt its version. Before moving on to the issue of remedy, an argument made by Microsoft is worth mentioning. Microsoft attorneys contended that the district court was being contradictory by excluding middleware, such as Java and Navigator, in defining the market for Windows but accusing Microsoft as engaging in monopolization practices against middleware in order to maintain a Windows monopoly. The circuit court responded that because the threat was “nascent,” there was no contradiction. The court then went on: The question in this case is not whether Java or Navigator would actually have developed into viable platform substitutes, but (1) whether as a general matter the exclusion of nascent threats is the type of conduct that is reasonably capable of contributing significantly to a defendant’s continued monopoly power and (2) whether Java and Navigator reasonably constituted nascent threats at the time Microsoft engaged in the anticompetitive conduct at issue. As to the first, suffice it to say that it would be inimical to the purpose of the Sherman Act to allow monopolists free reign to squash nascent, albeit unproven, competitors at will—particularly in industries marked by rapid technological change and frequent paradigm shifts. As to the second, the District Court made ample findings that both Navigator and Java showed potential as middleware platform threat.24

Three remedies were discussed. One was to restrict conduct, as done in the Microsoft I decision. This would entail prohibiting some or all of the anticompetitive practices, such as limiting what Microsoft could put in a contract with an OEM. The other two remedies were structural and draconian. Along the lines of Standard Oil and AT&T, the approach was to reduce monopoly power by breaking up the company. One plan was to create three identical Microsoft companies, each with Windows and applications.25 Dubbed “Baby Bills,” as a play on the “Baby Bells” that rose from the ashes of the AT&T breakup, the objective was to inject competition into the OS market. The second structural remedy was to create two companies, one with Windows and one with applications (such as Word and Excel). The latter remedy was proposed by the DOJ and ordered by the district court. However, the circuit court remanded the remedy, because the district court had not adequately considered the facts and justified its decision. At that point, the government stopped pursuing a structural remedy, and a settlement restricting the conduct of Microsoft was reached in November 2001. For example, the remedy prohibited Microsoft from retaliating against an OEM for using or promoting software that competes with Microsoft products and required uniform licensing agreements to OEMs (so as to make it more difficult for Microsoft to hide rewards and punishments in discriminatory agreements). It also sought to make it more difficult for Microsoft to hamper the development of middleware and more generally to promote interoperability by, for example, mandating that it make available the APIs and related documentation that are used by middleware to interact with a Windows OS product. Big Data With billions of people using smartphones, computers, and other devices connected to the Internet, the amount of data generated on what we search, what we watch, what we read, and what we buy is immense. The collection and analysis of such data is an example of “big data.” Big data refers to the use of large scale computing power and sophisticated algorithms on massive data sets for the purpose of finding patterns, trends, and associations in human behavior and other phenomena. In the context of markets, big data has

been used to identify trends in demand, measure consumer preferences, and assess the performance of business practices. While the central policy issues concerning markets and big data have revolved around consumer protection and privacy (for example, privacy was the driving concern of the FTC when Facebook acquired WhatsApp in 2014), there are also some potentially significant antitrust issues. After explaining how big data can generate market dominance in online markets, we will examine some issues related to monopolization and mergers.26 In many online markets, big data is used to deliver better service. One way this is done is by recommending products that are likely to be of interest to a customer. The class of algorithms, known as collaborative filtering, uses a database reflecting other users’ preferences (based on their past activity) along with information on the current user to then make recommendations. As far back as when it was exclusively a bookseller, Amazon has used collaborative filtering. Netflix’s recommendation algorithm, referred to as Cinematch, uses its database on viewings and reviews by Netflix subscribers to make predictions as to which movies a user would enjoy based on their own past viewings and reviews. The accuracy of a recommendation system depends on the database (size and diversity), the data available on the user, and the algorithm. Of particular relevance to our discussion is that a bigger database leads to more accurate predictions and, therefore, a better user experience. Similar points can be made about search engines.27 The quality of a search is primarily about the “overall accuracy of search results” (that is, the extent to which it delivers what the user is looking for) as well as “page load speed” and “real-time relevance.”28 Higher quality is delivered by a better algorithm, more computing power, and the amount of data that can be searched. The latter includes publicly available information but also the content of past searches using that search engine. Every time someone uses the search engine, a search log is created. That information is used to improve the performance of the search engine, because it reveals information about what past users were looking for and to what extent the information delivered was appropriate. Recommendation systems and search engines have increasing returns to scale coming from a learning curve: More past activity generates more data, which yields better predictions as to what a user wants. In addition, many websites conduct experiments to improve their services, and a larger user base enhances the productivity of those experiments. Search engines conduct experiments on everything from ranking of search results to user interface and design decisions.… The more search users there are at any given time, the more experiments can be run, the faster they can be completed, and the more improvements that can be made to the search algorithms.… Sergey Brin [of Google] testified that a “rough rule of thumb” might be, as query volume doubles, a search engine might expect to see a one percent increase in quality.29
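To make the collaborative-filtering idea described above concrete, the sketch below implements a bare-bones user-based recommender: it scores unrated items for a target user by weighting other users' ratings by how similar their rating vectors are to the target's. This is a generic textbook algorithm, not Amazon's or Netflix's actual system, and the toy ratings matrix is invented for illustration.

```python
import math

# Rows are users, columns are items; 0 means "not yet rated or purchased".
ratings = [
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user_index):
    """Score each unrated item by a similarity-weighted average of others' ratings."""
    target = ratings[user_index]
    scores = {}
    for item in range(len(target)):
        if target[item] != 0:
            continue  # only recommend items the user has not rated
        num = den = 0.0
        for other_index, other in enumerate(ratings):
            if other_index == user_index or other[item] == 0:
                continue
            sim = cosine(target, other)
            num += sim * other[item]
            den += sim
        scores[item] = num / den if den else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recommend(0))  # predicted scores for the items user 0 has not rated
```

Adding more users (rows) gives the similarity weights more to work with, which is the sense in which a larger database improves the accuracy of recommendations.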

Experiments are conducted by most websites including social networks, retailers, and search engines. As depicted in Figure 9.7, a dynamic emerges whereby more users of an online service generate more proprietary data, which leads to the improvement of services, which then attracts more users. Big data can then be a source of market dominance in that an early market leader with more data can use that to further enhance its position. Like network effects, this dominance emerges through the delivery of more value to consumers.

Figure 9.7 Feedback Loop with Big Data in Online Services
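The feedback loop in figure 9.7 can be illustrated with a toy dynamic built around the rule of thumb quoted above (roughly a 1 percent quality gain per doubling of query volume). In the sketch below, users are attracted in proportion to relative quality and quality improves with accumulated data; the functional forms and parameter values are assumptions for illustration only, not an estimated model.

```python
import math

def feedback_loop(users_a=55.0, users_b=45.0, periods=50, sigma=50.0):
    """Users generate query data, data raises quality, quality attracts users."""
    data_a = data_b = 100.0
    for _ in range(periods):
        data_a += users_a                  # each user contributes query logs this period
        data_b += users_b
        # Rule of thumb from the text: about 1% quality gain per doubling of data.
        q_a = 1.0 + 0.01 * math.log2(data_a / 100.0)
        q_b = 1.0 + 0.01 * math.log2(data_b / 100.0)
        # New users split by relative quality; sigma makes them responsive to
        # even small quality differences (an assumption of this sketch).
        share_a = q_a ** sigma / (q_a ** sigma + q_b ** sigma)
        users_a, users_b = 100.0 * share_a, 100.0 * (1.0 - share_a)
    return round(users_a, 1), round(users_b, 1), round(q_a, 4), round(q_b, 4)

print(feedback_loop())  # the early leader keeps a persistent data and quality edge
```

In this toy version the firm that starts with more users accumulates more data every period and so retains a quality and user-share advantage, which is the self-reinforcing pattern the figure describes.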

Whether driven by this learning dynamic or some other source of dominance, it is clear that high concentration is an endemic feature of many online markets, including that for search engines.30 The dominant search engine is Google in many countries, including the United States (71.0 percent market share), Australia (92.8 percent), and Germany (97 percent). But this dominance is not universal—as the dominant search engine in China is Baidu (with 73.0 percent) and in Russia is Yandex (62.0 percent). Somewhat uniquely, Japan has a duopoly with Yahoo! (51.0 percent) and Google (38.0 percent) which is still a very high level of concentration. The takeaway is that big data can be the basis for market dominance. Though such dominance is based on superior efficiency and is not illegal, the existence of a dominant firm does raise the possibility of unlawful monopolization practices. From our preceding discussion, Microsoft had a near-monopoly of operating systems that was rooted in welfare-enhancing network effects, but it then abused that dominance for the purpose of maintaining it and extending it to the Internet browser market. If part of a firm’s dominant position comes from its data, it may seek to “(i) limit their competitors’ access to data, (ii) prevent others from sharing the data, and (iii) oppose data-portability policies that threaten their data-related competitive advantage”31 Such practices need not be in violation of competition laws, but it can be a subtle matter to distinguish between legitimate practices to maintain a competitive advantage and anticompetitive practices intended to cripple rivals and which cause harm to consumers. Of particular concern is that a firm whose dominance is rooted in its superior data may try to prevent newcomers from developing or gaining access to a sufficiently large database to allow it to effectively compete: As the Eleventh Circuit recently noted … a monopoly can violate Section 2 of the Sherman Act when its exclusive dealing program deprives smaller rivals of “distribution sufficient to achieve efficient scale, thereby raising costs and slowing or preventing effective entry.” So too a dominant data-driven company can use exclusionary tactics to prevent rivals from achieving minimum efficient scale.32

In the next section on two-sided platforms (for which Google and Facebook are examples), we will consider a monopolization case pursued against Google by the EC (and contemplated by the FTC), which claims that Google sought to illegally leverage its market dominance in search engines to online retailing. When addressing monopolization practices in the case of big data, a central issue is how far a dominant firm is allowed to go to protect its data advantage. This question was addressed in Facebook v. Power Ventures (2008).33 Power Ventures offered a service through its website Power.com that would allow users to integrate their contacts from multiple social networks sites. With a user’s Facebook login information, Power Ventures would scrape information off Facebook and display it at Power.com. Facebook’s market dominance is rooted in network effects—a user gains access to a massive community when it spends time on Facebook—and Power.com could possibly weaken Facebook’s competitive advantage. The terms of use at Facebook required that a user not “collect users’ content or information, or otherwise access Facebook, using automated means (such as harvesting bots, robots, spiders, or scrapers) without our permission.” On the grounds of violating those terms, Facebook sued Power Ventures. In response, Power Ventures countersued Facebook for violating antitrust law through the exclusionary practice of having its users provide it with user names and passwords from other third-party services (such as Gmail)—so that they could access those accounts through Facebook—and, at the same time, prohibiting Power Ventures from engaging in a similar activity. The district court was clear in its ruling that Facebook did not violate antitrust law. Even as a dominant firm (which is a necessary condition to be guilty of monopolization practices), Facebook was under no obligation to provide access to third party websites, and it has the right to actively block that access. This right includes making its product incompatible with the services of rival firms. Exemplifying this point is an episode involving Google’s Facebook Friend Exporter. Developed as an add-on to its Chrome browser, its purpose was to facilitate the transfer of a user’s data in Facebook to rival social network Google+. Facebook blocked it, and their behavior is on the right side of the law. This assessment is consistent with the Ninth Circuit Court in a case involving Kodak almost thirty years earlier: The creation of technological incompatibilities, without more, does not foreclose competition; rather, it increases competition by providing consumers with a choice among differing technologies, advanced and standard, and by providing competing manufacturers the incentive to enter the new product market by developing similar products of advanced technology.34

Big data as a source of efficiency and market dominance has been a factor in some merger cases involving companies providing online services. As previously noted, a firm can improve its services by expanding the depth and breadth of its database. One way to get more data is to acquire the data of another company, which creates an efficiency rationale for a merger. At the same time, if these markets tend to be highly concentrated, a merger could substantively lessen competition by removing an actual or potential competitor. When evaluating a proposed alliance between Microsoft and Yahoo! in 2010, the DOJ emphasized that the quality of Microsoft’s search engine Bing would be improved, which would then make it more competitive (with Google): The transaction will enhance Microsoft’s competitive performance because it will have access to a larger set of queries, which should accelerate the automated learning of Microsoft’s search and paid search algorithms and enhance Microsoft’s ability to serve more relevant search results and paid search listings, particularly with respect to rare or “tail” queries.… This larger data pool may enable more effective testing and thus more rapid innovation of potential new search-related products, changes in the presentation of search results and paid search listings, other changes in the user interface, and changes in the search or paid search algorithms. This enhanced performance, if realized, should exert correspondingly greater competitive pressure in the marketplace.35

Thus, there was an efficiency basis for approving the alliance. In 2013, Google's proposed acquisition of Waze Mobile was investigated by the United Kingdom's Office of Fair Trading (OFT) out of concern that it would impact competitiveness in the market for mobile map services. Google Maps was the market leader, and Waze offered an innovative dynamic mapping service that encompassed real-time traffic updates and turn-by-turn navigation. The OFT "examined the possibility that the merger may result in the loss of a growing and innovative competitor in the form of Waze [and] may dampen Google's incentives to innovate."36 It was also emphasized that Waze's services relied on the amount of real-time data collected from users and that there were network effects: "The more users there are the more data is created, which improves the experience and attracts yet more users."37 The OFT ultimately determined that Waze's community of users was currently too small for it to become a "disruptive force in the market."38 However, Waze's technology in the hands of Google with its large customer base would allow the efficiency-enhancing benefits of that technology to flourish and provide considerable value to many consumers. The OFT approved the acquisition.

The issues raised in Google's acquisition of Waze are common and central ones in the New Economy. They involve a dominant firm acquiring a small firm with a nascent technology. The acquisition may allow the technology to produce better services for a wider group of consumers because of the dominant firm's customer base and its human, physical, and financial capital. At the same time, the technology may have the potential to create a rival to that dominant firm, though any such forecast comes with a high level of uncertainty. It is a challenging task for a competition authority to assess whether such an acquisition should be allowed.39

Two-Sided Platforms

A two-sided platform brings together two different types of agents for the purpose of engaging in a transaction.40 A platform could be located at a physical place, such as 85 Old Brompton Road in London. As the location for auction house Christie's, the platform brings together the owner of an item (who is one type of agent) and prospective buyers (who are another type of agent) for the purpose of selling that item. eBay offers a virtual counterpart with its offering of online auctions. A newspaper is a physical platform that matches readers with advertisers. Readers are attracted by the newspaper's content (and may attach positive or negative value to ads), and advertisers are attracted to readers. The online world has many such "attention" platforms, where advertisers are drawn by the presence (or attention) of consumers, and consumers are drawn by content, whether it is search information (so the platform is a search engine, such as Google), contact with friends (so the platform is a social network, such as Facebook), or videos (so the platform is a streaming video website, such as YouTube). Figure 9.8 displays the role played by a platform as it connects two user groups. This structure of interactions is ubiquitous in both the online and offline worlds, as reflected by the many examples in table 9.1. Note that platforms can be rather diverse. A platform could be a credit card, which supports transactions between consumers and retailers, or it could be an operating system like iOS, which supports the sale of apps for the iPhone between developers and smartphone users.

Figure 9.8 User Groups Interact through a Two-Sided Platform

Table 9.1 Examples of Two-Sided Platforms

Market | Platforms | User Group 1 | User Group 2
Auctions | eBay, eBid, Christie's | Sellers | Buyers
Search engines | Google, Bing | Advertisers | Consumers
Taxi service | Uber, Lyft | Drivers | Passengers
Operating systems | Windows, Mac | App developers | Consumers
Credit cards | VISA, MasterCard | Retailers | Consumers
Video games | PlayStation, Nintendo | Game developers | Gamers
Employment | Monster.com, Careerbuilder.com | Employers | Job seekers
Dating* | eHarmony, Match.com, Tinder, nightclubs | Women | Men
Dining reservations | Open Table, Reserve | Restaurants | Diners
Accommodations | Airbnb, Wimdu, Booking.com | Property owners | Consumers
Retailing | Shopping malls, Amazon | Stores | Consumers

* In describing the user groups as men and women, we are presuming heterosexual dating. There are also same-sex dating platforms.

A two-sided platform performs an intermediation role by reducing the transaction costs that agents might incur to find each other and consummate an exchange. It is then not surprising that they are a common feature of the New Economy, for a strength of the Internet is its ability to process large amounts of information, which is exactly what must be done to find matches that are likely to produce exchange. For example, two-sided platforms are integral to the sharing economy, whether they match drivers and passengers for short distances (Uber and Lyft) or long distances (BlaBlaCar, Carpooling.com) or match property owners and renters (Airbnb, Wimdu).41 Several features of two-sided platforms are critical to understanding their performance and are pertinent to antitrust analysis. The first is the presence of network effects. Recall that a product or service has a network effect when its value to a user is higher when more users use that product or service. In the case of a two-sided platform, the value that a user attaches to the associated service increases with the number of users on the other side of the platform. For example, a buyer finds it more valuable to use eBay when there are more sellers, because then it is more likely that a seller will have an item of interest to the buyer. The more people there are at eHarmony, the more likely a person is to find a suitable partner. The more Lyft drivers there are, the closer the nearest Lyft car is to a person in need of transport. In two-sided platforms, network effects are powerful and come from the other side of the platform. A second effect is present in some two-sided platforms, which is congestion. Congestion is the opposite

of network effects: The more users of a service there are, the lower will be the value that a user attaches to it. We are all familiar with this effect in the context of transportation; the more drivers there are on the roads, the longer it will take to drive and thus the lower the value of driving. Congestion effects can arise in association with a user’s own side of the platform. For example, the more buyers there are at eBay, the less attractive eBay will be to a buyer as then there are more buyers with which to compete when bidding on an item. However, not all two-sided platforms have congestion effects. The value to conducting a search at Bing or viewing a video at YouTube is not diminished if many other people are doing so.42 In addition, there can be network (not congestion) effects among the same type of users at some platforms, such as social networks like Facebook. Note that Facebook is also a two-sided platform, as it brings together advertisers and consumers, as well as friends. The third critical feature of a two-sided platform is the manner in which services are priced. In a standard market, a seller selects the price it will charge a buyer. In contrast, the owner of a two-sided platform decides the prices to be charged to the two types of users. Optimal prices can be very different from what the standard model of pricing would predict. For example, profit maximization could mean charging one type of user a price below cost or even a negative price (that is, paying them to use the platform). What matters is not just the total price charged to a pair of user types when they conduct a transaction but how that total price is distributed between the two user types. Thus, the price structure matters. The pricing issue is rather important from both a business perspective (if one owns a platform) and when conducting antitrust analysis. In the next section, we consider the determinants of optimal pricing for a two-sided platform. Later in the chapter some of the antitrust implications will be discussed. We conclude this introduction with a comment on terminology. Initially, a two-sided platform was referred to as a two-sided market. It is indeed the case that a two-sided platform such as eBay is itself a market in that there are both buyers and sellers. However, it is also the case that platforms compete; for example, eBay competes with eBid and Yahoo! Auctions (at least in Japan). There is competition within eBay (that is, buyers compete in an auction and sellers compete to attract buyers to their auctions) but also competition across platforms (for example, eBay and eBid compete for participation of buyers and sellers). For these reasons, it is preferable to refer to them as two-sided platforms rather than two-sided markets. Prices at a Two-Sided Platform A two-sided platform can collect revenue in several ways. It can charge a fee to users to access the platform. For example, eBay charges a seller a fee to list an item, eHarmony has users pay a monthly subscription fee, and a shopping mall requires a store to pay rent. Another approach is to charge a fee when users from two sides of the platform are matched. For example, when a user clicks an ad (that is, a sponsored link) at Google, the advertiser is charged a fee. When an item is sold at eBay, the seller is charged a commission. In addition to deciding on whether to charge for access, a match, or a transaction, a two-sided platform also decides how to price the different sides of the market. 
eBay has a positive price for sellers but a zero price for buyers (who access and transact at eBay at no charge). In contrast, the auction houses Christie’s and Sotheby’s have both the seller and buyer pay a commission when a sale takes place. When an item is purchased at a store with a credit card, the credit card company charges a fee to the store that is a percentage of the purchase and may provide a payment (which means a negative price) to the consumer. While we may be accustomed to certain pricing structures—such as only sellers are charged a price to access eBay and Amazon—it is important to note that these structures are the choice of the platform. Our task is to explore what factors into that choice. Focusing on when users are charged to access the platform,

the analysis is composed of three steps. Given prices for the two user types, the first step is to determine which prospective users participate at the platform (that is, the equilibrium quantity of users). That gives us user demand, which is used in the second step: Solve for the prices that maximize profits when there is a monopoly platform. The third step is to discuss what additional forces come into play when there are competing platforms. Equilibrium quantity of users Consider a two-sided platform that is an auction or retailing site, such as eBay or Amazon (in its capacity as a platform for other online stores to post their wares). The two user groups are buyers and sellers. A buyer’s value to using the platform is given by VB(QS, b), where QS is the quantity of sellers using the platform and b is a buyer’s type. The quantity VB(QS, b) is increasing with QS because of network effects. Assume that there are many buyers and they differ in the value attached to using the platform, where a higher value for b indicates the buyer finds it more valuable, as portrayed in figure 9.9a. One can think of a buyer’s type as a trait (for example, income) that positively influences the buyer’s desire to buy some item, in which case the value of going to eBay is higher when b is higher. Analogously, VS(QB, s) is the value that a seller of type s assigns to using the platform, where QB is the quantity of buyers at the platform, and VS(QB, s) increases with QB. As shown in figure 9.9b, a higher value for s corresponds to a seller who finds it more valuable to participate at the platform (perhaps because its inventory is large or it has low cost). For simplicity, congestion effects are assumed to be negligible.43

Figure 9.9 (a) Buyer’s Value from the Platform (b) Seller’s Value from the Platform

If a buyer of type b believes there will be QSe sellers at the platform, then VB(QSe, b) is her maximum willingness to pay to access the platform. Let PB and PS denote the price charged to a buyer and a seller, respectively, for joining the platform. (Note that these are the prices charged by the platform owner and are not the prices involved in any exchange between a buyer and a seller.) A buyer joins the platform if and only if she values it at least as much as it costs her: VB(QSe, b) ≥ PB. If VB(QSe, b′) = PB then, as shown in figure 9.9a, only buyers with b ≥ b′ will pay the price and participate at the platform. If we let NB(b′) denote the number of buyers whose type is at least b′, then NB(b′) is how many buyers participate at the platform. While one may be inclined to think that NB(b′) is buyers' demand for the platform, remember that NB(b′) depends on how many sellers participate, and we have not yet solved for that. We simply posited that buyers expect QSe sellers; in fact, how many sellers participate depends on how many buyers participate. Suppose sellers expect QBe buyers to be at the platform, and the price charged to a seller for participating is PS. If we go through a similar analysis as we did for buyers (see figure 9.9b), sellers whose type is at least s′, where VS(QBe, s′) = PS, will join. If NS(s′) is the number of sellers with a type at least as great as s′, then NS(s′) is how many sellers will be at the platform when they believe there will be QBe buyers and the price for a seller to access the platform is PS.

Given prices, a user equilibrium is a quantity for each side of the platform, QB* and QS*, such that if sellers believe QB* buyers will join the platform, then QS* sellers will find it optimal to join, and if buyers believe QS* sellers will join the platform, then QB* buyers will find it optimal to join. In other words, the beliefs of each side are fulfilled. If that were not the case, then the situation would not persist. For example, if fewer sellers were there than expected by buyers, it would cause fewer buyers to join. The conditions defining a user equilibrium can be cast in terms of the types of buyer and seller, b* and s*, who are just indifferent to joining:

VB(NS(s*), b*) = PB,  (9.1)

VS(NB(b*), s*) = PS.  (9.2)

Equation 9.1 tells us that if buyers believe NS(s*) sellers will join the platform, then the net surplus to a type b* from paying PB and joining is zero, which implies that all those of a higher type will have a positive net surplus. Hence, NB(b*) (or QB*) buyers will join. Of course, that participation decision depends on sellers joining, and equation 9.2 ensures us that will be the case. For if sellers believe NB(b*) buyers will join, then sellers of type s* and higher will join, which results in NS(s*) (or QS*) sellers on the platform. Given prices PB and PS, a platform can then expect a quantity of QB* on one side (buyers) and QS* on the other side (sellers). We have just derived the platform's demand functions: QB* = DB(PB, PS) and QS* = DS(PB, PS). Note that the demand from each side depends on the prices charged to both sides. The higher is PB, the fewer buyers will join, by the usual logic of downward-sloping demand. Given that there are fewer buyers, fewer sellers will then join; hence, seller demand is also lower. The quantity of a user group is decreasing with both the price charged to that user group and the price charged to the other user group.

These forces are displayed in figure 9.10. Suppose that the initial user equilibrium is based on prices PB′ and PS′, which results in NB(b′) buyers and NS(s′) sellers joining the platform. Now suppose that the price for buyers rises to PB″. If buyers continued to believe there will be NS(s′) sellers, then buyers whose type lies between b′ and bo will no longer join, and the quantity of buyers will fall to NB(bo). Now that there are fewer buyers, fewer sellers will join, which is reflected in the value for sellers shifting down. Depicted in figure 9.10 is the new equilibrium, which has NB(b″) buyers and NS(s″) sellers joining the platform. Note that the value that a buyer attaches to the platform has shifted down to VB(NS(s″), b), in which case only buyers with b ≥ b″ join. Given that the number of buyers is NB(b″), the value of a seller shifts down to VS(NB(b″), s), in which case only sellers with s ≥ s″ join.

Figure 9.10 (a) Change in Buyer’s Value from the Platform (b) Change in Seller’s Value from the Platform
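To make the user equilibrium concrete, here is a minimal numerical sketch (an illustration, not a specification from the text). It assumes buyer and seller types are uniformly distributed on [0, 1] with unit mass on each side and value functions VB(b, NS) = b·NS and VS(s, NB) = s·NB, so the marginal types solve b*·NS(s*) = PB and s*·NB(b*) = PS. Consistent with figure 9.10, raising the buyer price reduces participation on both sides.

```python
# Minimal sketch of a user equilibrium on a two-sided platform.
# Assumptions (illustrative, not from the text): types are uniform on [0, 1]
# with unit mass on each side, V_B(b, N_S) = b * N_S and V_S(s, N_B) = s * N_B,
# so N_B(b') = 1 - b' and N_S(s') = 1 - s'.

def user_equilibrium(p_b, p_s, iters=2000):
    """Find the marginal types (b*, s*) whose beliefs are fulfilled."""
    b_star, s_star = 0.5, 0.5              # initial guess for the marginal types
    for _ in range(iters):
        n_s = 1.0 - s_star                 # sellers who join: N_S(s*)
        n_b = 1.0 - b_star                 # buyers who join:  N_B(b*)
        # Marginal buyer is indifferent: b* * N_S(s*) = P_B (similarly for sellers).
        b_star = min(p_b / n_s, 1.0) if n_s > 0 else 1.0
        s_star = min(p_s / n_b, 1.0) if n_b > 0 else 1.0
    return b_star, s_star

b1, s1 = user_equilibrium(p_b=0.10, p_s=0.20)
print(f"buyers joining: {1 - b1:.3f}, sellers joining: {1 - s1:.3f}")

# Raising the buyer price lowers participation on *both* sides of the platform.
b2, s2 = user_equilibrium(p_b=0.20, p_s=0.20)
print(f"after P_B rises: buyers {1 - b2:.3f}, sellers {1 - s2:.3f}")
```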

Monopoly prices

Let us shift from our buyer-seller example to a more generic description with user groups 1 and 2. Given demand functions Q1(P1, P2) and Q2(P1, P2), the platform owner will choose prices for the two user groups that maximize total profit. Let MC1 and MC2 denote the (constant) marginal cost of having someone from user group 1 and 2, respectively, access the platform. The monopoly platform profit maximization problem can be presented as choosing P1 and P2 to maximize

(P1 − MC1)Q1(P1, P2) + (P2 − MC2)Q2(P1, P2).

In the standard monopoly model, the marginal revenue from selling to another consumer equals the revenue from that additional consumer, which is just the price the consumer pays, less the reduction in revenue from the other consumers (because price had to be lowered to induce another consumer to buy). The marginal revenue from another user group 1 agent is then P1 + Q1ΔP1, where ΔP1 < 0 is the reduction in price required to get that additional user. The platform earns P1 from that additional user and loses revenue of Q1ΔP1 from those in user group 1 who were already accessing the platform. That is the standard treatment of marginal revenue. With a two-sided platform, an additional term must be added to that marginal revenue expression. By inducing one more user group 1 agent to join the platform, the platform becomes more attractive to user group 2, so more of them join. Let ΔQ2 be the rise in user group 2 participation for each additional user group 1 agent. From each of those user group 2 agents, the platform is earning profit of P2 − MC2. Hence, if, by lowering the user group 1 price, another user group 1 agent accesses the platform, the profit from user group 2 rises by (P2 − MC2)ΔQ2. The marginal revenue from selling to one more user group 1 member is then MR1 = P1 + Q1ΔP1 + (P2 − MC2)ΔQ2. As the term (P2 − MC2)ΔQ2 raises marginal revenue, it causes the profit-maximizing value of Q1 to rise, as shown in figure 9.11, and this implies a lower price P1. The bigger is ΔQ2, the more the marginal revenue curve for user group 1 shifts up, which implies a lower price for user group 1 in order to attract more of them. Recall that ΔQ2 measures how much user group 2 demand rises in response

to a unit rise in user group 1. This effect is driven by how much value user group 2 gets out of user group 1. Therefore, the stronger are the network effects generated by user group 1 for user group 2, the lower the profit-maximizing price will be for user group 1.

Figure 9.11 Higher Marginal Revenue for User Group 1 from Network Effects on User Group 2
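The following sketch illustrates this pricing logic numerically; the linear demand system and parameter values are assumptions made for the illustration, not taken from the text. Each group’s types are uniform on [0, 1], a type-t member of group i values the platform at t + αi·Qj (where Qj is participation on the other side), and marginal costs are zero, so demands solve Q1 = 1 − P1 + α1Q2 and Q2 = 1 − P2 + α2Q1. With no cross-group effects the two profit-maximizing prices are equal; when group 1 generates network effects for group 2 (α2 > 0) but not the reverse, the group 1 price falls and the group 2 price rises.

```python
# Minimal sketch of monopoly pricing on a two-sided platform with one-way
# network effects. Demands (illustrative assumption): Q1 = 1 - P1 + a1 * Q2,
# Q2 = 1 - P2 + a2 * Q1. Marginal costs are zero.
import itertools

def quantities(p1, p2, a1, a2):
    d = 1.0 - a1 * a2
    q1 = (1.0 - p1 + a1 * (1.0 - p2)) / d
    q2 = (1.0 - p2 + a2 * (1.0 - p1)) / d
    return max(q1, 0.0), max(q2, 0.0)

def profit(p1, p2, a1, a2):
    q1, q2 = quantities(p1, p2, a1, a2)
    return p1 * q1 + p2 * q2               # zero marginal costs

def best_prices(a1, a2, step=0.01):
    """Profit-maximizing prices found by a coarse grid search."""
    grid = [round(i * step, 4) for i in range(int(1 / step) + 1)]
    return max(itertools.product(grid, grid),
               key=lambda pp: profit(pp[0], pp[1], a1, a2))

print(best_prices(a1=0.0, a2=0.0))   # no network effects: roughly (0.50, 0.50)
print(best_prices(a1=0.0, a2=0.5))   # group 1 benefits group 2: roughly (0.33, 0.67)
```

The group that generates the network effects (group 1) gets the lower price, even though nothing about its own demand or cost has changed.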

We can now state a general insight regarding the relative prices charged to the two user groups. As a starting point, suppose there are no cross-user group network effects—so the demand by a user group is independent of how many users are on the other side—and both user groups have the same demand and marginal cost. In that case, the profit-maximizing prices will be identical. Now suppose user group 2 attaches value to user group 1 but the reverse is not true. As explained above, the marginal revenue from selling to user group 1 is higher, which causes the monopolist to lower the price to user group 1 in order to expand how many of them participate on the platform, for more of user group 1 increases user group 2’s demand. With more user group 1 agents, user group 2 attaches higher value to the platform, so the monopolist will then probably want to raise the price on user group 2. Compared to when neither group cared about the other, the price of user group 1 is lower and the price of user group 2 is higher. The principle established by that argument is that, holding all other factors the same (such as cost), a monopolist will optimally set price lower (higher) for the user group that attaches relatively less (more) value to the other user group that is on the platform. If user group 2 attaches more value to user group 1 being on the platform than user group 1 attaches to user group 2 being on the platform, then the platform owner will do more to encourage user group 1 participation in order to enhance demand by user group 2. It does that by setting a low price for user group 1. The user group that generates more network effects (for the other user group) will have a lower price. Consistent with this point, many platforms have starkly different prices for different user groups. In some

cases, a user group’s price is zero or even negative, while the other user group is charged a nontrivial positive price. For eBay, buyers face a zero price, while the basic commission rate for sellers is 10 percent of the amount of the sale up to a maximum of $750 (and there is a fee to list the good). Many advertiser-funded platforms charge a zero price to one side of the platform and a positive price to the other side. At Google, the user groups are consumers and advertisers. Consumers attach far less value to advertisers than advertisers do to consumers; consumers generate large network effects for advertisers. As profit maximization would predict, consumers pay a lower price than advertisers. In fact, the official price for someone to use the Google search engine is zero, which means, given the service has positive value, the actual price is negative (that is, Google is paying consumers to come to the platform by providing search). In contrast, advertisers pay a positive price every time a consumer clicks a sponsored link. While consumers would come to Google even without advertisers (and, in fact, did so before sponsored links were launched), advertisers would not join the Google platform unless there were consumers. This relationship between consumers and advertisers in terms of network effects and prices holds as well for social network sites, such as Facebook. Recall from chapter 3 that the optimal monopoly price Pm satisfies the condition

(Pm − MC)/Pm = 1/η,

where η is the absolute value of the price-elasticity of demand; the more price-inelastic is demand, the higher the price-cost margin will be. When the monopolist controls a two-sided platform, the analogous expression for user group 1 is44

(P1 − MC1 + (P2 − MC2)ΔQ2)/P1 = 1/η1,  (9.3)

where ΔQ2 measures how much user group 2 demand rises in response to an increase in user group 1 participation. (A similar expression holds for the price-cost margin for user group 2.) If user group 2 derives strong network effects from user group 1, then ΔQ2 is large, and that will raise the left-hand side of equation 9.3. For the equality to hold, the user group 1 price must be low. If user group 2 derives no value from user group 1, then ΔQ2 = 0, but that does not imply that the user group 1 price is unaffected by the platform. If user group 1 values user group 2, then the monopolist will set a lower price for user group 2, which will raise network effects for user group 1 and increase their willingness to pay. That higher willingness to pay typically translates into making user group 1’s demand more price-inelastic, which lowers η1, and that results in a higher price-cost margin for user group 1. In sum, optimal two-sided platform prices depend not only on the cost of serving a user or the value that a user attaches to the platform but also on how much a user’s participation adds to the platform’s value to the other types of users. If a user group imparts (receives) strong network effects to (from) the other user group, its price-cost margin will tend to be low (high).

Prices for competing two-sided platforms

Thus far the analysis has considered the case of a monopoly. Now suppose there are competing two-sided platforms, such as Uber and Lyft for local transportation or Google and Bing for search engines. The insight of the previous section remains relevant when it comes to the relative prices between the two user groups. Our discussion will focus on additional considerations when the platforms compete. The first factor to take account of is differentiation across platforms. Recall from chapter 4 that price

competition is more intense when firms’ products are less differentiated. When products are more similar, a consumer’s choice of a product is more sensitive to price. As firm demand is then more price-elastic, each firm has a stronger incentive to undercut a rival’s price, which results in low prices. The same force operates when two-sided platforms compete. The more similar are platforms, the more intense price competition will be. A twist arises when product differentiation varies between the two sides of the platform. Suppose the customer interfaces for Uber and Lyft are largely viewed to be the same, but the driver interface is highly differentiated. In that case, passenger demand for a platform will be more sensitive to price than driver demand will be. As a result, price competition will tend to be more intense for passengers. All other things the same, price-cost margins will be lower for the side of the platform for which the competing platforms are less differentiated.

Other factors emphasized in our analysis of monopoly pricing may prove more important than the extent of platform differentiation. In the case of Google and Bing, the sites may be reasonably differentiated on the consumer side, as Google is generally recognized to have a better search engine. The preceding discussion would suggest that price competition on the consumer side would not be intense. However, a more important force is the network effects received by advertisers, which encourage low prices for consumers; in practice, price is zero. By virtue of its larger customer base and better search algorithm, Google receives a higher pay-per-click price than Bing from advertisers (and Google delivers a higher click-through rate for advertisers).45 More generally, product differentiation can be a minor factor compared to network effects.

When there are competing platforms, the issue of multihoming arises, which can be quite important for understanding platform competition. Multihoming is when a user participates in two or more platforms for the same service. With local transportation, some consumers multihome by having the apps for both Uber and Lyft on their smartphones (and using both services), while drivers are probably more likely to single home by being on either the Uber or Lyft platform. With computers, consumers single home by using either a PC or an Apple computer, while developers multihome by writing programs for both operating systems. Platforms will tend to compete more aggressively for the user group that is more inclined to single home. To see why, suppose platforms A and B compete, where user group 1 multihomes and user group 2 single homes. If platform A lowers its user group 2 price, it will attract more of those users, which will include drawing them from platform B, because these users only single home. Thus, the network effects generated by user group 2 for user group 1 have gone up for platform A and down for platform B. Platform A is then looking relatively more attractive for user group 1 compared to platform B. That effect would be weaker, however, if platform A had instead lowered the price for user group 1. The price reduction will attract more user group 1 agents, but some will continue to be on platform B, because they multihome. While platform A’s network effects for user group 2 have gone up (because it has more user group 1 agents), platform B’s network effects may not have fallen by much (because of multihoming).
Hence, the increase in the relative appeal of platform A is not as strong when it lowers the price for the multihoming users as when it lowers the price for the single-homing users.

Challenges in Antitrust Analysis

Three of the lessons learned from analyzing standard markets are: (1) efficiency requires that relative prices reflect relative costs; (2) a high price-cost margin indicates market power; and (3) a price below marginal cost is evidence of predatory pricing (as discussed in chapter 8). None of these lessons applies to two-sided markets.46 A zero or even negative price is common in two-sided markets (and thus price lies

below marginal cost). If one side is priced below (and perhaps well below) marginal cost, then the other side must have a high price-cost margin if a firm is to earn at least a normal rate of return, and that can occur even in markets with stiff competition between platforms. Hence, a high price-cost margin need not reflect high market power. Finally, prices can often be unrelated to costs, especially in online markets where the marginal cost of a consumer can be near zero (for example, the cost of a search), and prices are dominated by noncost factors, such as network effects and whether a user group multihomes. As a result, antitrust analysis must be conducted differently when it involves two-sided platforms.

As a case in point, suppose two firms are proposing a merger and the issue is how much this will increase market concentration. Addressing that question requires defining the market. As covered in chapter 6, the DOJ-FTC Horizontal Merger Guidelines defines a market as the smallest set of products such that a “small but significant and nontransitory increase in price” (SSNIP) for those products is profitable. Operationally, the test is taken to be a 5 percent increase in price lasting for one year. The SSNIP test is not very useful if price is zero, because a 5 percent increase of zero is zero! Furthermore, the profit impact of any price increase will occur on both sides of the market, and that must be taken into account. Should the SSNIP involve a 5 percent increase for both user groups? It is not trivial to determine how to extend standard methods of market definition to two-sided markets.47 Many of those two-sided platforms with zero prices are seeking to attract one side of the platform and do so by making it free and offering services (such as search) or content (such as videos or news stories). While price is zero, there is positive quality associated with that service. It has then been suggested that SSNIP could be replaced with a “small but significant and nontransitory decrease in quality” (or SSNDQ; try saying that!). If it is profitable for a collection of services to all reduce quality by, say, 5 percent, then perhaps that could define a market. Besides the difficulty of measuring quality and its impact on demand, it is unclear how useful this will be when evaluating mergers of competing two-sided platforms.

As methods of antitrust analysis for two-sided markets are not yet well developed or widely applied, our focus will not be on methods but rather on some recent cases that highlight some of the relevant antitrust issues. As one of the dominant companies in the world, Google has drawn the attention of global competition authorities on a number of issues related to monopolization practices.

Google

Google operates a general purpose (or “horizontal”) search engine that is dominant in many countries, and it has a market share exceeding 90 percent in most EU countries. By comparison, “vertical” search engines focus on narrowly defined content categories, such as travel (for example, Trip Advisor) and dining (for example, Yelp). In the vertical area of comparison shopping, Google initially entered without success with Froogle and then Google Product Search.
It has achieved far more success with its more recent Google Shopping, but the EC claims that success may be the result of Google having abused its dominance in general search engines and thereby has violated Article 102.48 When a user enters a keyword, such as “headphones,” Google reports organic links, ads, and Google Shopping links (which, as with ads, provide revenue to Google when clicked). The links from other vertical search engines appear under organic links. The EC’s Statement of Objections noted that Google’s placement of its own shopping links was not determined by their suitability (as assessed by the Google Search algorithm), while the links of competing vertical search engines were. FTC economists noted that this gave Google Shopping a distinct advantage: Google’s vertical properties would rank poorly if they were crawled and indexed by Google, because they have never been

“engineered” for ranking by the search engine. Unlike Google’s vertical competitors, who expend considerable resources on optimizing their websites in order to rank highly, … Google does not expend the time and resources to optimize its own vertical properties: it simply places them on the Search Engine Results Page.49

This practice “had the collateral effect of pushing the ‘ten blue links’ of organic search results that Google had traditionally displayed farther down the search results page [and some vertical search engines] charged that Google manipulated its search algorithms in order to demote vertical websites that competed against Google’s own vertical properties.”50 Both the EC and FTC concluded that this practice resulted in significant harm to rival vertical websites [which] have experienced significant drops in traffic.… Simultaneously, Google’s prominent placement and display of its Universal Search properties led to gains in user share for its own properties.51

Harming rivals is not necessarily anticompetitive. Competition laws are designed to preserve competition for the purpose of protecting consumers, not protecting competitors. However, the EC went on to conclude that Google’s conduct has a negative impact on consumers and innovation. It means that users do not necessarily see the most relevant comparison shopping results in response to their queries, and that incentives to innovate from rivals are lowered as they know that however good their product, they will not benefit from the same prominence as Google’s product.52

As a remedy, the EC proposed that Google use the same criteria for determining the placement of its own comparison shopping services as it uses for rival vertical search engines. Perhaps as a reflection of the EC’s more aggressive stance toward abuse of dominance or just a different interpretation of the facts, the FTC chose not to pursue a case against Google. The reason expressed by FTC economists is a common one when judging a business practice: Is it delivering a better service or does it have little competitive merit? The evidence paints a complex portrait of a company working toward an overall goal of maintaining its market share by providing the best user experience, while simultaneously engaging in tactics that resulted in harm to many vertical competitors, and likely helped to entrench Google’s monopoly power over search and search advertising.… Staff does not recommend that the Commission move forward on this cause of action.53

FTC economists did identify one particular practice that was seen as harmful to rivals and lacked “a countervailing efficiency justification.” Google Shopping improved its own listings by “scraping” and posting the content of rival vertical websites. They recommended action against Google, because this practice did not benefit consumers and served to “maintain, preserve, or enhance its monopoly power in the markets for search and search advertising.” Though a case was not pursued, the FTC managed to get Google to agree not to “misappropriate online content from so-called ‘vertical’ websites.”54 Focusing on the first practice (and thus putting aside the scraping and posting of another website’s material), let us summarize the assessment of whether Google violated Section 2 of the Sherman Act. First, is Google a dominant firm? A firm must be dominant in a market to violate Section 2. Google is indeed dominant in general search. The next issue is whether it used that dominance to create market power in shopping comparison search. Did the practice harm rivals? It does appear so. Finally, does the practice have offsetting procompetitive benefits? The EC concluded that it did not, but the FTC did not dismiss the possibility that Google was intending to deliver the “best user experience.”55

Let us move on to the next set of Google practices that drew the attention of competition authorities. Consider an advertiser who wants to place ads on a search advertising platform. To enhance the productivity of this activity, these platforms use application programming interfaces (APIs), which give these advertisers

direct access to the platform in order to manage and optimize their advertising. Such conduct is clearly procompetitive, as it makes advertising more productive. Where the possible anticompetitive concern arises is in contractual restrictions imposed by Google that prevented advertisers from using software that would allow them to simultaneously manage their ads on Google’s AdWords platform and rival advertising platforms (such as Microsoft’s AdCenter). Google mandated that they remove functionality in order to disable the simultaneous (and efficient) management of search advertising campaigns. This restriction imposed by Google raised the cost to a consumer (here, an advertiser) of dealing with a rival firm, while not offering offsetting benefits when dealing with Google. This additional cost was determined to have had an impact, as it was concluded that “advertisers are spending less on the nondominant search network [and] for advertisers, this means forgone advertising opportunities that presumably would have been profitable, but for the restrictive conditions.”56 This case is effectively about raising rivals’ costs and, without any offsetting procompetitive benefits, FTC economists concluded there was a violation of Section 2 of the Sherman Act. Again the FTC did not bring a case, but Google did agree to remove these restrictions.

Finally, the EC accused Google of seeking to maintain its dominance in search engines (as opposed to leveraging that dominance elsewhere) by using its market power in “licensable smart mobile operating systems and app stores for the Android mobile operating system.”57 Google developed the operating system Android, which is the dominant OS used in mobile phones in the European Union. In its Statement of Objections, the EC reported that Google violated Article 102 by requiring manufacturers to pre-install Google Search and Google’s Chrome browser and requiring them to set Google Search as default search engine on their devices, as a condition to license certain Google proprietary apps [and] giving financial incentives to manufacturers and mobile network operators on condition that they exclusively pre-install Google Search on their devices.

Those “proprietary apps” include Google Play Store, which supplies apps and is “commercially important for manufacturers of devices using the Android operating system.”58 We then have both tying of Google Search to Google Play Store and exclusive dealing with regard to pre-installed search engines.59 These practices are reminiscent of Microsoft’s exclusivity practice of tying Internet Explorer (IE) to Windows. Microsoft technologically tied IE to Windows and implemented a contractual tie by prohibiting PC manufacturers from taking IE off the desktop and replacing it with another browser. This latter tie is similar to Google’s provision requiring that Google Search be the default search engine. While Microsoft was seeking to make IE dominant by tying it to Windows, the EC claims that Google was seeking to maintain Google Search’s dominance by tying it to Google Play Store.60

Industries with Rapid and Disruptive Innovation

When innovation is rapid and disruptive, a well-functioning competitive process can mean a series of monopolies.61 Though the market is not competitive as measured by the presence of many firms or low price-cost margins, it is competitive in the sense that prospective firms compete to become the next monopoly. Consumers benefit from competition for the market rather than competition in the market. More concretely, they benefit from the better products and services that come out of a stream of innovations. Though the current dominant firm may be charging a price well above cost, it is the prospect of those high profits that induces potential entrants to invest in order to come up with the next best technology. Such a dynamic setting provides a unique set of antitrust challenges, because now the objective of an

antitrust authority is to maintain strong incentives to innovate rather than to have price close to cost. Some economists argue for a minimalist antitrust policy for at least three reasons. First and foremost, antitrust intervention can stifle innovation. Intervention means constraining what the current dominant firm can do, and that will necessarily lower its profits (because the monopolist could have done what is now mandated by the antitrust authority and presumably it chose not to do so, as that action did not maximize profit). For example, antitrust policy could restrict contractual terms with buyers (as in Microsoft I) or prohibit the tying of certain goods (as was an issue in Microsoft II). By limiting the profits earned, the antitrust authority is reducing the returns to innovation, and that will reduce how much is spent on R&D and ultimately decrease the rate at which new products are developed and come to market. One is reminded of the caution opined by Judge Learned Hand: “The successful competitor, having been urged to compete, must not be turned upon when he wins.”62 Second, even if there are some anticompetitive practices, by the time that the antitrust process runs its course (an investigation is conducted, a decision is reached, and the judicial process of review and appeal is completed), the market may have changed so much that any remedy is inappropriate or irrelevant. Antitrust litigation moves slowly relative to the new economy.… The mismatch between law time and new-economy time is troubling in two respects. First an antitrust case … may drag on for so long relative to the changing market conditions … as to become irrelevant, ineffectual.… Second, even if the case is not obsoleted by the passage of time, its pendency may cast a pall over parties to and affected by the litigation, making investment riskier and complicating business planning.63

A third rationale for an antitrust authority to have a light touch is the belief that any abuse of market dominance is likely soon to be rectified by the arrival of a new firm with a better technology that displaces the dominant firm. Furthermore, the abusive dominant firm’s reckoning may draw nearer when its abuse is more harmful to consumers. Every time the monopolist asserts its market dominance [it gives customers] more incentive to find an alternative supplier, which in turn gives alternate suppliers more reason to think that they can compete with the monopolist. Every act exploiting monopoly power to the disadvantage of the monopoly’s customers hastens the monopoly’s end by making the potential competition more attractive.64

In sum, the laissez faire view argues that the remedy is unlikely to be effective or necessary when dealing with any current harm created by anticompetitive conduct, while antitrust proceedings could well create future harm by discouraging R&D and thereby reducing the rate of innovation. Although there is genuine reason for concern that an aggressive antitrust authority could have detrimental consequences for innovation, the above argument for little or no role for antitrust policy errs on the side of wishful thinking. If anticompetitive conduct serves to make entry by a more efficient firm difficult, the argument that any abuse will soon be corrected is undermined. And if entry is more difficult or less profitable due to those practices, then the incentives to invest in R&D are lessened, so innovation could be harmed by weak antitrust enforcement as much as it can be by strong enforcement. We also know from our analysis of network effects that a strong incumbency advantage can exist, which—even without anticompetitive conduct—can result in better technologies failing. Allowing that incumbent firm to abuse its dominant position would raise the already formidable obstacles to entry faced by even more efficient firms.

This discussion reveals that a more active antitrust policy has two counteracting effects on the returns to innovation. It reduces the profits earned when a firm is a monopolist (which lowers the return to innovation), but it also raises the profits earned by an entrant when it competes with the currently dominant firm (which increases the return to innovation). To examine this tradeoff, we turn to describing a model of a market with rapid and drastic innovation and use it to discuss the impact of antitrust policy.

Segal-Whinston Model

Suppose that at any moment in time, the market is occupied by a monopolist earning monopoly profit by virtue of having the most advanced technology.65 A stream of potential entrants threatens to displace that dominant firm. Assume there is one potential entrant at any time. Its decision is how much to invest in R&D in order to come up with the next best technology that would make it the new monopolist. If its investment succeeds in delivering an innovation, the new firm and the established dominant firm compete; the new firm has the advantage of a better technology, while the incumbent firm has an edge in its dominant position in the market. Though initially the two firms compete, eventually the new technology wins out, and the previously dominant firm exits (or becomes a fringe firm); the entrant is now the new monopolist. This process repeats anew as the new monopolist earns monopoly profit until an entrant arrives with a superior technology to take over the market. This market is then described as a sequence of monopolies with the exception of a short time when the previous technology leader competes with the new technology leader. There is competition in the market during that phase, but the primary form of competition is for the market. A series of potential entrants strive over time to become the next monopolist, and consumers benefit from each technological innovation.

In this model, antitrust policy plays a role during the phase when the entrant and the incumbent firm are both present. It is presumed that the dominant firm could engage in anticompetitive practices to reduce the profit of the entrant. Antitrust policy can limit the extent of those practices, which would raise the entrant’s profit (and possibly lower the incumbent firm’s profit). In its most extreme form, anticompetitive conduct could drive the entrant out of the market, so the incumbent firm maintains its monopoly. This model assumes that possibility away and focuses on how the monopolist can abuse its dominant position to make entry less profitable.66

Let us look at the problem faced by a potential entrant investing in R&D. If investment fails to produce an innovation, then the potential entrant incurs a loss equal to the R&D expenditure and departs.67 If it succeeds, then it earns profit of πE while competing with the incumbent firm. The incumbent firm earns πI during that competition phase. After the incumbent firm exits, the entrant earns monopoly profit πM as the new monopolist.68 Actually, it earns πM only until another entrant comes along with the next best technology, at which time it earns πI for one period and then exits. Of course, how long it can expect to be a monopolist depends on the R&D investment of future potential entrants. For now, let VI denote the present value of the expected profit stream of a dominant firm. It takes into account the earning of monopoly profits until the next innovation. In examining the R&D decision of a potential entrant, we can think about it choosing a level of expenditure that results in some probability of success, or equivalently, the problem can be cast as choosing a probability of success, which then requires a certain level of R&D expenditure. The latter approach will prove more useful. Let φ denote the probability of success chosen by a potential entrant, and c(φ) be the cost of achieving φ. The function c(φ) is assumed to be increasing in φ, which means that a higher probability of producing an innovation requires spending more on R&D.
The problem faced by a potential entrant is to choose a probability of success in order to maximize its expected return less the cost of the associated R&D. Its return is initially receiving πE and then, after the incumbent firm exits, earning monopoly profits until a later potential entrant innovates. The present value of that uncertain stream of profits is VI, which is discounted, as the stream starts only after the incumbent firm exits. Letting δ denote the discount factor, a potential entrant’s problem is then to choose φ to maximize

φ(πE + δVI) − c(φ).  (9.4)

The potential entrant incurs cost c(φ) and, with probability φ, it receives an “innovation prize” of

w = πE + δVI.

A potential entrant chooses the probability of innovation φ to maximize φw − c(φ), which is the expected benefit less the cost of R&D. For a given value for the prize, w, let S(w) denote the solution to equation 9.4; it is the profit-maximizing level of φ. Solution S(w) is referred to as the “supply” of innovation and is plotted in figure 9.12. S(w) is upward sloping, because a bigger prize brings forth more R&D, which results in a higher rate of innovation.

Figure 9.12 Benefit and Supply of Innovation
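As a simple illustration (an assumption made for this sketch, not the text’s specification of c(φ)), suppose the R&D cost function is quadratic, c(φ) = kφ². The first-order condition for maximizing φw − c(φ) then gives S(w) = w/(2k), an upward-sloping supply of innovation:

```python
# Minimal sketch of the "supply of innovation" S(w), assuming (illustratively)
# a quadratic R&D cost c(phi) = k * phi**2.

def innovation_supply(w, k=2.0):
    """phi that maximizes phi * w - k * phi**2: from w - 2*k*phi = 0, capped at 1."""
    return min(w / (2.0 * k), 1.0)

for w in (0.5, 1.0, 2.0, 4.0):
    print(f"prize w = {w:>4}: S(w) = {innovation_supply(w):.3f}")
# A bigger prize brings forth more R&D and a higher innovation rate.
```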

We have just shown how the innovation rate φ depends on the innovation prize w. However, the prize also depends on the (future) innovation rate, as that innovation rate determines how long the new monopolist can expect to hold its monopoly. That is, the values for φ chosen by future potential entrants will affect VI and thus influence the innovation rate that is chosen today. Let us next take account of how the prize is determined by the innovation rate. The value of being an incumbent firm, VI, can be expressed as

VI = (1 − φ)(πM + δVI) + φπI.  (9.5)
Note that, with probability 1 − φ, the current potential entrant fails in its R&D, so there is no innovation. In that case, the current incumbent firm is still a monopolist, so it earns πM in the current period and a future profit stream valued by VI. With probability φ, the potential entrant innovates, so the incumbent firm earns profit πI and then exits. Solving equation 9.5 for VI, we have

VI = [(1 − φ)πM + φπI] / [1 − (1 − φ)δ].  (9.6)
Recalling that the prize is w = πE + δVI, substitute the expression for VI in equation 9.6 to obtain

w = B(φ) = πE + δ[(1 − φ)πM + φπI] / [1 − (1 − φ)δ],  (9.7)
where B(φ) is the value of the prize w that results from an innovation rate φ. That is, if each period there is a probability φ of an innovation that would supplant the current monopolist, then B(φ) is the present value of expected profits from being a monopolist. The function B(φ) is plotted in figure 9.12. It is decreasing with φ because a higher innovation rate reduces the expected time until a monopolist is displaced by drastic innovation, which lowers the value of the prize associated with innovating and thereby becoming a monopolist. Figure 9.12 illustrates the tension between the innovation rate and the magnitude of the prize. A higher prize brings forth a higher rate of innovation—as reflected in S(w) increasing with w—but a higher rate of innovation lowers the size of the prize—as reflected in B(φ) decreasing with φ. The equilibrium innovation rate is given by the intersection of the two curves, which occurs at innovation rate φ* and prize w*. The values of φ* and w* are consistent in that if the stream of potential entrants anticipates a prize of w*, then they will all optimally innovate at the rate φ*, and that innovation rate does indeed result in a prize of w*. Note that if, say, the innovation rate were φ′ < φ*, then the resulting prize would be w′ = B(φ′), and that prize is so large (because the innovation rate is so low and thus a monopoly lasts for a long time) that it would induce the innovation rate S(w′) = φ″, which exceeds φ′. Thus, it is not an equilibrium to have an innovation rate of φ′.

The objective is to use this model to gain insight into how antitrust policy impacts the rate of innovation in a market with rapid and drastic innovation. A more active antitrust authority is presumed to restrict what the incumbent firm can do during the competition phase, which would then result in a higher value of πE and (possibly) a lower value of πI. Note that changing πE and πI affects the prize and thus shifts B(φ). However, it does not change the innovation supply curve S(w). (What would shift S(w) would be, for example, if R&D became less productive, so that c(φ) increased.) Hence, a change in the activities of the antitrust authority

shifts B(φ) but leaves S(w) unaltered. Suppose that antitrust policy restricts certain practices, which serves to raise πE but does not affect πI. By equation 9.7, B(φ) shifts up (see figure 9.13). The higher return to innovating causes the innovation rate to rise and, even though monopolies are replaced more quickly, the prize increases, because profits from innovating are higher. Alternatively, suppose a more active antitrust authority lowered the incumbent firm’s profit πI but did not make the entrant any better off, so that πE is unchanged. An example would be if a monopolist were forced to go through expensive legal proceedings that ultimately did not rid the market of any abuse. This activity shifts B(φ) down, and the innovation rate declines.

Figure 9.13 Change in Innovation Rate and Innovation Prize
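The comparative statics depicted in figure 9.13 can also be illustrated numerically. The sketch below combines the quadratic-cost supply S(w) = w/(2k) from the earlier sketch with a prize of the form in equation 9.7; the parameter values, the discount factor, and the damped fixed-point iteration are illustrative choices, not taken from the text. Raising πE raises the equilibrium innovation rate, while lowering πI alone lowers it.

```python
# Minimal sketch of the equilibrium innovation rate phi* = S(B(phi*)).
# Assumptions (illustrative): S(w) = w / (2k) with k = 2, and
# B(phi) = piE + delta * ((1 - phi) * piM + phi * piI) / (1 - (1 - phi) * delta).

def supply(w, k=2.0):
    """Innovation rate chosen by an entrant facing prize w with cost k * phi**2."""
    return min(w / (2.0 * k), 1.0)

def prize(phi, piE, piI, piM, delta=0.9):
    """B(phi): competition-phase profit plus the discounted value of incumbency."""
    v_incumbent = ((1.0 - phi) * piM + phi * piI) / (1.0 - (1.0 - phi) * delta)
    return piE + delta * v_incumbent

def equilibrium_phi(piE, piI, piM, iters=500):
    """Fixed point of phi -> S(B(phi)), computed by damped iteration."""
    phi = 0.5
    for _ in range(iters):
        phi = 0.5 * phi + 0.5 * supply(prize(phi, piE, piI, piM))
    return phi

base = equilibrium_phi(piE=0.2, piI=0.1, piM=1.0)
protect_entrant = equilibrium_phi(piE=0.4, piI=0.1, piM=1.0)    # policy raises piE
hassle_incumbent = equilibrium_phi(piE=0.2, piI=0.0, piM=1.0)   # policy lowers piI only
print(f"baseline:       phi* = {base:.3f}")
print(f"higher piE:     phi* = {protect_entrant:.3f} (innovation rate rises)")
print(f"lower piI only: phi* = {hassle_incumbent:.3f} (innovation rate falls)")
```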

More generally, if a change in antitrust policy benefits the entrant (that is, it increases πE) and any harm to the incumbent firm is not greater in magnitude (that is, πE rises by at least as much as πI falls or, equivalently, πE + πI does not decline), it can be shown that B(φ) shifts up and therefore the innovation rate rises. Intuitively, an entrant immediately reaps the benefits of higher πE but only suffers the loss of lower πI in the future (when an entrant arrives with a better technology), which is discounted. If the rise in πE is at least as great in magnitude as the fall in πI, then the innovation prize is larger.

Let us use this insight to think through the impact of antitrust policy. As an example, consider exclusive dealing. Microsoft engaged in exclusive dealing when it required a PC manufacturer to pay Microsoft for every computer it shipped, whether or not it had installed a Microsoft operating system.69 Suppose that the antitrust authority sets the maximum fraction of customers that the incumbent firm is permitted to sign to an exclusive dealing contract. Antitrust policy is least restrictive when this maximum is one—the incumbent firm is left unconstrained—and is most restrictive when it is zero—so all exclusive dealing is prohibited.

The lower is the fraction, the more assertive the antitrust authority will be. We know from chapter 7 that exclusive dealing can be a profitable anticompetitive practice for an incumbent firm, even when the entrant has a better technology.70 Consider an exclusive dealing arrangement with the incumbent firm that allows customers to buy from another firm if they pay a per unit fee to the incumbent firm (which can be thought of as a contractual penalty for not buying exclusively from the incumbent firm). Having to pay that penalty lowers the price that consumers are willing to pay to buy from a new firm. This exclusive dealing contract then shifts profit from the entrant (through a lower price that it can charge) to the incumbent firm (through the fees it collects from customers who buy from the entrant). When that is a pure transfer (so πE falls by an amount equal in size to the rise in πI), B(φ) shifts down, which implies a lower innovation rate. Hence, more aggressive antitrust intervention that limits exclusive dealing results in more innovation.71 Consumers benefit from the elimination of exclusive dealing, as a higher innovation rate results, so better products arrive faster. However, consumers could be worse off during the competition phase when the established firm and the entrant are competing. Exclusive dealing shifts profit from the entrant to the dominant firm, and it is possible that some of the profits lost by the entrant could end up with consumers. Consumers might be able to get a share when they sign the exclusive dealing contract with the incumbent firm. If that is so, then the prohibition of exclusive dealing makes consumers worse off in the competition phase. However, if the gains from innovation are large (and enough of those gains are appropriated by consumers), then on net consumers will be better off.

Summary

The creation of intellectual property in the form of computer code lies at the heart of the New Economy, whether it is a smartphone OS, a social network, or an online retailer. Of relevance to antitrust, market dominance is a common feature of many New Economy markets. There are several reasons for this. One factor is the existence of scale economies associated with the creation and application of knowledge. There is a high fixed cost from investing in R&D, while the marginal cost of distributing that knowledge to a consumer is typically low.

A second reason for dominance comes from big data, which is relevant to many Internet-related markets. Big data has a learning curve that fuels a dynamic that produces dominance: More user activity generates more data, which is used to generate better predictions as to what users want, which then attracts more users who then generate more data, and so on.

A third common source of dominance in the New Economy is network effects. A product or service has network effects when its value to a consumer increases with how many other consumers use it. In some cases, network effects are direct, as with a social network. More people on a social network make it more valuable to use it, as then there are more people with whom to communicate. Indirect network effects arise when more users result in the supply of complementary products, which enhance the value of a user’s experience. For example, the value of a computer OS to a consumer is greater when there are more software applications written for it.
Software developers are more inclined to write an app for an OS when it has more users, as then there are more potential buyers of that app. Hence, the more consumers who buy an OS, the more apps are written for that OS, which then makes the OS of greater value to those consumers, which induces more of them to buy it; ergo, market dominance. Network effects are also present in the two-sided platforms that form a significant segment of the New Economy. A two-sided platform brings together two different types of agents for the purpose of engaging in

a transaction. An online auction site is a platform that brings together buyers and sellers, while a search engine brings together advertisers and consumers (who are largely interested in conducting search). With a two-sided platform, the network effect is coming from the other side of the platform. For example, a person requiring transportation finds it more valuable to use a ride sharing platform when there are more drivers using that platform, and vice versa.

Markets with dominant firms often draw the attention of competition authorities because of the possibility of anticompetitive conduct. Though the dominance might have been acquired through superior efficiency, there is the concern that it will be sustained and extended to other markets through illegal practices. Cases examined in this chapter involved anticompetitive conduct stemming from Microsoft’s dominance in computer OSs and Google’s dominance in search engines and apps for smartphones using the Android OS.

When it comes to antitrust analysis, some of the basic lessons from standard markets do not apply to many markets in the New Economy. As we know from chapter 8, a price below marginal cost is often taken as strong evidence of predation. However, when there are network effects, below-cost pricing may just be a means of competing to become the dominant firm and might not be driven by predation. Two-sided platforms are especially challenging in this regard. The profit-maximizing price charged to a user on a two-sided platform is often quite unrelated to the cost of serving a user and need not even depend on the value that a user attaches to the platform. Instead, it can be determined by how much a user’s participation adds to the platform’s value for the other types of users (because of cross-group network effects). If a user group imparts (receives) strong network effects to (from) the other user group, its price-cost margin will tend to be low (high). This property can result in a platform charging a zero or even negative price to one type of user. Furthermore, competition among platforms can be intense, as reflected in low profits, but some users face a high price-cost margin. Hence, a high price-cost margin need not reflect high market power.

In much of the New Economy, the primary force is competition for a market rather than competition in a market. As a result, the relationship between market concentration and price is rarely a primary consideration in antitrust matters. More important is the role of potential competition and, in particular, the ease with which a firm with a superior technology can succeed. The heightened importance of potential competition means that a competition authority may be less concerned with the acquisition by a dominant firm of an existing rival and more concerned with its acquisition of a nascent technology owned by a noncompetitor that could prove to be a disruptive innovation. That brings us to another common feature of the New Economy, which is rapid and drastic innovation. While continual incremental innovation is routine for Internet-related firms, the more striking feature is innovation that has the potential to displace what is currently the best product or service on the market. Scale economies in knowledge creation, increasing returns from big data, and network effects all contribute to market dominance, but a high rate of disruptive innovation can mean that dominant firms are regularly supplanted.
This feature poses a significant challenge when it comes to antitrust analysis, because it is difficult to predict where the next disruptive innovation will come from. It is unclear to what extent any current abuse of market dominance—such as exclusionary practices—will soon be made irrelevant by the arrival of a competitor with a superior technology or whether instead that abuse might be stifling what would have been the next disruptive innovation. In spite of these challenges, there is a role for a cautious and judicious competition authority:

social costs.… Clearly, though, the byword of a prudent enforcement agency and a sensible court will be: caution.72

Questions and Problems

1. In United States v. Microsoft Corp., Microsoft expert witness Richard Schmalensee testified:

The software industry is intensely competitive. Microsoft does not have monopoly power over operating systems, nor does it act like a company with monopoly power. Microsoft’s pricing for Windows is far, far lower than what a rational, profitmaximizing company with a monopoly over operating systems would charge. Computer manufacturers pay Microsoft a royalty for their license to Windows that is considerably less than 5 percent of the price of a typical new PC. Application of a standard economic formula for pricing by a monopolist shows that Microsoft charges less than one-sixteenth the price for Windows that a monopolist would charge.73

a. Professor Schmalensee is referring to the static monopoly price as defined in figure 3.2 and equation 8.1. How might you come up with an estimate of the static monopoly price?
b. Assuming it was a monopolist, why might Microsoft price below the static monopoly price?
c. Do you agree or disagree with Professor Schmalensee? Substantiate your position.

2. Two structural remedies were considered for Microsoft. One remedy would have created three identical companies, each selling Windows and all of Microsoft’s applications that run on Windows. The second remedy would have created two companies, one with Windows and the other with the applications.

a. Which remedy do you think would have done a better job of increasing future competition? How about increasing future consumer welfare?
b. Which remedy would have had more of an effect on market power in the OS market? In the applications market?

3. As covered in chapter 8, an essential facility is an input controlled by a monopolist that is essential to competitors and is effectively infeasible for competitors to duplicate. The essential facilities doctrine is that the monopolist must share those resources with their competitors on reasonable terms.

a. Do you think Google’s search engine is an essential facility? (Address this question in the context of online retailing, where the competitors are vertical search engines.)
b. If Google is an essential facility, how would you apply the essential facilities doctrine?

4. In 2015, the biotech entrepreneur Martin Shkreli bought the marketing rights to Daraprim, which is a drug that treats a parasitic infection. He then raised the price from $13.50 per pill to $750 per pill.

a. Is his conduct a violation of U.S. antitrust laws?
b. In a country that prohibits “excessive pricing,” would this price hike be illegal? How would you define the “excessive” price for Daraprim?
c. Should the government intervene and pass legislation to put a price cap on the drug? What are the benefits and costs of such a policy? Should the price cap be set so as to produce, say, a 10 percent rate of return on the manufacturing and marketing of the drug?

5. In the context of the duopoly model of competition in the section “Economics of Markets with Network Effects,” consider firms’ prices prior to any firm becoming dominant. Address how a firm’s dynamically optimal price would respond to each of the following situations.

a. Customer demand is expected to grow faster.
b. A firm’s discount factor is reduced (so that it values future profits less).
c. A superior product is discovered, which is currently in the development stage. Once it hits the market in a few years, the current products will be obsolete.
d. Future antitrust enforcement is anticipated to be weaker, so anticompetitive conduct is less likely to be prosecuted.

6. In the context of the duopoly model of competition in the section “Economics of Markets with Network Effects,” suppose that the two firms were to merge. Assume that the merger occurred before either firm became dominant, and there is no prospect of postmerger entry.

a. What would be the impact on consumer welfare in the short run? (Consider both the effect on prices and consumer value from network effects.)
b. What would be the impact on consumer welfare in the long run?

7. For each of the following product markets, do you think network effects are present? (Remember to consider the role of complementary products.)

a. Online payment services like PayPal and Venmo.
b. Electric cars like Tesla.
c. Self-driving cars.

8. Consider a two-sided platform that matches men and women who are interested in finding a marriage partner. The platform sets a price for men to access the platform and a price for women to access the platform. These prices are chosen to maximize the platform’s profits. Assume the platform is the only one in the market.

a. What determines which gender pays a higher price?
b. Suppose a massive immigration wave occurs that increases the fraction of young men in the population relative to young women. How will this affect the male and female prices on the platform?
c. Suppose the value of marrying exogenously increased for men, while the value of marrying for women was unchanged. Hence, in terms of figure 9.9, the benefit curve rises for men and is unchanged for women. What will happen to the price for men to access the platform? What about the price for women to access the platform?
d. Now suppose a second platform enters the market. Legislation is passed that allows a man to belong to both platforms but permits a woman to belong to at most one platform. Will this legislation tend to cause the access price for women to be higher or lower than the access price for men?

9. Provide as many reasons as you can for why the dynamically optimal price for a firm could be below its marginal cost.

10. The economist Joseph Schumpeter wrote in 1942:

But in capitalist reality as distinguished from its textbook picture, it is not [perfect] competition which counts, but the competition from the new commodity, the new technology, the new source of supply, the new type of organization … competition which strikes not at the margins of the profits and the outputs of the existing firms but at their very foundations and their very lives.74

a. Do you agree or disagree? Explain your answer.
b. What industries conform to this view? What industries do not?
c. What implications does this view have for antitrust policy?

Notes

1. Richard A. Posner, “Antitrust in the New Economy,” John M. Olin Law & Economics Working Paper 106, University of Chicago, 2000. 2. It is important to note that Amazon had to scale up distribution and other services as its demand rose, which involved some fixed costs. Nevertheless, the general point holds that the fixed cost of an online retailer is far below that of a conventional retailer. 3. Kyle Bagwell, Garey Ramey, and Daniel F. Spulber, “Dynamic Retail Price and Investment Competition,” RAND Journal of Economics 28 (Summer 1997): 207–227. 4. David S. Evans and Richard Schmalensee, “Some Economic Aspects of Antitrust Analysis in Dynamically Competitive Industries,” in Adam B. Jaffe and Scott Stern, eds., Innovation Policy and the Economy (Cambridge, MA: MIT Press, 2002), p. 14. 5. Examining the entries in Wikipedia for “List of mergers and acquisitions by [insert company]” reveals hundreds of transactions involving many billions of dollars when the company is Facebook, Google, or Microsoft. For Facebook,

acquisitions run the gamut from $8.5 million for the FB.com domain name from the American Farm Bureau Federation to more than $19 billion for WhatsApp. 6. It has been reported that Sergey Brin and Larry Page offered to sell Google for $1 million in 1998 to Alta Vista, so that they could return to their graduate studies at Stanford. That deal did not work out. Yahoo! offered to buy Google for around $3 billion in 2002, but Google reportedly wanted $5 billion, so no deal. When Google went public two years later, it was valued at $27 billion. 7. For details, see Richard J. Gilbert, “E-books: A Tale of Digital Disruption,” Journal of Economic Perspectives 29 (Summer 2015): 165–184. 8. The case involved online retailers of posters; see Amar Naik, “Pricing Algorithms and the Digital ‘Smoke-Filled Room’,” SheppardMullin Antitrust Law Blog, December 18, 2015. www.antitrustlawblog.com/. 9. This is not exactly true, because Q = 1 when a consumer buys. The analysis is simpler if we gloss over this minor caveat. 10. The ensuing discussion is partly based on the analysis in Jiawei Chen, Ulrich Doraszelski, and Joseph E. Harrington Jr., “Avoiding Market Dominance: Product Compatibility in a Market with Network Effects,” RAND Journal of Economics 40 (Autumn 2009): 455–485. A highly accessible introduction to network effects is Michael L. Katz and Carl Shapiro, “Systems Competition and Network Effects,” Journal of Economic Perspectives 8 (Spring 1994): 93–115. 11. Indeed, the effectiveness of exclusionary contracts in deterring entry may be greater with network effects than with scale economies; see Carl Shapiro, “Exclusivity in Network Industries,” George Mason Law Review 8 (Fall 1999): 1–11. 12. Though we do not explain why the monopolist does not charge a price of 150, it is worth noting that Microsoft charged a price for its OS that was well below the static monopoly price; see Chris E. Hall and Robert E. Hall, “Toward a Quantification of the Effects of Microsoft’s Conduct,” American Economic Review 90 (May 2000): 188–191. 13. The focus in the example has been on the harm to the downstream firm, though some of that harm will be passed along to final consumers. 14. An excellent reference is the decision by the U.S. Court of Appeals: United States of America v. Microsoft Corporation, no. 005212A (June 28, 2001). For views of some of the expert economists involved in the case, see David S. Evans, Franklin M. Fisher, Daniel L. Rubinfeld, and Richard L. Schmalensee, Did Microsoft Harm Consumers? Two Opposing Views (Washington, DC: AEI–Brookings Joint Center for Regulatory Studies, 2000). 15. United States v. Microsoft, 56 F.3d 1448 (D.C. Cir. 1995). 16. For a review of this case, see Richard J. Gilbert, “Networks, Standards, and the Use of Market Dominance: Microsoft (1995),” in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution: Economics, Competition, and Policy, 3rd ed. (New York: Oxford University Press, 1999), pp. 409–429. 17. United States v. Microsoft, 147 F.3d 935 (D.C. Cir. 1998). 18. United States v. Microsoft Corp. 253 F.3d (D.C. Cir. 2001), p. 8. 19. A useful reference is Kenneth C. Baseman, Frederick R. Warren-Boulton, and Glenn A. Woroch, “Microsoft Plays Hardball: The Use of Exclusionary Pricing and Technical Incompatibility to Maintain Monopoly Power in Markets for Operating System Software,” Antitrust Bulletin 40 (Summer 1995): 265–315. 20. See chapter 7 for a discussion of anticompetitive theories of tying. 21. United States v. Microsoft Corp. 253 F.3d (D.C. Cir. 
2001), p. 78. 22. This discussion borrows heavily from Daniel L. Rubinfeld, “Maintenance of Monopoly: U.S. v. Microsoft (2001),” in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution: Economics, Competition, and Policy, 4th ed. (New York: Oxford University Press, 2004). 23. There were also anticompetitive contracts with other companies, such as Apple Computer. 24. United States v. Microsoft Corp. 253 F.3d (D.C. Cir. 2001), p. 61. 25. For a description of this structural remedy, see Robert J. Levinson, R. Craig Romaine, and Steven C. Salop, “The Flawed Fragmentation Critique of Structural Remedies in the Microsoft Case,” Antitrust Bulletin (Spring 2001): 135–162. 26. A useful paper on competition policy and big data is “Big Data: Bringing Competition Policy to the Digital Era,” OECD Background Note by the Secretariat, DAF/COMP(2016)14, Organisation for Economic Co-operation and Development, Paris, October 27, 2016.

27. The ensuing material is partly based on Cédric Argenton and Jens Prüfer, “Search Engine Competition with Network Externalities,” Journal of Competition Law & Economics 8 (March 2012): 73–105. 28. Based on a survey cited in ibid., footnote 18. 29. FTC Staff Report on FTC File Number 111-0163, Federal Trade Commission, Washington, DC, August 8, 2012, pp. 14, 16. 30. Ensuing market shares are for 2010 and are from Justus Haucap and Ulrich Heimeshoff, “Google, Facebook, Amazon, eBay: Is the Internet Driving Competition or Market Monopolization?” International Economics and Economic Policy 11 (February 2014): 49–61. 31. Maurice E. Stucke and Allen P. Grunes, “Debunking the Myths over Big Data and Antitrust,” CPI Antitrust Chronicle, May 2015 (2), p. 3. 32. Ibid., p. 3. 33. Facebook, Inc. v. Power Ventures, Inc., No. C 08-5780 WDB (N.D. Cal. Dec. 30, 2008). A useful reference here is Christopher S. Yoo, “When Antitrust Met Facebook,” George Mason Law Review 19 (Summer 2012): 1147–1162. 34. Foremost Pro Color, Inc. v. Eastman Kodak Co., 703 F.2d 534, 542 (9th Cir. 1983). 35. Press release, “Statement of the Department of Justice Antitrust Division on Its Decision to Close Its Investigation of the Internet Search and Paid Search Advertising Agreement Between Microsoft Corporation and Yahoo! Inc.,” Department of Justice, Washington, DC, February 18, 2010. 36. Office of Fair Trading, “Completed Acquisition by Motorola Mobility Holding (Google, Inc.) of Waze Mobile Limited,” ME/6167/13, London, December 17, 2013, p. 7. 37. Ibid., p. 11. 38. Ibid., p. 13. 39. For a more comprehensive discussion of some of these issues, see Maurice Stucke and Allen Grunes, Big Data and Competition Policy (Oxford: Oxford University Press, 2016). 40. While a platform could bring together more than two types of agents, and thus have more than two sides, all the examples mentioned and discussed here have two sides and, for that reason, we chose to refer to two-sided (rather than multi-sided) platforms. A reference for this section is David S. Evans, “Background Paper to OECD Policy Roundtable: Two-Sided Markets,” Organisation for Economic Co-operation and Development, Paris, 2009. For a very accessible overview, see Marc Rysman, “The Economics of Two-Sided Markets,” Journal of Economic Perspectives 23 (Summer 2009): 125–143. 41. Arun Sundararajan, The Sharing Economy: The End of Employment and the Rise of Crowd-Based Capitalism (Cambridge, MA: MIT Press, 2016). 42. In the early days of the Internet, there were congestion effects because the lack of bandwidth slowed downloads, but that is rarely the case anymore. 43. More to the point, VB does not depend on QB, and VS does not depend on QS. 44. If we use calculus, the term ΔQ2 is replaced by 45. See http://searchengineland.com/new-ppc-report-bing-ads-vs-adwords-in-6-us-verticals-152229 (accessed on January 9, 2017). 46. These and other departures between standard markets and two-sided markets are covered in Julian Wright, “One-Sided Logic in Two-Sided Markets,” Review of Network Economics 3 (March 2004): 44–64. 47. For a discussion, see Lapo Filistrucchi, Damien Geradin, Eric van Damme, and Pauline Affeldt, “Market Definition in Two-Sided Markets: Theory and Practice,” Journal of Competition Law & Economics 19 (June 2014): 293–339. 48. We will also be quoting from a report by FTC staff economists as they examined the same set of practices, though the FTC chose not to bring a case. 
This report—“FTC Bureau of Competition Staff to the Commission re Google Inc.”—was intended only for internal use, but its even pages were mistakenly released by the FTC to the Wall Street Journal. It can be accessed at graphics.wsj.com/google-ftc-report.

49. FTC Staff Report on FTC File Number 111-0163, Federal Trade Commission, Washington, DC, August 8, 2012, p. 30. 50. Statement of the Federal Trade Commission Regarding Google’s Search Practices, In the Matter of Google Inc., FTC File Number 111-0163, January 3, 2013. 51. FTC Staff Report on FTC File Number 111-0163, August 8, 2012, p. 80. 52. “Antitrust: Commission Sends Statement of Objections to Google on comparison shopping service,” European Commission Press Release, Brussels, April 15, 2015. 53. FTC Staff Report on FTC File Number 111-0163, August 8, 2012, p. 86. 54. “Google Agrees to Change Its Business Practices to Resolve FTC Competition Concerns in the Markets for Devices Like Smart Phones, Games and Tablets, and in Online Search,” Federal Trade Commission Press Release, Washington, DC, January 3, 2013. 55. However, is not the Google Search algorithm designed to yield the best search results? If so, overruling the algorithm would seem unlikely to lead to better information for consumers. 56. FTC Staff Report on FTC File Number 111-0163, August 8, 2012, p. 44. 57. “Antitrust: Commission sends Statement of Objections to Google on Android operating system and applications,” European Commission Press Release, Brussels, April 20, 2016. 58. Ibid. 59. Another practice, which we do not cover, limits the use of modified versions of Android if a manufacturer wants to pre-install these proprietary Google apps. 60. When Google started out, its mantra was “Don’t Be Evil,” which was widely recognized as a reference to the unethical practices of Microsoft. With these arguably anticompetitive practices attributed to Google, are we witnessing the “ring of (market) power” corrupting the Hobbits of Google? 61. We thank Howard Shelanski for a most helpful conversation, as well as for the slides from his presentation “Competition Policy and Innovation” at the Cresse/ISCI/FNE workshop, Santiago, Chile, November 16, 2016. Both sources were instrumental as we formulated this section. 62. United States v. Aluminum Company of America, 148 F.2d 416, 430 (2d Cir. 1945). 63. Richard A. Posner, “Antitrust in the New Economy,” p. 9. 64. Alaska Airlines, Inc. v. United Airlines, Inc., 948 F.2d 536 (9th Cir. 1991). 65. The ensuing analysis is based on Ilya Segal and Michael D. Whinston, “Antitrust in Innovative Industries,” American Economic Review 97 (December 2007): 1703–1730. For a further discussion of this approach and related issues, see Jonathan B. Baker, “Beyond Schumpeter vs. Arrow: How Antitrust Fosters Innovation,” Antitrust Law Journal 74 (June 2007): 575–602; and Joshua S. Gans, “When Is Static Analysis a Sufficient Proxy for Dynamic Considerations? Reconsidering Antitrust and Innovation,” in Josh Lerner and Scott Stern, eds., Innovation Policy and the Economy, vol. 11 (Chicago: University of Chicago Press, 2011), pp. 55–78. 66. The model is also appropriate for considering how regulation, as well as antitrust, impacts the innovation rate. For example, the model can be used to explore the effect of imposing standards, mandating compatibility of technologies, or changing patent law. 67. Although it is assumed that each potential entrant gets just one shot at innovating, there will be a new potential entrant each period. Similar results emerge if a potential entrant can repeatedly try to innovate; see Segal and Whinston, “Antitrust in Innovative Industries.” 68. 
The value of πM is determined by how much better the new technology is compared to the previously dominant technology, as consumers could always choose the latter. 69. That practice was reviewed earlier in the chapter as part of the Microsoft I decision. 70. See the subsection in chapter 7 that examines the Aghion-Bolton model of exclusive dealing. 71. This result does not mean that the prohibition of exclusive dealing is universally recommended, because we know that there are procompetitive benefits that the current model does not allow for. The point is to show how antitrust policy can impact outcomes in an industry with rapid and drastic innovation. 72. Richard A. Posner, “Antitrust in the New Economy,” p. 11.

73. Summary of Written Testimony of Microsoft Witness Richard L. Schmalensee, http://www.prnewswire.com/newsreleases/summary-of-written-testimony-of-microsoft-witness-professor-richard-l-schmalensee-73384497.html (accessed December 13, 2016). 74. Joseph A. Schumpeter, Capitalism, Socialism, and Democracy (New York: Harper & Row, 1975), p. 84.

II ECONOMIC REGULATION

10 Introduction to Economic Regulation

What Is Economic Regulation? The essence of free enterprise is that individual agents are allowed to make their own decisions. As a consumer and a laborer, each person decides how much money to spend, how much to save, and how many hours to work. Firms decide which products to produce, how much to produce of each product, what prices to charge, how much to invest, and which inputs to use. In all modern economies, there is also an entity called government, which determines such key factors as income tax rate, level of national defense expenditure, and growth rate of the money supply. Government decisions like these affect the welfare of agents and how they behave. For example, increasing the income tax rate induces some individuals to work fewer hours and some not to work at all. Although an income tax influences how a laborer behaves, the laborer is left to decide how many hours to work. In contrast, in its role as regulator, a government literally restricts the choices of agents. More formally, regulation has been defined as “a state-imposed limitation on the discretion that may be exercised by individuals or organizations, which is supported by the threat of sanction.”1 Thus, regulation restricts the decisions of economic agents. In contrast to income tax, which does not restrict the choices of individuals (though it does affect their welfare), the minimum wage is a regulation in that it restricts the wages that firms can pay their laborers. Economic regulation typically refers to government-imposed restrictions on firms’ decisions regarding prices and outputs, including whether to enter or exit an industry. Economic regulation is distinct from social regulation, which is discussed in part III of this book. When an industry is regulated, industry performance in terms of allocative and productive efficiency is determined both by market forces and by administrative rules. Even if it wished to do so, a government could not control every relevant action taken by consumers and firms, because it is prohibitively costly to observe these actions. Consequently, market forces usually will influence outcomes to some degree, regardless of the degree of government intervention. For example, when the government regulated the airline industry, it controlled prices but not the quality of service. This shifted the focus of market competition from the price dimension to the quality dimension. Even in a government-controlled economy like the former Soviet Union, market forces were at work. Although the state specified production levels, the (effective) market-clearing price was set in the market. If a good was in short supply, people would wait in line for it. The effective price to them was the price paid to the state plus the value of their time spent in line. In equilibrium, people stand in line until the effective price clears the market. Reasons for Regulation

Natural monopoly problem An industry is a natural monopoly if the cost of producing the particular goods or services in question is minimized when a single firm supplies the entire industry output. In the case of a single commodity, a natural monopoly prevails when the long-run average cost (LRAC) of production always declines as output increases, as illustrated in figure 10.1. When LRAC declines with output, long-run marginal cost (LRMC) necessarily lies everywhere below LRAC.

Figure 10.1 Cost Curves of a Natural Monopoly
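The relationship between a declining LRAC and LRMC can be checked with a short calculation. The sketch below assumes a simple cost function C(Q) = F + cQ, with a large fixed cost F and a constant marginal cost c; the numbers are purely illustrative and do not describe any particular industry.

```python
# Illustrative check: if LRAC always declines with output, LRMC lies below LRAC.
# The cost function C(Q) = F + c*Q is assumed purely for illustration.

F = 1000.0   # assumed fixed cost
c = 5.0      # assumed constant long-run marginal cost

def lrac(q):
    """Long-run average cost: (F + c*q) / q = F/q + c, which falls as q rises."""
    return F / q + c

def lrmc(q):
    """Long-run marginal cost of C(q) = F + c*q is simply c."""
    return c

for q in (10, 100, 1_000, 10_000):
    assert lrac(10 * q) < lrac(q)   # LRAC keeps declining as output grows
    assert lrmc(q) < lrac(q)        # LRMC sits below LRAC at every output level
    print(f"Q = {q:>6}: LRAC = {lrac(q):8.2f}, LRMC = {lrmc(q):4.2f}")
```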

Natural monopoly introduces a fundamental problem: although industry costs may be minimized when a single firm serves all consumers, a monopoly supplier is inclined to set the monopoly price, which often is well above cost. Before considering how, in principle, economic regulation might be employed to resolve this problem, we first examine more carefully the definition and characteristics of natural monopoly. Permanent and temporary natural monopoly It is important to distinguish between permanent and temporary natural monopoly.2 Figure 10.1 illustrates the case of permanent natural monopoly, where LRAC falls continuously as output increases. Regardless of the level of market demand, a single firm can produce the output at minimum cost. Figure 10.2 illustrates a temporary natural monopoly. Observe that LRAC declines up to output Q* and then becomes constant thereafter. A natural monopoly when demand DD prevails can become a workably competitive market when demand D1D1 prevails. In a workably competitive market, competition among suppliers can hold prices close to production costs without any regulatory intervention.

Figure 10.2 Temporary Natural Monopoly

A cost curve like the one in figure 10.2 might approximate the average cost of supplying long-distance telephone service, for example. The average cost of supplying telephone calls may decline sharply as the number of calls increases above zero, but it may decline much less rapidly (if at all) as volume increases further. For example, a microwave telephone system often consists of relay stations located twenty to forty miles apart that transmit signals of specific frequencies. To function, each station requires land, a building, a tower, antennae, and sundry electronic equipment. These inputs do not all increase proportionately as the number of telephone calls increases, because fixed costs can be spread over an increased number of calls as volume increases. This spreading effect becomes less and less significant, however, as volume grows beyond some point and additional inputs (for example, additional antennae) must be added to handle increased call volume. To illustrate, approximately 800 circuits were required to supply the long-distance telephone service between New York and Philadelphia that was demanded in the 1940s. At this level of production, unit costs declined as output increased, so a natural monopoly prevailed. By the late 1960s, the number of required circuits had risen to 79,000. At these output levels (corresponding to output above Q* in figure 10.2), the unit cost of production varied little with the scale of output. Consequently, the temporary natural monopoly that prevailed in the 1940s had effectively disappeared by the late 1960s. This phenomenon is not rare. Railroads were the primary suppliers of freight transport in the late 1800s. Given the large fixed cost of laying track, railroads operated with substantial scale economies. By the 1920s, trucks offered a viable alternative to railroads for the transport of many commodities. Unit transport costs decline much less rapidly with output for trucks than for railroads (because trucking companies do not have to build their own roads). Consequently, technological change transformed many elements of the freight transportation industry from a natural monopoly to a workably competitive industry. Regulation typically is a highly imperfect substitute for competition, for reasons that we discuss below.
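Before turning to the instruments of regulation, the long-distance telephone illustration can be restated numerically. In the sketch below, only the circuit counts (800 and 79,000) come from the text; the fixed cost, per-circuit cost, and capacity threshold are assumed for illustration.

```python
# Sketch of a temporary natural monopoly, patterned on the long-distance telephone
# example. Only the circuit counts (800 and 79,000) come from the text; the fixed
# cost F, per-circuit cost c, and threshold Q_STAR are assumed.

F = 40000.0      # assumed fixed cost of the basic relay network
c = 10.0         # assumed cost per circuit once the network is in place
Q_STAR = 5000    # assumed output level beyond which LRAC is essentially flat

def lrac(q):
    """LRAC declines up to Q_STAR and is constant thereafter (as in figure 10.2)."""
    if q <= Q_STAR:
        return F / q + c
    return F / Q_STAR + c

for circuits in (800, 79000):
    print(f"{circuits:>6} circuits: average cost per circuit = {lrac(circuits):6.2f}")

# At 800 circuits (1940s demand) the fixed cost dominates, so a single firm supplies
# at lowest cost; at 79,000 circuits (late 1960s) average cost is flat, so several
# firms of moderate scale could supply at essentially the same unit cost.
```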

Consequently, regulation should be phased out as a natural monopoly is replaced by a workably competitive industry. However, as we explain further below, regulation often tends to persist far longer than it should. Instruments of Regulation In principle, economic regulation can encompass restrictions on many elements of a firm’s operations. In practice, regulators typically focus their restrictions on prices, output levels, and industry entry and exit. Product quality, advertising, and investment are also regulated in certain circumstances. Control of price Price regulation often specifies a particular price that firms must charge. For example, regulators often set the price that a monopoly supplier of a key service (electricity or water, for example) must charge for its service. However, price regulation can specify a range of permissible prices. For example, in 1989 the Federal Communications Commission (FCC) instituted price cap regulation to regulate the prices that AT&T was permitted to charge for long-distance calls. The regulation specified the maximum price that AT&T could charge. It also specified a minimum price in an attempt to ensure that AT&T did not engage in predatory pricing (that is, set prices below cost to induce competitors to exit the market). Control of quantity Regulators also may place restrictions on the quantity of a product or service that producers can supply. For example, in the mid-1900s, many oil-producing states (including Oklahoma and Texas) limited the amount of crude oil that producers could supply (for reasons that will be discussed below). Regulators also often require firms to supply all of a product (for example, electricity or water) that consumers wish to purchase at the specified regulated price. Control of entry and exit Regulators also restrict entry into, and exit from, certain industries. To illustrate, entry into the long-distance telecommunications market was precluded for many years before the FCC allowed Microwave Communications, Inc. (MCI) to enter the industry in 1969. In addition, regulators in the airline and trucking industries have also made it very difficult for an existing firm to enter a geographic market already served by another regulated firm. Historically, regulators in the railroad and airline industries have also precluded suppliers from terminating their service to small towns that were unprofitable for the suppliers to serve. Occupational licensing requirements constitute an additional form of entry restriction. Licensing requirements are imposed in more than 800 industries in the United States and affect more than 20 percent of the U.S. workforce.3 Licensing requirements are often imposed on the grounds that they will ensure the provision of high-quality service. Some observers suggest, though, that the requirements may be designed in part to protect the income of incumbent suppliers by erecting obstacles to entry for new suppliers.4 Governments also impose various fees, operating procedures, and standards on entrepreneurs that seek to establish a new business. The costs of these regulations can be substantial. A study ranked eighty-five countries on the basis of the number of days it takes entrepreneurs to comply with entry procedures. The twelve best and worst countries in this regard are identified in table 10.1. The table also reports the number of procedures a new business must complete, the fees that must be paid, and the associated total cost (which is the sum of the fees and the value of the entrepreneur’s time). 
The latter two variables are aggregated over all new businesses and divided by gross domestic product (GDP) to derive a measure of the fraction of

society’s resources that are consumed by this activity. On average, for all eighty-five countries in the study, it takes forty-seven days at a cost of about two-thirds of 1 percent of GDP. Substantial variation prevails across countries. In the United States, compliance with entry requirements takes four days and consumes about one-sixth of 1 percent of GDP. Corresponding compliance takes eighty days and almost 5 percent of GDP in the Dominican Republic. Given the sizable costs involved, it is important to understand the purpose of these regulations and whether they serve primarily to improve the well-being of consumers or to enrich certain industry suppliers. Table 10.1 Cost and Time of Government Requirements for Entry: Best Twelve and Worst Twelve Countries
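The construction of the cost measure reported in table 10.1 can be illustrated with a short calculation. Every figure below is a hypothetical placeholder rather than data from the study; only the forty-seven-day average compliance time comes from the text, and the point is simply the accounting used to express entry costs as a fraction of GDP.

```python
# How a figure like "entry regulation costs x percent of GDP" can be constructed.
# All numbers are hypothetical except the 47-day average compliance time.

fees_per_startup = 500.0            # assumed government fees per new business
days_to_comply = 47                 # average days of the entrepreneur's time (from the text)
value_of_a_day = 120.0              # assumed opportunity cost of one day of the entrepreneur's time
new_businesses_per_year = 600_000   # assumed number of business start-ups per year
gdp = 5.0e12                        # assumed gross domestic product

cost_per_startup = fees_per_startup + days_to_comply * value_of_a_day
total_cost = new_businesses_per_year * cost_per_startup
share_of_gdp = total_cost / gdp

print(f"Cost per start-up:         ${cost_per_startup:,.0f}")
print(f"Aggregate compliance cost: ${total_cost:,.0f}")
print(f"Share of GDP:              {100 * share_of_gdp:.3f} percent")
```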

Control of other variables Economic regulation can affect variables other than price, quantity, and industry entry and exit. For example, service quality is regulated in some industries. If an electric utility experiences frequent service interruptions, for instance, a regulator may intervene and require the utility to increase its generating capacity in order to improve service reliability. Service quality regulation tends to be less pervasive than price control, though, in part because of the cost of regulating quality. To control any variable, the relevant economic agents have to be able to agree on what the variable is and

what restrictions are placed on it. Typically, this is not difficult in the case of price and quantity. The price is the amount paid by the consumer for the good, which is often readily observed. Furthermore, restrictions take the simple form of numbers: a maximum price and a minimum price, for example. Quantity restrictions also tend to be relatively straightforward to specify and monitor, but quality can be more challenging to define and measure. For example, the quality of airline service encompasses many variables, including on-time performance, safety, on-board services, seat width, and the speed and care of luggage handling. In principle, a regulatory agency could attempt to control each of these variables. Doing so would be quite costly, however. Most of these variables other than safety were never formally regulated in the airline industry. Restrictions have been imposed on the advertising of certain products and services, including “undesirable” goods, such as cigarettes and alcohol. Tobacco companies have been prohibited from advertising on television and radio for decades, and some states do not allow alcohol to be advertised on billboards. This is a form of social regulation in that the intent is to discourage the consumption of goods with externalities or about which consumers are thought to be ill informed. The Federal Trade Commission also enforces rules on the content of advertising so as to prevent consumer deception. But these justifications for limiting advertising do not explain why firms have been precluded from advertising the prices of prescription drugs, eyeglasses (see chapter 15), optometry services, or legal services.5 Regulators also sometimes control a firm’s investment. To illustrate, regulators have mandated certain capital investments by electricity and telecommunications suppliers. State regulation of investment by hospitals is also common. Certificate of Need programs require a hospital to obtain state approval before undertaking certain investment projects. The stated objective of this form of economic regulation is to avoid duplicate facilities.
Brief History of Economic Regulation
Formative Stages
Economic regulation began in the United States in the 1870s.6 Two important events took place around that time. First, a key Supreme Court decision provided the basis for the regulation of monopolies. Second, forces were building in the railroad industry that would result in its being the first major industry to be subjected to federal economic regulation. Munn v. Illinois (1877) The landmark case of Munn v. Illinois7 established that the state of Illinois could regulate rates set by grain elevators and warehouses. As stated in the opinion of the majority, the decision established the principle that property does become clothed with a public interest when used in a manner to make it of public consequence, and affect the community at large. When, therefore, one devotes his property to a use in which the public has an interest, he, in effect, grants to the public an interest in that use, and must submit to be controlled by the public for the common good.

Munn v. Illinois provided the foundation for regulation to be used to prevent monopolistic exploitation of consumers. Interstate Commerce Act of 1887 Around the time of the Munn v. Illinois decision, the railroad industry was experiencing considerable

turbulence. Throughout the 1870s and 1880s, the railroad industry was subject to spurts of aggressive price wars intermixed with periods of relatively stable prices (see figure 4.12 in chapter 4). At the same time, the railroads implemented substantial price discrimination. Customers who were being charged relatively high prices called for government intervention. Meanwhile, the railroads were seeking government assistance to stabilize prices in the industry, and thereby avoid destructive competition among railroads, which they claimed was threatening their financial viability. The result of these forces was the Interstate Commerce Act of 1887, which created the Interstate Commerce Commission (ICC). Congress subsequently afforded the ICC the power to regulate prices in the rail industry. Nebbia v. New York (1934) Some interpreted the Munn v. Illinois decision to imply that a government only had the constitutional right to regulate public utilities. However, in its 1934 Nebbia v. New York8 decision, the Supreme Court created a broader realm for economic regulation. In that case, the state of New York was regulating the retail price of milk. The defense argued that the milk industry was competitive and could not be classified as a public utility, and so there was no basis for state regulation. The majority opinion stated: So far as the requirement of due process is concerned, and in the absence of other constitutional restriction, a state is free to adopt whatever economic policy may reasonably be deemed to promote public welfare, and to enforce that policy by legislation adapted to its purpose.

In this decision, the Supreme Court effectively eliminated constitutional barriers to economic regulation as long as, in the state’s judgment, regulation was in the public interest. Trends in Regulation Early regulation focused on the railroads and public utilities like electricity, telephone service, and city transit. The Massachusetts State Commission began regulating such industries in 1885. By 1930, most state legislatures had created public service commissions. As noted above, federal regulation of railroads began in 1887, and federal regulation of interstate telephone service began with the passage of the Mann-Elkins Act of 1910. Figure 10.3 depicts the growth of U.S. regulatory legislation over time. Three spurts of legislative activity can be identified.9 The first two occurred during 1909–1916 and 1930–1940. During these years, and through the 1970s, federal regulatory powers were greatly expanded to encompass several major industries. The third burst of legislative activity began in the 1970s and entailed the partial or full deregulation of many of the regulated industries.

Figure 10.3 Number of Economic Regulatory Legislative Acts Source: James F. Gatti, “An Overview of the Problem of Government Regulation,” in James F. Gatti, ed., The Limits of Government Regulation (New York: Academic Press, 1981), p. 5.

Economic historian Richard Vietor suggests that these regulatory and deregulatory booms are due to a fundamental change in how people perceive the interaction between an economy and its government.10 He attributes the regulatory wave of the 1930s to a major reduction in faith in a laissez-faire economy, due to the experience of the Great Depression. The deregulatory period of the 1970s occurred during a period of serious stagflation—high inflation and high unemployment—that Vietor believes shook our faith in the ability of the government to constructively influence the economy. 1930s: Wave of regulation After the Nebbia v. New York decision, and in the midst of the challenging economic conditions of the Great Depression, a wave of economic regulation took place between 1930 and 1940. Oil-producing states imposed controls on the activities of crude oil producers, and pieces of legislation significantly expanded the realm of federal economic regulation. Table 10.2 lists these legislative acts. Legislation in 1935 and 1940 expanded the domain of the ICC’s regulatory oversight from railroads to the entire interstate surface freight transportation industry, which included trucks, water barges, and oil pipelines. The one key exception was ocean shipping, which was regulated by the Federal Maritime Commission beginning in 1936. Responsibility for regulating long-distance passenger transportation was divided between the ICC (railroads and buses) and the newly created Civil Aeronautics Board (airlines).

Table 10.2 Major Economic Regulatory Legislation, 1887–1940

Year | Legislative Act | Agency Created
1887 | Interstate Commerce Act | Interstate Commerce Commission
1910 | Mann-Elkins Act
1916 | Shipping Act
1920 | Transportation Act
1930 | Oil prorationing (Oklahoma, Texas)
1933 | Banking Act
1933 | Securities Act
1934 | Communications Act | Federal Communications Commission
1934 | Securities Exchange Act | Securities and Exchange Commission
1935 | Banking Act
1935 | Motor Carrier Act
1935 | Public Utility Act | Federal Power Commission
1938 | Civil Aeronautics Act | Civil Aeronautics Board
1938 | Natural Gas Act
1940 | Transportation Act

The FCC was established in 1934 to regulate television and radio broadcasting and to take over the duty of regulating the interstate telecommunications market from the ICC. Although electricity and natural gas had long been regulated at the state and local levels, federal regulation of interstate commerce associated with these two energy sources was only established in 1935 (for electricity) and in 1938 (for natural gas). Initially, natural gas regulation only covered its transportation. Regulation of natural gas prices began in the mid-1950s. The unsatisfactory performance of financial markets in the Great Depression was followed by a wave of federal legislation related to the banking and securities industries. Among other restrictions, the Banking Acts of 1933 and 1935 created the Federal Deposit Insurance Corporation, forbade commercial banks from paying interest on ordinary checking accounts, and, in what has been referred to as the Glass-Steagall Act, prohibited commercial banks from participating in investment banking and prohibited investment banks from accepting deposits. (See chapter 18 for more details.) The Securities Act of 1933 mandated disclosure of information by issuers of securities, and the Securities Exchange Act of 1934 created the Securities and Exchange Commission, the main purpose of which was to monitor the activities of the securities industry. 1940s to 1960s: Continued growth of regulation Between the two legislative peaks of the 1930s and 1970s, legislative activity continued on a modest but steady path of expansion of federal regulatory powers. Two sectors, energy and communications, were particularly affected. Although cable television was initially left unregulated at the federal level, it became subject to FCC regulation in 1968. Before 1954, federal regulation of the oil and natural gas industries pertained only to pipelines and, at the state level, to production of crude oil. In 1954, the Federal Power Commission began controlling the wellhead price of natural gas. Then the price of oil was regulated beginning in 1971. Foreshadowing the deregulation that was to come, the FCC permitted MCI to enter the interstate telecommunications market in 1969. This action represented a crucial first step in the deregulation of that market. 1970s to 1980s: Wave of deregulation The decades of the 1970s and 1980s were characterized by extensive deregulation (see table 10.3). In 1977, fully regulated industries produced 17 percent of the U.S. gross national product. By 1988 this figure had been reduced to 6.6 percent.11 In the transportation sector, several pieces of legislation deregulated airlines (Airline Deregulation Act of 1978), railroads (Staggers Act of 1980), trucking (Motor Carrier Act of 1980), and passenger buses (Bus Regulatory Reform Act of 1982). In communications, entry into the interstate

telecommunications market was largely deregulated over the course of several decisions that ranged from the FCC’s Specialized Common Carrier Decision in 1971 to the breakup of AT&T in 1984 as a result of the U.S. Department of Justice’s antitrust case. Also during this period, cable television was deregulated at the federal level. Finally, oil price controls were lifted by President Ronald Reagan in January 1981, and controls on natural gas prices were removed in 1989.

Table 10.3 Major Economic Deregulatory Initiatives, 1971–1989

Year | Initiative
1971 | Specialized Common Carrier Decision (FCC)
1972 | Domestic satellite open skies policy (FCC)
1975 | Abolition of fixed brokerage fees (SEC)
1976 | Railroad Revitalization and Reform Act
1977 | Air Cargo Deregulation Act
1978 | Airline Deregulation Act
1978 | Natural Gas Policy Act
1979 | Deregulation of satellite earth stations (FCC)
1979 | Urgent-mail exemption (Postal Service)
1980 | Motor Carrier Reform Act
1980 | Household Goods Transportation Act
1980 | Staggers Rail Act
1980 | Depository Institutions Deregulation and Monetary Control Act
1980 | International Air Transportation Competition Act
1980 | Deregulation of cable television (FCC)
1980 | Deregulation of customer premises equipment and enhanced services (FCC)
1981 | Decontrol of crude oil and refined petroleum products (executive order)
1981 | Deregulation of radio (FCC)
1982 | Bus Regulatory Reform Act
1982 | Garn–St. Germain Depository Institutions Act
1982 | AT&T settlement
1984 | Space commercialization
1984 | Cable Television Deregulation Act
1984 | Shipping Act
1986 | Trading of airport landing rights
1987 | Sale of Conrail
1987 | Elimination of fairness doctrine (FCC)
1988 | Proposed rules on natural gas and electricity (FERC)
1988 | Proposed rules on price caps (FCC)
1989 | Natural Gas Wellhead Decontrol Act

Source: Updated table from Economic Report of the President, United States Government Printing Office, Washington, DC, January 1989.

Regulatory policy in the 1990s As is evident from table 10.4, the deregulatory wave that began in the 1970s continued into the 1990s in some sectors. In particular, the deregulation of interstate and intrastate trucking was completed. In addition, relaxed regulatory controls allowed competition to emerge in the transmission of natural gas and in the generation and distribution of electricity. Furthermore, entry restrictions in the banking sector, which prevented banks from having branches in more than one state and, in some states, prevented a bank from having more than one branch, were largely eliminated by state legislatures. The experience in the cable television industry differed, as rates oscillated between being regulated and deregulated. In the telecommunications industry, the Telecommunications Act of 1996 introduced some deregulation but maintained substantial regulatory oversight of several industry activities. Overall, it has been estimated that between 1975 and 2006, the fraction of traditionally regulated sectors that remained subject to direct

regulation declined by 74 percent.12

Table 10.4 Major Federal Economic Regulatory and Deregulatory Initiatives, 1990–2010

Year | Initiative | Provisions
1991 | Federal Deposit Insurance Corporation Improvement Act | Introduced risk-based deposit insurance premia, required early regulatory intervention into failing banks, eased conditions for banking failures by limiting FDIC’s ability to reimburse uninsured depositors
1992 | Cable Television Consumer Protection and Competition Act | Regulated cable TV rates
1992 | Energy Policy Act | Opened up wholesale competition by giving FERC the authority to order vertically integrated utilities to act as a common carrier of electrical power
1992 | FERC Order 636 | Required pipelines to unbundle the sale and transportation of natural gas
1993 | Omnibus Budget Reconciliation Act of 1993 | Mandated that the FCC reallocate portions of the electromagnetic spectrum for personal communication and authorized it to use competitive bidding to award licenses; deregulated cellular telephone rates
1993 | Negotiated Rates Act | Eliminated regulatory distortions related to trucking rates
1994 | Riegle-Neal Interstate Banking and Branching Efficiency Act | Codified at the national level the elimination of branching restrictions at the state level
1994 | Trucking Industry and Regulatory Reform Act | Eliminated remaining interstate and intrastate trucking regulation
1995 | ICC Termination Act | Abolished the ICC
1996 | Telecommunications Act | Deregulated cable TV rates, set conditions for local telephone companies to enter long-distance telephone markets, mandated equal access to local telephone systems
1996 | FERC Order 888 | Removed impediments to competition in the wholesale bulk power market
1999 | Gramm-Leach-Bliley Act | Repealed the prohibition on mixing banking with securities or insurance that had been put in place with the Glass-Steagall Act
1999 | FERC Order 2000 | Advocated establishment of independent regional transmission organizations to facilitate competition in wholesale electricity markets
2002 | Sarbanes-Oxley Act | Instituted new regulations concerning financial practices, corporate governance, and corporate disclosure
2010 | Dodd-Frank Act | Instituted new regulation to limit the likelihood that large financial institutions will fail and to limit government bailouts of firms that do fail; restricted the activities of commercial banks and extended Federal Reserve oversight to large financial institutions other than commercial banks

Acknowledgments: The development of this table was aided by suggestions from Timothy Brennan, Robert Crandall, Paul Kleindorfer, Randall Kroszner, Paul MacAvoy, Thomas Gale Moore, and Sam Peltzman. Their assistance is most appreciated. They are not responsible for any errors.

The Regulatory Process Overview of the Regulatory Process Stage 1: Legislation There are two key stages in the regulation of an industry (a third stage, deregulation, sometimes also occurs). In the first stage, Congress, a state legislature, or a local government body (such as a city council) enacts legislation that establishes regulatory powers over a particular industry. During this stage, industry suppliers lobby legislators in an attempt to ensure the regulation does not constrain their profit unduly, and consumer advocates lobby legislators to ensure that the regulation protects consumers and enhances their welfare. In addition, representatives of industry workers (such as labor unions) attempt to convince legislators to adopt regulation that promotes workers’ well-being. Stage 2: Implementation The second stage in the regulatory process is the implementation of the legislation that has been passed.

Although the legislature can influence the implementation of the legislation, a regulatory agency often is charged with the bulk of the implementation. Thus, regulators replace legislators as central actors at the implementation stage, and producers, consumers, and workers typically direct their lobbying efforts toward the regulatory agency. Potential entrants who desire to enter the regulated industry often lobby for favorable terms of entry. Stage 3: Deregulation A third stage that can sometimes arise is industry deregulation. Legislators, regulators, industry suppliers, and consumer advocates all can play a role in industry deregulation, as can members of the judicial and executive branches of government. Long before Congress passed the Airline Deregulation Act, the airline industry was being deregulated by the Civil Aeronautics Board (CAB). In this respect, the White House can play a significant role in its choice of regulatory commissioners. Indeed, it was no accident that President Jimmy Carter appointed the free-market advocate Alfred Kahn as CAB chairman.
Regulatory Legislation
Selection of the regulatory agency Legislation performs two key tasks in the regulatory process. First, it determines which bureaucratic agency has jurisdiction over the various dimensions of industry regulation. Legislation can create the primary regulatory agency, as was the case with the Interstate Commerce Act of 1887 and the Communications Act of 1934. In other cases, legislation extends the domain of an existing agency, as the Motor Carrier Act of 1935 did by subjecting motor carriers to the regulatory oversight of the ICC. Powers of the regulatory agency The second key task of legislation is to specify the powers of the regulatory agency. Two key powers often are the control of prices and the control of entry into and exit from the industry. Although the Interstate Commerce Act of 1887 gave the ICC regulatory jurisdiction over the railroad industry, it took the Hepburn Act of 1906 and the Transportation Act of 1920 for the ICC to gain the power to control rail rates. Sometimes, the power that a piece of legislation grants to a regulatory agency is not entirely clear. Prior to a decision by the Supreme Court in 1954, the Federal Power Commission believed that the Natural Gas Act of 1938 did not give it the power to control the price of natural gas. General policy objectives Legislation also often specifies the objectives that the regulatory agency should pursue. For example, legislation may instruct the regulatory agency to ensure that the regulated service is widely available to consumers on terms that are “just and reasonable” and not unduly discriminatory.
Independent Regulatory Commissions
An independent federal regulatory commission typically is composed of five or more members. Table 10.5 identifies some major federal regulatory agencies. Federal regulatory commissioners are appointed, but state public utility commissioners are elected in some states.13 Each commissioner is appointed for a fixed term, and the terms of different commissioners are staggered (so they begin at different times). Regulatory commissioners typically enjoy a substantial degree of independence from the executive branch of

government. A commissioner can be removed for cause, but not at the whim of the president. Table 10.5 Major Federal Economic Regulatory Commissions

Although regulatory commissioners enjoy some independence from the executive and legislative branches of government, commissioners cannot act in an arbitrary fashion. Section 557 of the 1946 Administrative Procedure Act requires all administrative decisions by a regulatory commission to be substantiated by findings of fact and law. Members of a regulatory agency

Political scientist James Q. Wilson has identified three different kinds of employees of a regulatory agency.14 The careerist is an employee who anticipates a long-term relationship with the regulatory agency and whose major concern is that the regulatory agency continue to exist and grow. Careerists tend to disfavor industry deregulation. The politician envisions eventually leaving the agency for an elected or appointed position in government, so the regulatory agency serves as a stepping stone to other roles. Many commissioners are politicians. Finally, the professional tends to be particularly concerned with developing valued skills that will facilitate career advancement. Understanding the incentives of the employees of a regulatory agency can help explain the policies the agency adopts. To illustrate, consider the implementation of price regulation. The professional may desire to use this opportunity to show technical expertise, and therefore may recommend a complex pricing structure. In contrast, the careerist might support a simple pricing structure so as to avoid any major problems that could invite legislative action. Finally, because the politician is concerned with not aggravating interest groups, he or she might be particularly inclined to avoid price discrimination, because it could alienate some consumers. The ensuing analysis of regulatory policy will not consider such personal motivations in detail when assessing the implementation of regulation, but the potential influence of such motivations should be kept in mind.
Regulatory Procedures
Because the policy objectives stated in formal legislation tend to be broad and vague, a regulatory agency often has considerable discretion in how it regulates the industry. For example, the agency may be able to choose among a wide variety of pricing structures while remaining faithful to the charge to implement “just and reasonable” prices. In some cases, though, legislation delineates the duties of a regulatory agency fairly precisely. For example, the Emergency Petroleum Allocation Act (1973–1975) and the Energy Policy and Conservation Act (1975–1981) provided a detailed formula regarding the price structure for domestic crude oil. Consequently, the Federal Energy Administration had minimal discretion over the regulation of crude oil prices. Rulemaking process Regulatory rulemaking can take one of two forms. First, a regulatory agency can adopt a case-by-case approach, considering each proposal on its own merits. Key proposals include requests for rate changes and petitions for industry entry or exit. Second, the agency can employ substantive rulemaking, in which case hearings are conducted that lead to the formulation of a general rule that is applicable to a wide class of situations. The FCC adopted substantive rulemaking rather than the more burdensome case-by-case approach when deciding whether entry into a segment of the interstate telecommunications market would be permitted. If the participants do not agree with the details of a regulatory decision, they have the right to appeal the decision to a designated court. When the FCC decided not to allow MCI to provide long-distance telephone service, MCI appealed the FCC’s decision to the U.S. court of appeals. In its Execunet I decision (1978), the court reversed the FCC’s decision. The FCC has also had its decisions overturned on appeal when it attempted to promote, rather than limit, industry competition. 
For instance, the FCC’s 1992 ruling that local telephone companies must allow competitors direct access to the local phone network was overturned by a federal appeals court. Delay and strategic manipulation of regulatory proceedings

Regulatory procedures tend to be biased toward maintaining the status quo. Regulators must comply with due process requirements to implement changes. Consequently, industry participants effectively have legal rights to the status quo, and it can only be changed through time-consuming due process. This situation is quite distinct from the market, where the status quo is regularly disrupted and no legal recourse is available as long as no laws are violated. The delay that most regulatory proceedings entail also promotes the status quo. An agent who is interested in maintaining the status quo can pursue tactics (such as litigation) to slow regulatory proceedings. Delay, intentional or otherwise, is common in regulatory proceedings. Licensing proceedings at the CAB and the ICC averaged 170 days for the prehearing stage, 190 days for the hearing stage, and 220 days for the agency review stage. In total, the typical application for a license to operate in the industry took more than nineteen months. Ratemaking proceedings at the CAB, the ICC, and the Federal Power Commission took even longer, averaging more than twenty-one months.15 Parties to regulatory procedures can attempt to influence policy in ways other than delaying the regulatory process. For example, the regulated firm can control the flow of information to the regulatory agency. When determining appropriate prices, a regulatory agency usually depends on the regulated firm for estimates of cost and demand conditions. Outside expert witnesses can be called on, but their information generally is less accurate than the information the firm has at its disposal due to its day-to-day industry operations.16 Theory of Regulation Why does regulation exist? In a free-market economy like that of the United States, why does the government choose to place restrictions on the decisions of agents? One of the objectives of a theory of regulation is to answer this question. Such a theory should make predictions concerning who benefits from regulation, which industries are most likely to be regulated, and what form regulation will take.17 A compelling assessment of these issues should allow us better to understand the effects of regulation. For example, if we know that price regulation tends to benefit industry producers, it would be reasonable to expect prices to be set significantly above cost in regulated industries. In this section, we outline the evolution of thought on why regulation exists. The evolution has entailed three primary hypotheses. The first hypothesis, known as the public interest theory or the normative analysis as a positive theory (NPT), suggests that regulation occurs in industries where unfettered activities are deemed to have produced undesirable outcomes.18 Because empirical evidence often failed to support the NPT, economists and political scientists developed the capture theory (CT). Basically, the CT states that the agency created to regulate an industry often is “captured” by industry suppliers, so regulation tends to promote industry profit rather than social welfare. For reasons described later, NPT and CT are actually not theories but rather hypotheses or statements about empirical regularities. This is to be contrasted to the third stage in this evolution of thought, which is the economic theory of regulation (ET). This is indeed a proper theory: It generates testable hypotheses as logical implications from a set of assumptions. 
Although ET explains some observed regulatory activity quite well, it does not appear to explain all such activity. Normative Analysis as a Positive Theory Normative rationale for regulation Despite the inherent limitations of regulation, it can enhance welfare in settings where unrestrained competition does not work well. This can be the case, for example, in the presence of natural monopoly or

pronounced externalities. Recall that an industry is a natural monopoly if, at relevant levels of output, industry cost is minimized when a single firm supplies all output. Natural monopolies often prevail when least-cost production requires substantial fixed costs, as when a supplier of local telecommunications services or an electricity distributor must establish a ubiquitous network to supply service. As noted above, natural monopoly is problematic, because it introduces a fundamental conflict between allocative efficiency and productive efficiency. Productive efficiency requires that only one firm produce in order to minimize production costs. However, a monopoly supplier typically will be inclined to set price well above cost in order to maximize profit, so allocative efficiency is not achieved. Therefore, regulation may play a useful role in requiring a monopoly supplier to set prices at levels that ensure allocative efficiency. An externality exists when the actions of one agent, say agent A, affect the welfare (or the profit) of another agent, say agent B, and agent A is not concerned with how his behavior affects agent B’s welfare. When an externality is present, perfect competition does not guarantee a welfare-maximizing allocation of resources. For example, suppose Jared is considering buying an Italian submarine sandwich for lunch. Further suppose that competition among sandwich makers drives the price of the sandwich to its marginal cost of production. If relevant input markets are also competitive, then the value of resources used by society in supplying a sandwich will be equal to the price of the sandwich, which we will denote as P. If V is the maximum amount that Jared is willing to pay for a sandwich and if V exceeds P, then Jared will buy the sandwich and receive a surplus of V − P. If sandwich consumption entails no externalities, then the net welfare gain to society (V − P) is positive, so the transaction should (and will) take place. Now suppose sandwich consumption generates externalities. In particular, suppose the sandwich has onions on it (as any good Italian sub does), and Jared is planning to travel on a crowded subway after eating the sandwich. Unfortunately, the individual who sits next to him on the subway will have to smell his bad breath. Suppose this individual would be willing to pay E dollars to avoid this unpleasant experience. The net welfare associated with Jared’s purchase and consumption of the Italian sub now is not V − P, but V − P − E. If E > V − P, then welfare declines when Jared consumes the sub, even though he is personally better off. Hence, in the presence of externalities, transactions in competitive markets can reduce welfare. Externalities can take many forms. Negative externalities include noise pollution, water pollution, and traffic congestion. When deciding whether to drive to work or take mass transit, the typical automobile driver does not consider the effect of his decision on the quality of the air that others must breathe. A common pool problem is a different type of negative externality. It occurs when a resource is effectively owned by many property owners. For instance, several firms may extract oil from a common reservoir, or several fishermen may fish in the same lake. In pursuit of their own objectives, these agents do not take into account how their activity reduces the resource and thereby raises the cost of production to other agents. 
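The arithmetic of the sandwich example can be restated in a short calculation. Only V, P, and E come from the text; the dollar amounts below are assumed for illustration, and the final lines foreshadow the corrective tax discussed shortly.

```python
# Numerical restatement of the Italian-sub example. V, P, and E play the roles
# defined in the text; the dollar amounts are assumed for illustration.

V = 8.00   # Jared's maximum willingness to pay for the sandwich
P = 5.00   # competitive price, equal to the marginal (resource) cost of the sandwich
E = 4.00   # amount the neighboring subway rider would pay to avoid the bad breath

private_gain = V - P        # Jared buys whenever this is positive
social_gain = V - P - E     # society's gain once the externality is counted

print(f"Private gain: {private_gain:+.2f}   Social gain: {social_gain:+.2f}")
# Here V - P = 3 > 0, so the sale occurs, yet E > V - P, so welfare falls by 1.

# A corrective tax equal to E makes the buyer face the full social cost,
# and the welfare-reducing purchase no longer occurs.
tax = E
print("Jared still buys once the tax is imposed?", V - (P + tax) > 0)
```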
In the presence of negative externalities, unregulated competition typically results in too much of an activity being pursued, whether it is too many Italian subs being consumed or too much oil being extracted from a reservoir. Externalities can be positive. For example, if you are immunized against a disease, you not only protect yourself but also reduce the spread of the disease, thereby making others better off. Just as there is typically too much activity when a negative externality exists, there is typically too little activity when a positive externality is present. Market failures, whether due to natural monopoly, externalities, or some other source, introduce a potential rationale for government intervention. In the case of a natural monopoly, price controls and limits on industry entry may allow both allocative and productive efficiency to be secured. Prohibitions on entry

ensure that only one firm produces (as required for productive efficiency), whereas price regulation restricts the firm to setting welfare-maximizing prices (as required for allocative efficiency). In the case of externalities, imposition of a tax (subsidy) on an activity that generates a negative (positive) externality can increase welfare. Although regulation may enhance welfare in principle in the presence of market failures, whether it does so in practice remains an open question that will be considered in the ensuing discussion. Description of NPT Normative analysis considers when regulation should occur. In contrast, positive analysis considers when regulation does occur. NPT uses normative analysis to generate a positive theory by asserting that regulation is supplied in response to the public’s demand for the correction of a market failure or to change policies that are deemed to be inequitable (for example, prices that are unduly discriminatory or profits that are deemed to be egregious). According to this theory, if a market is a natural monopoly, then the public will demand the industry be regulated, because welfare will not be maximized in the absence of regulation. Unfettered competition will result in either too many industry suppliers or prices that exceed welfare-maximizing levels. The prospect for welfare-enhancing regulation can generate a demand for regulation in such settings. In this way, the public interest theory uses normative analysis (when should regulation occur?) to produce a positive theory (when does regulation occur?). Critique of NPT There are at least three major problems with NPT. First, it is at best an incomplete theory. NPT puts forth the hypothesis that regulation occurs when it should occur, because the potential for a net social welfare gain generates a demand for regulation. This premise lacks a description of the mechanism that ensures welfare-enhancing regulation is implemented. Regulation occurs through legislative action and the behavior of the regulatory agency. NPT does not address the issue of how the potential for net social welfare gains induces legislators to pass regulatory legislation and regulators to pursue the appropriate actions. NPT does not generate the testable prediction that regulation occurs to correct a market failure but instead assumes it. Second, substantial empirical evidence refutes NPT. Many industries have been regulated despite the absence of an efficiency rationale. Examples include price and entry regulation in the trucking, taxicab, and securities industries. In 1974, Richard Posner concluded, “some fifteen years of theoretical and empirical research, conducted mainly by economists, have demonstrated that regulation is not positively correlated with the presence of external economies or diseconomies or with monopolistic market structure.”19 The fact that industry suppliers have lobbied for regulation also raises questions about NPT. In the late 1880s, the railroads supported industry regulation, and AT&T supported regulation that eliminated competitors in the telecommunications sector. The support of industry suppliers is not necessarily inconsistent with NPT, but such support raises questions about the theory. It is conceivable that competition in a natural monopoly setting could impose losses on industry suppliers, and so regulation that limited competition could enhance welfare. 
However, rather than lobby for regulation that would ensure only normal profit, industry suppliers are more likely to lobby for regulation that would ensure stable, abovenormal profit, contrary to the predictions of NPT. Third, evidence suggests that regulation does not always constrain prices. George Stigler and Claire Friedland examined the effect of regulation on the prices set by electric utilities between 1912 and 1937.20 They found that regulation had an insignificant effect on prices. In contrast, NPT predicts that regulation would reduce prices substantially by forcing a monopolist to price at average cost rather than at the profit-

maximizing level. Reformulation of NPT The limited support for NPT forced its reformulation. The reformulation posits that regulation is originally implemented to correct a market failure, but then is mismanaged by the regulatory agency. Even this reformulated hypothesis is unsatisfactory for at least two reasons. First, like its predecessor, this formulation merely states a hypothesis rather than generating the hypothesis as a conclusion from a formal model. In particular, it does not explain why the regulatory agency is mismanaged. Second, the reformulated hypothesis is inconsistent with the evidence that industries are regulated even when they are not subject to significant market failures. Capture Theory Genesis of capture theory A review of the history of regulation in the United States since the late nineteenth century reveals that regulation is not strongly correlated with the existence of market failures. Furthermore, at least until the 1960s, regulation tended to increase, not decrease, industry profit. In potentially competitive industries, such as trucking and taxicabs, regulation supported prices above cost and prevented entry from dissipating abovenormal profit (also known as rent). In naturally monopolistic industries like electric utilities, some evidence indicated that regulation had little effect on prices, so industry suppliers were able to secure rent. Thus, empirical evidence provides support for the view that regulation often favors producers.21 These empirical observations led to the development of capture theory. In stark contrast to the original NPT, capture theory posits that regulation is supplied in response to the industry’s demand for regulation (in other words, legislators are captured by the industry) or that the regulatory agency comes to be controlled by the industry over time (in other words, regulators are captured by the industry).22 Critique of capture theory Although regulatory history may provide greater support for capture theory than for NPT, the former is subject to the same two criticisms that have been leveled against the latter. Like NPT, capture theory has no theoretical underpinnings, because it does not explain how regulation comes to be controlled by the industry. Regulation typically affects many interest groups, including consumers, workers, and industry suppliers. So why should the terms of the prevailing regulatory policy be controlled by industry suppliers rather than by some other interest group? In its original form, capture theory does not answer this question. Rather, it merely posits that regulation comes to favor industry producers over time. Although capture theory enjoys substantial empirical support, some empirical regularities seem inconsistent with the theory. For instance, regulation often implements cross-subsidies, which arise when a supplier of multiple services sets prices below cost for some services and offsets the resulting financial deficit with prices that exceed cost for other services. Such pricing behavior typically is inconsistent with profit maximization and so does not promote the interest of the industry supplier. Cross-subsidization has been regularly observed in such regulated industries as railroads, airlines, and intercity telecommunications. It often takes the form of uniform prices for all customers, even though some customers (for instance, those located in remote geographic regions) are substantially more costly to serve than others. 
Perhaps the strongest evidence against capture theory is the long list of regulations that were not supported by industry suppliers and have reduced industry profit. The list includes oil and natural gas price

regulation and social regulation of the environment, product safety, and worker safety. Capture theory also has difficulty explaining why many industries were originally regulated but later deregulated.

Economic Theory of Regulation

The empirical evidence suggests that regulation is not strongly associated with the existence of market failure (in conflict with NPT) and does not always enhance the earnings of industry suppliers (in conflict with capture theory). Furthermore, regulation improves the welfare of different interest groups in different industries. A theory that explains these empirical regularities would be valuable. Ideally, the theory also would explain why industries have experienced both regulation and (partial or full) deregulation. This has been the case, for instance, in such industries as railroads (regulated in 1887, deregulated in 1980), interstate telecommunications (regulated in 1910, partially deregulated starting in 1971), trucking (regulated in 1935, deregulated in 1980), airlines (regulated in 1938, deregulated in 1978), natural gas (price regulated in 1954, deregulated in 1989), and oil (regulated in 1971, deregulated in 1981). A comprehensive theory also would explain the simultaneous decline of economic regulation and rise of social regulation in the latter part of the twentieth century.

The Stigler approach

Nobel laureate George Stigler introduced his "Theory of Economic Regulation" in 1971.23 Stigler's contribution was significant in part because of the manner in which he approached the question of why regulation exists. In contrast to NPT and capture theory, Stigler put forth a set of assumptions and examined their logical implications, deriving predictions about which industries would be regulated and what form regulation would take. Stigler's analysis rests on two basic assumptions. First, the basic resource of the state is the power to coerce. An interest group that can convince the state to use its power of coercion for the benefit of that group can improve the group's well-being. Second, agents are rational in the sense that they choose actions to maximize their utility. These two assumptions imply that regulation will be supplied in response to the demands of interest groups acting to maximize their well-being. An interest group can attempt to enhance its well-being by securing favorable regulation from the state that serves to redistribute wealth from others to the interest group. As Stigler states:

We assume that political systems are rationally devised and rationally employed, which is to say that they are appropriate instruments for the fulfillment of desires of members of the society.24

Employing this fundamental insight, one can construct a theory that will offer predictions about which industries will be regulated and what form regulation will take. The remainder of this discussion of the economic theory of regulation (ET) describes some of the formal models this theory admits and the predictions of these models.25

Stigler/Peltzman model

In addition to specifying the basic elements of the ET, Stigler identified key factors that determine which interest group(s) will control regulation. Subsequent work by Sam Peltzman formalized Stigler's insights, some of which build on the work of Mancur Olson.26 The Stigler/Peltzman formulation has three crucial elements. First, regulatory legislation generally serves in part to redistribute wealth. Second, legislators seek to remain in office, so they tend to introduce

legislation that will maximize political support and thereby maximize the likelihood of reelection. Third, interest groups compete by offering their political support to legislators in exchange for favorable legislation. One implication of these presumptions is that regulation is likely to be biased toward benefiting interest groups that are better organized (so they deliver political support more effectively) and that have more to gain from favorable legislation (so they are willing to invest more resources in acquiring political support). Specifically, regulation is likely to benefit small interest groups with intense preferences at the cost of large interest groups with weaker preferences. In the Stigler/Peltzman analysis, the behavior of an interest group is driven by the desires and actions of its individual members. Only if many group members perceive substantial personal gain from the group's activity will the individual members incur the costs required to secure favorable legislation. Thus, interest groups in which the per capita benefit from favorable regulation is pronounced are most likely to secure favorable legislation, ceteris paribus.

Interest groups with many members can suffer from a free-rider effect. When an individual makes a contribution on behalf of her interest group, the contribution benefits all group members, even though only the contributor has incurred the cost. To illustrate, a union worker who contributes dues of $50 incurs the full cost, but all union members share in the increased political power the additional $50 generates. Because an individual captures only a fraction of the benefit that her contribution confers on the group, she may contribute relatively little to the group's cause. This effect tends to be most pronounced in larger groups, because the marginal impact of one person's contribution is smaller, but the cost to that person does not vary with the group's size. Of course, if all group members act in this manner, total contributions to the group will be limited. This free-rider effect becomes less pronounced as the size of the interest group declines, because each member's contribution has a proportionately bigger impact on the likely performance of the group. Thus, small interest groups, for which the per capita benefits from regulation are high, tend to have an advantage over large interest groups. Free-rider considerations can help to explain why regulatory policy tends to favor industry suppliers. Typically, there are relatively few suppliers in an industry, and each supplier benefits significantly from favorable regulation. In contrast, there are usually many consumers in an industry, and even though regulation that is favorable to suppliers can impose substantial harm on consumers in the aggregate, the harm imposed on each individual consumer may be small.

U.S. ethanol subsidies

U.S. policies that promote the blending of gasoline with ethanol provide an example of regulatory and legislative policies that deliver large benefits to a small group (of producers) at the expense of a large group (of consumers). Ethanol is a fuel that can be made from corn. When mixed with gasoline, ethanol can reduce the amount of certain pollutants that motor vehicles emit without substantially reducing fuel efficiency. Consequently, blending ethanol with gasoline has the potential to generate three types of benefits. First, when the price of oil is high relative to the price of ethanol, the blending can reduce the price of fuel used to power motor vehicles.
Second, the blending can reduce domestic reliance on oil produced by foreign governments. Third, the blending can reduce harmful environmental externalities produced by motor vehicles. In 1978, at a time when oil prices were high, the federal government began to promote the blending of gasoline and ethanol. It did so in part by instituting a subsidy of 40 cents per gallon of ethanol that was

mixed with gasoline. The subsidy stimulated the demand for ethanol, which in turn increased the demand for corn and for the services of plants that converted corn into ethanol. The magnitude of the per gallon subsidy for ethanol varied in subsequent years, but the ever-increasing production of ethanol led to ever-increasing subsidies paid to energy companies as they increased the blending of ethanol and gasoline. Corn growers also benefited from the higher price of corn that resulted from the increased demand for corn to produce ethanol. U.S. corn growers and domestic producers of ethanol further benefited from a tariff that was imposed on imported ethanol in 1980. While the policies provided large financial gains for a relatively small group of beneficiaries, they imposed costs on a widely dispersed group of individuals. In particular, U.S. taxpayers financed the billions of dollars of subsidies that were paid each year, and consumers bore the costs of the higher price of corn that resulted from the increased demand for the food staple. Even after accounting for benefits associated with reduced environmental externalities and reduced dependence on foreign oil, one study estimated that the costs of the U.S. ethanol promotion policies exceeded their benefits by $3 billion annually.27

Predicting the type of industry to be regulated

A key assumption in the Stigler/Peltzman model is that legislators will structure regulatory policy to maximize the political support it engenders. In doing so, legislators will determine the identity of individuals who will benefit from favorable regulation and how much benefit they will receive at the expense of others. For example, a legislator might dictate the terms of a price structure and in so doing determine which consumers will benefit (from particularly low prices), which consumers will be harmed (from relatively high prices), and how much firms will benefit (from increased profit). To explore further which industries are most likely to be regulated, consider Peltzman's model, which takes into account the design of price and entry regulation. In the model, a legislator/regulator chooses price so as to maximize political support. Let the political support function be represented by S(p, π), where p is price, and π is industry profit. The function S(p, π) increases as p declines, because consumers will support officials who implement lower prices. It also increases as π increases, because firms will support officials who ensure higher industry profit. Industry profit, π(p), varies with the prevailing industry price. Specifically, as illustrated in Figure 10.4, π(p) increases as p increases when p is below the monopoly price, pm, whereas π(p) declines as p increases when p exceeds pm. Observe that when p < pm, an increase in p increases consumer opposition but increases producer support by raising industry profit.
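Before turning to figure 10.4, a small numerical sketch can make the support-maximization problem concrete. The functional forms and parameter values below (linear demand, constant unit cost, and a support function that is linear in price and profit) are assumptions chosen only for illustration; they are not Peltzman's specification.

```python
import numpy as np

# Illustrative (assumed) primitives: linear demand Q(p) = a - b*p and constant unit cost c.
a, b, c = 100.0, 1.0, 20.0
alpha, beta = 40.0, 1.0   # assumed weights on consumer opposition and producer support

def profit(p):
    """Industry profit pi(p) under the assumed demand and cost."""
    return (p - c) * max(a - b * p, 0.0)

def support(p):
    """Political support S(p, pi(p)): decreasing in price, increasing in industry profit."""
    return -alpha * p + beta * profit(p)

prices = np.linspace(c, a / b, 10001)                      # from the zero-profit price to the choke price
p_star = prices[np.argmax([support(p) for p in prices])]   # support-maximizing price p*
p_m = prices[np.argmax([profit(p) for p in prices])]       # monopoly (profit-maximizing) price

print(f"competitive price p_c       = {c:.1f}")
print(f"support-maximizing price p* = {p_star:.1f}")
print(f"monopoly price p_m          = {p_m:.1f}")
# With these parameters p_c < p* < p_m: the support-maximizing regulator stops short of
# the profit-maximizing price, as the Peltzman model predicts.
```

With these assumed numbers the sketch returns p_c = 20, p* = 40, and p_m = 60, so the politically optimal price lies strictly between the competitive and monopoly prices, which is the pattern illustrated in figure 10.4.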

Figure 10.4 Optimal Regulatory Policy: Peltzman Model

To characterize the price that maximizes the political support function S(p, π) where π = π(p), it is helpful to consider the iso-support curves that are drawn in figure 10.4. The curve S1 represents all pairs of price and profit that generate the level S1 of political support. Note that the slope of an iso-support curve is positive, reflecting the fact that if price increases (and so consumer support declines), then profit must increase (to raise industry support) if the same level of political support is to be achieved. Because S(p, π) is decreasing in p and increasing in π, political support increases with movement up and to the left in Figure 10.4, so S3 > S2 > S1. The optimal price for the legislator, denoted p*, is the price that secures the highest level of political support subject to the constraint that profit equals π(p). Graphically, p* occurs where the π(p) curve touches the iso-support curve that is located farthest to the top and left in Figure 10.4. Note that p* lies between the competitive price, pc, where profit is zero, and the monopoly price, pm, where industry profit is maximized. Therefore, in contrast to capture theory, Peltzman’s model predicts that a legislator/regulator will not implement the price that maximizes industry profit. Doing so would alienate too many consumers and thereby reduce political support. This analysis can help explain which industries are likely to gain the most from regulation. If the equilibrium price an industry would achieve in the absence of regulation is close to the price that would exist under regulation, p*, then regulation is unlikely. The interest group that would benefit from regulation would anticipate only limited gain, because the industry price would not change much. Consequently, group members would not invest much effort to secure industry regulation. Therefore, the industries that are most likely to be regulated are those that are either relatively competitive (so that the unregulated equilibrium price is near pc) or relatively monopolistic (so that the unregulated equilibrium price is near pm). In both cases, some interest group will gain considerably from regulation. Firms will gain when the industry is relatively competitive, whereas consumers will gain when the industry is relatively monopolistic. Casual observation suggests that economic regulation often arises in these two extreme settings.

Monopolistic industries include local and long-distance telephone service, electric and gas utilities, and railroads. Relatively competitive industries include agriculture (where regulation often takes the form of price supports), trucking, taxicabs, crude oil production, and securities.

Becker model

In the Stigler/Peltzman model, a legislator or regulator chooses regulatory policy to maximize political support. In contrast, Gary Becker considers a model of competition among interest groups.28 He suppresses the role of the legislator/regulator by assuming that "politicians, political parties, and voters … transmit the pressure of active groups."29 Thus, regulation serves to increase the welfare of the most influential interest groups. To illustrate Becker's analysis most simply, suppose there are two interest groups, denoted group 1 and group 2. Each interest group can increase its welfare by acting to secure more favorable regulation. The transfer of wealth that group 1 receives depends on both the pressure it exerts on legislators and regulators (denoted r1) and the pressure exerted by group 2 (denoted r2). The amount of pressure is determined by the number of members in the group and the resources that each group member expends. Group 1 secures more influence on the political process as its pressure increases and as the pressure of group 2 declines. When group 1 secures more influence, it receives a larger wealth transfer. Let T denote the increase in group 1's wealth due to regulation. The magnitude of T is determined by T = I(r1, r2), where I(r1, r2) is called the influence function. This function is increasing in the pressure of group 1 and decreasing in the pressure of group 2. It is assumed that to transfer wealth T to group 1, group 2's wealth must be reduced by [1 + x]T, where x ≥ 0. When x > 0, more wealth is taken from group 2 than is transferred to group 1. The dissipated wealth xT can be viewed as a measure of the deadweight loss from regulation.

In Becker's model, each group chooses a level of pressure to maximize its welfare, given the pressure level chosen by the other group. Because pressure is costly to supply, groups will not deliver unlimited pressure. However, each group will supply some pressure to increase its influence and thereby secure more favorable regulatory treatment. Accounting for the benefits and costs of pressure, one can determine ψ1(r2), the optimal level of pressure for group 1 given the pressure (r2) delivered by group 2. This so-called best reply function is plotted in figure 10.5. The function indicates that if, for example, group 2 applies pressure r2′, then the level of pressure that maximizes the welfare of group 1 is ψ1(r2′), which is denoted r1′ in figure 10.5. Notice that ψ1(r2) is upward sloping. This is the case because group 1's influence declines as group 2 exerts more pressure. Consequently, group 1 finds it optimal to apply more pressure to counteract increased pressure from group 2.

Figure 10.5 Political Equilibrium: Becker Model

A political equilibrium consists of a pair of pressure levels for which neither group has an incentive to change its decision. In other words, the pair of pressure levels (r1*, r2*) is a political equilibrium if: (i) given that group 2 applies pressure r2*, r1* is the pressure that maximizes group 1's welfare; and (ii) given that group 1 applies pressure r1*, r2* is the pressure that maximizes group 2's welfare.30 Graphically, the political equilibrium occurs at (r1*, r2*) in figure 10.5, where the two best reply functions ψ1(r2) and ψ2(r1) intersect. At this intersection, each interest group is exerting its preferred level of pressure, given the level of pressure exerted by the opposing group. Notice in figure 10.5 that both interest groups exert pressure in the political equilibrium. The optimal pressure for each group varies with the level of pressure exerted by the other group, because it is relative influence that determines equilibrium policy. This fact can reduce the importance of the free-riding problem in determining outcomes. Because all groups are subject to free-riding, it is the relative severity of free-riding that determines the equilibrium outcome. When the free-riding problem is less severe in group 1 than in group 2 (perhaps because group 1 has fewer members), group 1 will have a relative advantage over group 2. This is the case even if group 1 experiences a severe free-riding problem in an absolute sense.

Note that the political equilibrium is not Pareto efficient. Each group could invest fewer resources and achieve the same level of relative influence. Because the equilibrium outcome depends only on relative influence, the same outcome could be attained at lower cost for both groups. To illustrate the excessive political lobbying that can arise, consider the competition to be selected as a cable television operator in the

Brooklyn, Queens, Staten Island, and the Bronx regions of New York City: All the [franchise] applicants have hired influential lawyers and public-relations consultants, a roster of whom reads like a Who’s Who of former city and state officials.… [A vice president for one of the applicants] contends that these friends at city hall (who typically command fees of about $5,000 per month) have tended to cancel one another out.31

Competition among groups for influence in the political process consumes economic resources, producing a Pareto-inefficient outcome. The logic that underlies this result parallels the rationale for the Pareto inefficiency of Cournot oligopoly competition (see chapter 4).

Now consider the testable predictions of Becker's model of a political equilibrium. The model predicts that as the marginal deadweight loss from regulation (x) increases, the amount of regulatory activity (measured by the amount of wealth transfer T) declines. An increase in the marginal deadweight loss means that group 2 incurs a larger reduction in wealth for any given wealth transfer received by group 1. This greater potential loss spurs group 2 to apply more pressure for any given anticipated level of pressure by group 1. This effect of an increase in x on group 2's behavior is represented by an increase in its best reply function from ψ2(r1) to ψ2′(r1) in figure 10.5. Now, if group 1 is expected to apply some pressure level r1, then group 2 will apply pressure ψ2′(r1) rather than ψ2(r1), because the welfare loss imposed on group 2 is higher for any given value of T (because of a higher value of x). As a result, the new political equilibrium is (r1**, r2**), which entails more pressure by group 2 (r2** > r2*) and more pressure by group 1 (r1** > r1*). Even though group 1's best reply function is unchanged, it responds to the more intense pressure from group 2 by increasing its own pressure. On balance, though, the transfer to group 1 (T) declines, because the increase in group 2's pressure exceeds the increase in group 1's pressure.32

These observations imply that Becker's model predicts welfare-enhancing regulation is more likely to be implemented than welfare-reducing regulation. For example, suppose industry A is a natural monopoly and industry B is competitive. Then the deadweight welfare loss from regulating industry B is greater than that for industry A, ceteris paribus, because (only) industry B would secure the welfare-maximizing outcome absent regulation. Becker's model predicts that, due to the higher marginal deadweight loss from regulating industry B, more pressure for regulation will be applied in industry A than in industry B. More generally, Becker's model predicts that industries plagued by market failures (so that the marginal deadweight loss from regulation is relatively low or even negative) are more likely to be regulated. The beneficiary groups have greater potential for gain, so they will apply more pressure. Groups harmed by regulation will not be harmed as much because of the lower deadweight loss, so they will apply less pressure against regulation. In contrast to the Stigler/Peltzman model of regulation, Becker's model provides some support for NPT. Market failures give rise to potential welfare gains from regulation. Some interest groups stand to gain substantially from regulation, whereas other groups may be harmed only marginally (relative to interest groups in industries not subject to market failure) because of the absence of relatively large deadweight welfare losses. As a result, relatively pronounced pressure for regulation will arise in industries subject to market failure. In contrast to NPT, though, Becker's model does not state that regulation occurs only when there is a market failure. What determines regulatory activity is the relative influence of interest groups, and this influence is determined not only by the welfare effects of regulation but also by the relative efficiency of interest groups in pressuring legislators and regulators.
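A small numerical sketch can illustrate the mechanics of pressure competition. The functional forms below (a contest-style influence function and quadratic pressure costs) are assumptions chosen for illustration only; they are not Becker's specification, and with these simple forms the best replies are not everywhere upward sloping as in figure 10.5. The sketch computes a political equilibrium by iterating best replies and then repeats the exercise with a larger deadweight-loss rate x to show the key comparative static: the equilibrium transfer falls.

```python
import numpy as np

T_MAX = 100.0   # assumed maximum feasible wealth transfer to group 1

def influence(r1, r2):
    """Assumed contest-style influence function: the transfer T that group 1 obtains."""
    return T_MAX * r1 / (r1 + r2) if (r1 + r2) > 0 else 0.0

def payoff1(r1, r2):
    # Group 1 gains the transfer T = I(r1, r2) and pays an (assumed) quadratic pressure cost.
    return influence(r1, r2) - 0.5 * r1 ** 2

def payoff2(r2, r1, x):
    # Group 2 loses (1 + x) times the transfer and also pays a quadratic pressure cost.
    return -(1.0 + x) * influence(r1, r2) - 0.5 * r2 ** 2

def equilibrium(x, iters=50):
    """Compute a political equilibrium by iterating best replies over a grid of pressure levels."""
    grid = np.linspace(0.01, 10.0, 2000)
    r1, r2 = 1.0, 1.0
    for _ in range(iters):
        r1 = grid[np.argmax([payoff1(r, r2) for r in grid])]      # group 1's best reply to r2
        r2 = grid[np.argmax([payoff2(r, r1, x) for r in grid])]   # group 2's best reply to r1
    return r1, r2, influence(r1, r2)

for x in (0.0, 0.5):
    r1, r2, T = equilibrium(x)
    print(f"x = {x}: r1* = {r1:.2f}, r2* = {r2:.2f}, equilibrium transfer T = {T:.2f}")
# Raising the deadweight-loss rate x increases group 2's opposition pressure and lowers the
# equilibrium transfer T, in line with the comparative static described in the text.
```

Under these assumptions the run shows group 2's pressure rising and the transfer T falling as x increases, which is the feature of Becker's model that links deadweight loss to less regulatory redistribution.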
The appendix to this chapter presents a simple mathematical model that provides several predictions about when regulation is likely to arise.

Taxation by Regulation

Economic regulation often implements cross-subsidization by setting prices for some services below cost and offsetting the resulting reduction in profit by setting prices for other services above cost.33 Such pricing behavior is perplexing, because it appears to be inconsistent with both profit maximization and welfare maximization. Richard Posner has proposed an explanation for cross-subsidization.34 He posits that regulation implements cross-subsidies to assist the government in redistributing wealth among citizens. For example, regulators historically have imposed uniform prices for local telephone service. Consequently, a consumer who lives in an urban region where the unit cost of serving customers is relatively low pays the same price as a consumer who lives in a rural region where this cost is substantially higher. Posner suggests that society would like to redistribute resources from one class of consumers to another class of consumers and that cross-subsidization can facilitate this redistribution. In the present context, uniform pricing subsidizes consumers who live in less densely populated areas at the expense of consumers who live in more densely populated regions. Posner’s observations can be interpreted in light of Becker’s model. Cross-subsidization might suggest that some consumers (those who enjoy prices below cost) have relatively more influence on the political process than other consumers (those with price above cost). Although cross-subsidization cannot be explained by either NPT (because it is inconsistent with welfare maximization) or CT (because it is inconsistent with profit maximization), it can be explained as the result of self-interested competition among interest groups to influence government policy. Summary of results Stigler’s economic theory of regulation provides four major conclusions. First, regulation tends to benefit relatively small groups with strong preferences for favorable regulation at the cost of relatively large groups with weak preferences for favorable regulation. Consequently, regulation will often favor producers rather than consumers. Second, even when regulation favors industry suppliers, policy (in particular, price) will not be set so as to maximize industry profit. The constraining influence of consumer groups will compel regulators to set a price below the profit-maximizing level. Third, regulation is most likely in relatively competitive or relatively monopolistic industries. It is in these industries that regulation will have the greatest impact on the welfare of some interest group. Fourth, the presence of a market failure makes regulation more likely because the gain to some interest groups is large relative to the loss to other interest groups. As a result, the former will have more influence on the legislative process, ceteris paribus. Critique of ET: Modeling the regulatory process While ET has considerable appeal, it is not beyond reproach. Several features of ET have been questioned. In particular, the Stigler, Peltzman, and Becker models all assume that interest groups influence regulatory policy directly. In practice, though, many actors play a role in the implementation of regulation. Voters and special interest groups determine which legislators are elected, elected legislators specify the language of the regulatory legislation, and regulators determine the details of the policy that is implemented. 
If interest groups are to have a significant impact on regulatory policy, they must have a strong impact on election outcomes. In addition, legislators must be sufficiently constrained by the threat of losing interest group support that they implement the policies favored by the interest groups that elected them (and presumably are needed for reelection). Furthermore, legislators must exert sufficient control over regulators to induce them to implement the desired regulation. Theories of economic regulation often are criticized for not

specifying the precise means by which interest groups control legislators and legislators control regulators. ET also emphasizes legislators’ concern with reelection. Most legislators care deeply about being reelected (and thus want to appease the interest groups that originally elected them). However, they also care about other things. Like voters, legislators typically have preferences on a wide variety of issues, even if the issues have little impact on reelection. Such preferences have been referred to as ideologies, which are “more or less consistent sets of normative statements as to best or preferred states of the world.”35 Because interest groups cannot perfectly control or monitor the activities of legislators, a legislator may pursue her own ideology rather than faithfully promote the welfare of the interest groups that were instrumental in her reelection. ET also tends to view regulators as beholden to legislators. In practice, regulators can be difficult to control, because they have access to information about the regulated industry that is not available to legislators and because it is difficult for legislators to continually draft new legislation to redirect regulatory policy. Consequently, regulators can have considerable discretion when implementing policy.36 Congress may be able to employ its budgetary powers to threaten to reduce the funding of regulatory agencies that do not implement the policies that legislators favor.37 In spite of such threats, though, regulators typically have significant independence from legislators. ET also affords little explicit attention to the role of the judiciary. In practice, courts often play a central role in the regulatory process: Judicial consent is necessary when a statute must be reinterpreted in order to implement a change. For instance, reinterpretation of the existing statutes was necessary for the deregulation of airline, trucking, telecommunications and several other industries, and the deregulation of various environmental, health and safety standards. Deregulation occurred only in those cases which were approved by the judiciary. Further, where it did occur, the opposition from committees of Congress was irrelevant.38

ET does not explain in detail the objectives of judges or how interest groups affect judges. Testing theories of regulation Does the empirical evidence support ET? To serve as a useful predictor of regulatory policy, ET should be able to explain both the regulation and the deregulation of such industries as railroads, trucking, intercity telecommunications, and crude oil. Specifically, the theory should be able to identify the elements of the environment that promote deregulation. According to NPT, deregulation will occur when there are changes in cost or demand conditions that limit the likelihood of a market failure in the absence of regulation. In contrast, ET predicts that deregulation will occur when the relative influence of interest groups that benefit from regulation is reduced. This reduced influence could stem from changes in cost or demand conditions (by affecting such things as the deadweight loss associated with regulation) or changes in the cost of organizing groups. Organization costs may decline, for example, if a new technology (like social media) or a new political entrepreneur with exceptional organizing skills arises. Deregulatory movements in the United States appear to provide mixed evidence.39 The deregulation of the railroad industry between 1976 and 1980 would appear to be broadly consistent with ET. The original regulation of the industry is explained by the industry becoming more influential in the political process. Although regulation originally facilitated increased industry earnings, it eventually began to constrain railroads’ earnings. Consequently, ET would predict industry pressure for deregulation, which is what emerged in the mid-1950s. ET seems less able to explain deregulation in the trucking industry, which was

earning substantial rent from regulation at the time of its deregulation. Further, it is not clear why consumers of trucking services would have become more influential in the political process relative to trucking firms and the Teamsters Union. The deregulation of the interstate telecommunications market might be viewed as consistent with both NPT and ET. As we will see in chapter 14, deregulation can be explained as a response to the industry no longer being a natural monopoly, as NPT would predict. ET may better explain the FCC's efforts to limit industry entry for many years. Technological change in the industry brought forth a new interest group in the form of prospective suppliers of long-distance telephone service (initially, MCI). This interest group gained sufficient influence to pressure the FCC to allow partial entry, but AT&T maintained sufficient influence to preclude full entry. A ruling from the U.S. Court of Appeals was required to ensure full entry.

More systematic and direct tests of ET have been conducted. These analyses investigate whether regulation tends to favor interest groups with low organizing costs and a high per capita benefit from regulation. The analyses also explain why states allow reciprocity for dentist licenses,40 what determines the prices charged for nuclear energy,41 and why state regulatory commissions choose the particular forms of regulatory policy that they do.42 Although ET contributes significantly to our understanding of the origins and nature of regulatory policy, it does not enjoy the universal support of the empirical evidence. Thus, further work remains to develop a complete understanding of why regulation occurs when it does and why it takes the form that it does. We will consider the political elements of regulation more fully as we examine regulatory policy in specific industries. In particular, we will examine the political economy of state banking regulation in the following subsection, telecommunications regulation in chapter 14, and railroad and trucking regulation in chapter 16.

Deregulation of bank branching restrictions

Regulatory history

The widespread mergers, acquisitions, and overall expansion that occurred in the banking industry in the 1990s constituted a substantial change from earlier industry activity.43 Regulation severely limited bank expansion prior to the 1970s. At the federal level, the Bank Holding Company Act of 1956 effectively prohibited a bank from having branches in more than one state. State regulations restricted intrastate branching, with some states limiting banks to a single branch (known as "unit banking"). In stark contrast, interstate banking and branching are largely unrestricted today. When did deregulation occur and why?

Multibank holding companies (MBHCs) emerged as a response to restrictions on bank branching. An MBHC could own and operate multiple bank subsidiaries but could not integrate their operations. Each bank had to be run independently. Consequently, the owner of an account in one bank could not access the account at another bank even if the two banks were owned by the same MBHC. When deregulation initially took hold, MBHCs were permitted to convert subsidiary banks into branches of a single bank and to acquire other banks and make them branches as well. Later deregulation would permit banks to open new branches.
A study by Randall Kroszner and Philip Strahan examined the ability of NPT and ET to explain when states chose to permit MBHCs to convert subsidiaries into branches of the same bank.44 Figure 10.6 illustrates the considerable variation in the timing of this form of deregulation. The study focused on the thirty-six states that introduced deregulation between 1970 and 1992.

Figure 10.6 Deregulation of Restrictions on Intrastate Bank Branching Source: Randall S. Kroszner and Philip E. Strahan, “What Drives Deregulation? Economics and Politics of the Relaxation of Bank Branching Restrictions,” Quarterly Journal of Economics 114 (November 1999): 1437–1467.

Predictions of NPT and ET Recall that NPT posits that regulatory policy is designed to maximize social welfare. Consequently, NPT predicts that deregulation should take place earlier in those states where regulation was imposing particularly pronounced reductions in social welfare. Recent evidence suggests that bank branching generates substantial efficiencies. Consequently, regulation that prohibited MBHCs from converting their subsidiaries into branches effectively benefited smaller, less efficient banks by protecting them from encroachment by larger, more efficient banks. The associated welfare loss should then be higher in those states where small banks have a more pronounced presence. Therefore, NPT predicts that deregulation should be introduced earlier in states with a greater presence of small banks. A second factor pertinent to the timing of deregulation is the presence of small commercial firms. Small suppliers tend to be especially dependent on the local banking sector for credit. (Large firms often can raise capital through other means, such as equity offerings). Consequently, regulations that limit the efficiency of the banking sector should tend to impose the largest reductions on social welfare in states with a greater presence of small firms. NPT predicts that deregulation should occur earlier in such states. Now consider the predictions of ET. By permitting efficiencies to be realized, deregulation would tend to benefit larger banks at the expense of smaller banks. Consequently, ET predicts that small banks should exert pressure to delay deregulation. Contrary to NPT, then, ET predicts that deregulation should be introduced more slowly in states with a greater presence of small banks. However, ET, like NPT, predicts deregulation should occur more rapidly in states with a greater presence of small commercial firms. This is the case because these firms stand to gain the most from deregulation and so will devote more resources to securing its early implementation.

Performance of ET and NPT These predictions were tested by estimating how the presence of small banks and small firms affected the amount of time that elapsed before deregulation occurred in the state banking sector. The presence of small banks in a state is measured by the percentage of banking assets in the state controlled by small banks, where a bank is defined as “small” if its assets are below the median level of assets for banks in that state. The presence of small commercial firms in a state is measured by the proportion of all business establishments in the state with fewer than twenty employees. Kroszner and Strahan found that deregulation is introduced more slowly when the presence of small banks is more pronounced and when the presence of small firms is less pronounced. The latter prediction is consistent with both ET and NPT. The former prediction is consistent only with ET. Therefore, ET seems to explain the timing of deregulation of bank branching restrictions better than NPT.
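The research design can be illustrated with a small simulation. The data generated below are hypothetical, and ordinary least squares is used only as a simple stand-in for the duration (hazard) analysis employed in studies of this kind; the point is just to show how the two predictions map into the signs of estimated coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 36    # number of states deregulating intrastate branching between 1970 and 1992

# Simulated covariates (assumptions, not the actual Kroszner-Strahan data):
small_bank_share = rng.uniform(0.1, 0.6, n_states)   # share of state banking assets held by small banks
small_firm_share = rng.uniform(0.6, 0.95, n_states)  # share of establishments with fewer than 20 employees

# Assumed data-generating process: more small banks delay deregulation (ET), more small
# firms hasten it (both ET and NPT).
years_to_dereg = 10 + 12 * small_bank_share - 8 * small_firm_share + rng.normal(0, 1.5, n_states)

# Simple least-squares fit of time-to-deregulation on the two covariates.
X = np.column_stack([np.ones(n_states), small_bank_share, small_firm_share])
coef, *_ = np.linalg.lstsq(X, years_to_dereg, rcond=None)

print(f"coefficient on small-bank share: {coef[1]:+.2f}  (positive: deregulation is delayed)")
print(f"coefficient on small-firm share: {coef[2]:+.2f}  (negative: deregulation comes sooner)")
# A positive small-bank coefficient is consistent with ET but not with NPT; a negative
# small-firm coefficient is consistent with both theories.
```

In the actual study the finding of interest is exactly this sign pattern: delay rising with the small-bank presence (supporting ET over NPT) and falling with the small-firm presence (consistent with both theories).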

Summary and Overview of Part II Economic regulation typically entails regulation of price, quantity, and/or industry entry and exit. Legislation that imposes economic regulation has been introduced at different rates in different periods. Substantial economic regulation was implemented immediately following the Great Depression, whereas considerable deregulation has been undertaken since the 1980s. This chapter provides a brief review of the regulatory process, but does not do justice to the complexity of this process. Many economic agents are involved at the inception of regulation and throughout its implementation. To fully understand prevailing regulations, one must understand the motives of consumers, firms, unions, legislators, regulatory commissioners, and government bureaucrats. Several theories about why regulation takes the form it does have been discussed. Different variants of the economic theory of regulation (ET) appear to be most consistent with the evidence. However, ET does not explain all relevant evidence perfectly. Additional research is required to develop a comprehensive theory of regulation. The remainder of the ensuing discussion of economic regulation will proceed as follows. Chapter 11 explores possible alternatives to ongoing, active regulation of the activities of a profit-maximizing supplier, namely, competitive bidding for the right to serve as a monopoly supplier (of cable television services, for example) and public ownership of the supplier (as when a municipality owns the local supplier of electricity). Chapter 12 examines the more common setting in which a regulatory agency sets the prices that the monopoly producer can charge for the services it supplies. Chapter 13 contrasts rate of return regulation —the form of regulation that used to be widely employed in public utility industries—with incentive regulation, which has become more popular in recent years. Chapter 14 examines the design of regulatory policy in settings where an industry that once was a natural monopoly is no longer a natural monopoly. Chapter 14 illustrates key principles in the context of regulation in the communications sector. Chapter 15 considers the impact of regulation in industries with multiple suppliers. The chapter explores the key predictions of economic theory and examines how the predictions can be tested empirically. Chapter 16 employs the principles developed in chapter 15 to examine regulatory policy in the transportation sector. Chapter 17 reviews present and historical regulation in the energy sector. Chapter 18 concludes our discussion of economic regulation by examining regulatory policy in the financial sector. Appendix A Theory of Interest Group Competition Consider a market with two interest groups, consumers and firms. For simplicity, suppose that there is either a single firm or that firms act in perfect unison. There are N ≥ 2 consumers who may or may not act in a coordinated fashion. The situation is one in which firms lobby to be regulated (with an associated increase in their profit), and consumers may respond by lobbying against regulation. Competition for control of government policy takes place in three stages. In stage 1, firms decide how much to spend on lobbying for regulation. Let f denote the expenditure they incur. In stage 2, consumers observe f (and thus know what firms have spent), and each consumer decides whether to organize consumer interests in opposition to regulation. The organizing cost to an individual consumer is z > 0. 
If no consumer incurs this cost, then consumers are unable to lobby effectively, so regulation is instituted (as long as f > 0). If at least one consumer incurs the organizing cost, then consumers become organized effectively as a group, and the game moves to stage 3. In stage 3, the consumer group decides how much to spend on lobbying against regulation. Let c denote their expenditure. Regulation is implemented if firms outspend consumers (that is, if f > c), whereas regulation is not implemented otherwise (that is, if c ≥ f). If regulation is implemented, each consumer’s

welfare declines by an amount [1 + x]T/N (so total consumer welfare falls by [1 + x]T) and firm profit is raised by T. Hence, the welfare loss is xT, where x ≥ 0. To identify the equilibrium in this setting, we search for outcomes for which each party is acting in its own best interest, given the actions of the opposing party. To do so, we employ backward induction, which means solving the stage 3 game, then using that solution to solve the stage 2 game, and then using that solution to solve the stage 1 game. We start with stage 3, assuming that at least one consumer has incurred the organizing cost z. The consumer group knows the firms have spent f. If firms spent nothing (f = 0), then consumers will do likewise (c = 0) and avoid regulation. However, if f > 0 and regulation occurs, then the firms' payoff is T − f. Hence, it is never optimal for firms to spend more than T, as they would have been better off spending zero. If the firms' expenditure is f ∈ (0, T), then each consumer's payoff is

−c/N if c ≥ f (regulation is not implemented), and −[1 + x]T/N − c/N if c < f (regulation is implemented).
(These payoffs do not include the organizing cost, which is sunk and therefore irrelevant for decision making in stage 3.) If consumers choose expenditure c ≥ f, then regulation is not implemented and each consumer simply incurs her share of lobbying expenditure, which is c/N. In contrast, if consumers choose c < f, then regulation occurs, so each consumer loses welfare [1 + x]T/N plus her share of lobbying expenditure, c/N. Notice that consumers prefer to spend f rather than any larger amount, because they only need to match the firms' expenditure to avoid regulation. Also notice that c = 0 is preferable to any c ∈ (0, f), because spending anything less than f does not preclude regulation. Thus, to maximize their welfare, consumers should either spend f or 0. Matching the firms' expenditure to preclude regulation is preferable to not trying to preclude regulation when

−f/N > −[1 + x]T/N, or equivalently, f < [1 + x]T.
This inequality holds because T > f and x ≥ 0. Therefore, if consumers organize as a group, their optimal action is to match the expenditure of firms and thereby avoid regulation. Now consider stage 2 to determine whether consumers will choose to organize, knowing that if they do organize, they will spend f. Suppose f > 0, so regulation would take place if consumers do not organize. First consider the possibility that no consumer chooses to incur the organizing cost z. If no other consumer incurs z, then the payoff to a consumer who does the same is −[1 + x]T/N, because regulation will be implemented in this case. If this consumer instead incurs the organizing cost z, then regulation is not implemented and the consumer's payoff is −(f/N) − z, which includes the organizational cost and the consumer's subsequent share of lobbying expenditure f. All consumers will choose not to organize only when

−[1 + x]T/N > −(f/N) − z, or equivalently, f > [1 + x]T − zN.
Thus, if firms spend enough (so f > [1 + x]T − zN), consumers will fail to organize and regulation will be implemented. In contrast, if f ≤ [1 + x]T − zN, then it is not an equilibrium for consumers to fail to organize. Furthermore, it is an equilibrium for exactly one consumer to incur the organizing cost.45 In this event, consumers end up spending f in stage 3, and regulation is not implemented. Let us now consider the behavior of firms in stage 1. Anticipating the outcomes in stages 2 and 3, the firms realize that if they spend f ≤ [1 + x]T − zN, then consumers will respond by effectively lobbying against regulation. Therefore, the firms’ payoff is −f when f ≤ [1 + x]T − zN. Consequently, the firms prefer f = 0 to any f ≤ [1 + x]T − zN, because any such lobbying expenditure will be insufficient to secure favorable regulation. If the firms instead spend more than [1 + x]T − zN, then consumers will not organize a response, because they deem it to be too costly to preclude regulation. Consequently, the firms’ payoff will be T − f. By spending a penny more than [1 + x]T −

zN, the firms can secure regulation and a corresponding payoff of approximately T − [1 + x]T + zN, or zN − xT. Therefore, firms will spend enough to secure regulation only when

zN − xT > 0,  (10.1)
where zN − xT represents the payoff from spending [1 + x]T − zN to secure regulation, and 0 reflects the payoff from spending zero and not securing favorable regulation. This simple theory provides insights about when regulation is likely to occur.
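The backward-induction logic can be checked numerically. The sketch below implements the firms' stage 1 choice, given the consumer behavior derived above; the parameter values are arbitrary illustrations. As inequality 10.1 states, regulation is secured exactly when zN − xT > 0.

```python
def regulation_outcome(T, x, z, N, eps=1e-9):
    """Backward-induction outcome of the three-stage lobbying game described above.

    T: wealth transfer to firms if regulation is implemented
    x: deadweight-loss rate (consumers lose [1 + x]T in total)
    z: cost to a single consumer of organizing the group
    N: number of consumers
    """
    # Stages 2 and 3: if firms spend f <= [1 + x]T - zN, one consumer organizes and the
    # group matches f, so regulation fails. To deter organization, firms must spend more
    # than this threshold (the threshold may already be negative, so no one organizes).
    threshold = (1 + x) * T - z * N
    f = max(threshold, 0.0) + eps      # "a penny more" than the deterrence threshold

    firm_payoff = T - f                # payoff if firms lobby and secure regulation
    if firm_payoff > 0:                # positive exactly when zN - xT > 0 (inequality 10.1)
        return True, f, firm_payoff
    return False, 0.0, 0.0             # firms prefer to spend nothing


# Illustrative (assumed) parameter values: transfer T = 100, per-consumer organizing cost z = 2.
for x, N in [(0.1, 40), (1.5, 40), (0.1, 60)]:
    regulated, spend, payoff = regulation_outcome(T=100, x=x, z=2, N=N)
    print(f"x = {x}, N = {N}: regulation = {regulated}, "
          f"firm spending = {spend:.2f}, firm payoff = {payoff:.2f}")
# Regulation is secured exactly when zN - xT > 0: a larger deadweight loss x makes regulation
# less likely (Result 1), while more consumers N or a higher organizing cost z make it more
# likely (Results 2 and 3).
```

Running the three cases shows regulation succeeding when zN − xT > 0 and failing otherwise, which is the pattern summarized in the results that follow.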

Result 1. Regulation is less likely when the associated welfare loss is more pronounced. Inequality 10.1 implies that regulation occurs when x < zN/T. Because x measures the welfare loss from regulation, Result 1 follows from this inequality. When x is larger, the welfare loss is larger and consumers lose more for each dollar that the firms gain. Consumers are then more likely to mount an effective response, and firms are less likely to find it worthwhile to spend enough to overcome this response. Result 2. Regulation is more likely when there are more consumers. Inequality 10.1 implies that regulation occurs when N > xT/z. Therefore, regulation is more likely to be implemented as N, the number of consumers, increases. By assumption, the cost to any consumer of coordinating a response is z, regardless of the number of consumers in the group. However, each consumer’s share of the net gain from regulation declines when the gain must be shared with more consumers. Consequently, regulation is more likely to be implemented when consumers are numerous, so each consumer has a relatively small amount at stake. Result 3. Regulation becomes more likely as the cost that consumers must incur to effectively coordinate against regulation increases. Inequality 10.1 implies that regulation occurs when z > xT/N. For example, the presence of a political entrepreneur whose cost from organizing is low or who derives personal satisfaction from doing so will make an industry drive for regulation less likely to succeed. Result 4. The larger the impact of regulation, the less likely it is to be implemented. Inequality 10.1 implies that regulation occurs when T < zN/x, where T measures the impact of regulation. The larger is T, the larger the loss will be that consumers incur if regulation is implemented. Consequently, consumers are more likely to organize effectively to preclude regulation. Questions and Problems 1. Do you agree with the Nebbia v. New York decision? If so, why? If not, describe what you believe would have been a better judicial decision. 2. What roles do the legislature, the judiciary, and the regulatory agency play in the process of deregulation? How do interest groups affect deregulation? Should they be allowed to affect regulatory policy? 3. Sometimes, former regulatory commissioners are hired by the industry they previously regulated. What effect do you think this practice has on the relationship between a regulatory agency and the industry? Should the practice be allowed? Discuss the advantages and disadvantages of prohibiting this practice. 4. Is there a theory that can explain why both competitive industries (like taxicabs) and monopolistic industries (like local telephony) are regulated? 5. Can one explain why the railroad industry was regulated and then deregulated almost a century later? What about the

regulation and deregulation of trucking? 6. What is the empirical evidence for and against the economic theory (ET) of regulation? 7. What would be the effect on regulatory practices if regulatory agencies were composed of seven members, where two members represent firms, two members represent workers, and three members represent consumers? More generally, how do you think regulatory commissioners should be chosen? 8. Use the economic theory of regulation to explain the existence of trade barriers like tariffs and quotas. 9. What were the primary causes of the wave of deregulation that took place during the 1970s and 1980s? 10. In many cities, regulation restricts the number of taxicabs that can operate in the industry. Entry restrictions were relaxed in some cities in the 1980s, but not in others (see chapter 15). How could you use the deregulation of taxicab markets to test theories about why economic regulation arises in practice?


11 Alternatives to Regulation in the Market: Public Enterprise and Franchise Bidding, with an Application to Cable Television

Chapter 10 emphasized an important conflict that can arise if industry production costs are substantially lower when a single firm supplies the entire industry output. Although production costs may be minimized when a single firm serves all customers, the prices that consumers are charged may not be minimized. A monopoly supplier often is inclined to set prices well above relevant production costs. Therefore, even when monopoly supply ensures the lowest operating costs, it does not ensure the lowest prices. To resolve this natural monopoly problem, governments often will authorize production by a single supplier and then regulate the prices that the monopoly supplier can charge for its services. Subsequent chapters in this book will review in detail precisely how such regulation has been implemented in a variety of sectors, including the communications, energy, and transportation sectors. First, though, we discuss two possible alternatives to such regulation in the market: public enterprise and franchise bidding. A public enterprise is a firm that is owned by the government rather than by private shareholders. Local water, gas, and electricity distribution companies are sometimes owned by local municipalities rather than by private shareholders. Franchise bidding arises when a government awards the exclusive right to provide a service of a specified quality to the privately owned firm that offers to charge the lowest price for the service. Local providers of cable television service often are determined by franchise bidding.

Public Enterprise

Public enterprise might seem to provide a simple resolution to the natural monopoly problem. In principle, the government might take control of the enterprise and then instruct the firm's managers to set prices for the firm's products to maximize consumer welfare, not profit. To illustrate, consider the setting depicted in figure 11.1, where an efficient supplier can deliver output Q at average cost ACe(Q). The industry demand curve D(P) in figure 11.1 identifies the amount of the service that consumers will purchase when price P is charged for the service. Observe that the average cost of production declines as output increases for all output levels depicted in figure 11.1. Therefore, the figure illustrates a natural monopoly setting.

Figure 11.1 Prices Set by a Public Enterprise and a Profit-Maximizing Firm
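The two prices identified in figure 11.1 and discussed below, the profit-maximizing price Pm and the break-even price Pg, can be illustrated with a small numerical sketch. The Python snippet below assumes a hypothetical linear demand curve and a declining average cost curve (a fixed cost plus a constant marginal cost); all parameter values are illustrative and are not taken from the figure.

```python
import math

def monopoly_price(A, B, c):
    """Profit-maximizing price with inverse demand P = A - B*Q and constant
    marginal cost c (the fixed cost does not affect the pricing decision)."""
    return (A + c) / 2

def breakeven_price(A, B, c, F):
    """Lowest price at which revenue covers total cost F + c*Q, i.e., the price
    at which the declining average cost curve crosses the demand curve."""
    # Zero-profit condition (P - c) * (A - P) / B = F rearranges to
    # P^2 - (A + c)*P + (c*A + F*B) = 0; the smaller root is the lowest such price.
    disc = (A + c) ** 2 - 4 * (c * A + F * B)
    if disc < 0:
        raise ValueError("demand is too weak to cover costs at any price")
    return ((A + c) - math.sqrt(disc)) / 2

A, B = 100.0, 1.0                              # hypothetical demand: Q = 100 - P
print(monopoly_price(A, B, c=20))              # Pm = 60.0
print(breakeven_price(A, B, c=20, F=600))      # Pg ~ 28.4, the lowest price with nonnegative profit
print(breakeven_price(A, B, c=50, F=600))      # a higher-cost enterprise must charge 70.0 > Pm
```

The last line anticipates a point made later in this section: if a public enterprise operates with sufficiently higher costs than a private supplier, its lowest break-even price can exceed even the private monopolist's price.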

An unregulated, profit-maximizing monopoly will raise the price for its service above the corresponding average cost of production. The profit-maximizing price is Pm in figure 11.1.1 In contrast, a public enterprise that seeks to maximize consumer surplus without incurring a financial loss will set price Pg. At this price, the firm's average cost of supplying the Qg units of the service that consumers demand is exactly Pg, so the public enterprise earns zero (above-normal) profit and consumers are charged the lowest price that enables the firm to secure nonnegative profit.

The potential resolution of the natural monopoly problem illustrated in figure 11.1 may have some theoretical appeal. In practice, however, it can fail to fully resolve the problem for at least two important reasons: limited expertise and limited motivation.

A government enterprise may lack the expertise that privately owned firms possess. Complex tasks like building and operating networks to supply water, electricity, or communications services require considerable expertise. Relevant government officials may not possess the requisite expertise themselves, or they may be unable to devote the time required to run the firm. Consequently, the government will have to identify, attract, and retain experts to run the enterprise. Just as government officials have limited ability to run the enterprise themselves, they may have limited ability to identify the best managers to operate the firm. Furthermore, governments often impose relatively stringent limits on the compensation that can be paid to employees and contractors. To illustrate, the president of the United States is paid $400,000 annually. In contrast, scores of chief executive officers in the United States are paid more than $20 million annually. Consequently, government enterprises may be unable to attract the

most highly qualified managers.

A government enterprise also may not face particularly strong incentives to operate efficiently. When a privately owned firm discovers a new way to reduce its operating costs, the firm secures the full benefit of the associated cost savings in the form of higher profit. Consequently, the privately owned firm has considerable incentive to continually discover new ways to operate more efficiently.2 A government enterprise that is explicitly instructed not to maximize profit typically will have limited incentive to discover innovative ways to reduce costs and operate efficiently. Some government employees may be naturally inclined to labor diligently and discover new ways to improve the performance of the firm that employs them. However, employees typically have interests outside of the workplace, and when they anticipate limited rewards for excelling on the job, they may naturally devote a substantial portion of their attention to other activities.

Furthermore, government enterprises do not face the same discipline from capital markets that private firms with publicly traded shares face. If such a private enterprise fails to operate efficiently and secure the maximum possible profit, another firm or different group of managers will have strong financial incentive to take over the underperforming enterprise. This threat of potential takeover of the firm and the corresponding likely ouster of incumbent management can provide strong incentives for managers of private, profit-maximizing enterprises to ensure that the firm whose operations they oversee operates efficiently. Managers of government-owned firms do not face the same discipline, because private firms or groups of managers cannot credibly threaten to take over a government enterprise that is not operating efficiently. This lack of discipline can lead to higher costs and less efficient operations in government-owned enterprises.

When a government enterprise operates with higher costs than a private, profit-maximizing firm, consumers are not necessarily better served by the former enterprise, even when the enterprise sets prices that maximize consumer welfare rather than prices that maximize profit. Because it operates with higher costs, the government enterprise may have to charge higher prices than the private enterprise to avoid a financial loss, even when the private firm sets prices to maximize profit. This will be the case in figure 11.1, for example, if the average cost of production rises above ACh(Q) for the public enterprise but remains at ACe(Q) for the privately owned enterprise.

Similar logic pertains to performance dimensions other than operating costs. The profit motive may drive a private firm to discover new products and services that consumers value highly. Due to the more limited financial incentive it faces to enhance the revenue it secures from consumers, a government enterprise may be less likely to develop innovative products and services.

Despite these drawbacks to government-owned enterprises, such enterprises are not without their potential benefits. When the government owns a firm, it can direct the firm to undertake activities that are not profitable, even though they are highly valued by members of society.
For example, the government might direct a public enterprise to extend its electricity or water network to remote regions of the country that are not profitable to serve either because the regions are sparsely populated or because residents of the region have limited financial resources. Government ownership also can sometimes encourage enhanced delivery of dimensions of service quality that are difficult to measure accurately. For example, a profit-maximizing supplier might be tempted to deliver less than the welfare-maximizing level of customer service if, by doing so, the firm could substantially reduce its operating costs. In contrast, a public enterprise that is less focused on minimizing operating costs might be more inclined to deliver high levels of customer service.3 Government supply of certain services can also both facilitate oversight of the delivery of the services and secure revenue for the government. To illustrate, several U.S. states require that hard liquor (alcohol) be

purchased in government stores, and most states operate their own lotteries. By selling alcohol and lottery tickets themselves, state governments can enhance their ability to limit undesired sales (to minors, for example) and can collect substantial revenue that can be used to promote desired activities (for example, to fund state education programs).

In summary, government ownership may provide a reasonable resolution of the natural monopoly problem in some instances. However, it is not a panacea in general. Therefore, other means of resolving the problem warrant serious consideration.

Basic Elements of Franchise Bidding

Competition in the market will not resolve the natural monopoly problem, because industry costs rise when multiple firms serve customers in a natural monopoly setting. Franchise bidding attempts to resolve the natural monopoly problem by replacing competition in the market with competition for the market. Harold Demsetz suggested how franchise bidding might operate in practice.4 The government would first define the service in question (for example, the channels that a cable television operator would deliver to all customers who choose to purchase the service). The government would then award the exclusive right to offer the defined service to a single supplier via a competitive bidding process. The bid of each prospective supplier would be the price at which the supplier would deliver the service. The prospective supplier that offered the lowest bid would be awarded the exclusive right to deliver the service to all customers. The central idea behind such franchise bidding is that, in the presence of sufficiently intense competition among prospective suppliers for the right to serve as the sole supplier of the service, the price will be bid down to the average cost of supplying the service, so the winning bidder will secure only a normal profit. In the setting illustrated in figure 11.1, the winning bid would be Pg, so consumer surplus would be at the highest level compatible with nonnegative profit for the supplier. In essence, the government's role under franchise bidding is to act as an auctioneer rather than as a regulator.

Franchise bidding via a modified English auction

To better understand how ex ante competition for the market might substitute for competition in the market, consider a particular auction form that might be employed to conduct the franchise bidding. The English (or oral ascending) auction is a common auction form.5 When an item like a painting or an oil lease is to be sold, the English auction works as follows. The auctioneer announces a bid, to which the bidders respond by signaling whether they are willing to buy the item at the specified price. If there are at least two active bidders (a bidder is active if he signals that he is willing to buy the item at the prevailing bid), the auctioneer raises the bid. She continues to raise the bid until there is only one active bidder. The last remaining bidder wins the item and pays a price equal to the final bid. Under franchise bidding, the "object" is the franchise, which is the exclusive right to provide a specified service (for example, cable television service) to customers in the franchise area (for example, the local municipality). Furthermore, the franchise is awarded to the bidder who bids the lowest—not the highest—price for the franchise. Therefore, the standard English auction needs to be modified slightly.
After defining the terms of the franchise agreement (such as which program channels the cable television operator must carry), the auctioneer specifies a very high bid. The auctioneer then lowers the bid continually as long as there are two or more active bidders. As soon as the bid is lowered to the point where only one bidder remains, the franchise is awarded to the remaining bidder. In exchange for the franchise, the winning bidder must set a price for the service it supplies equal to the final bid.

Suppose there are four firms bidding for the franchise. Let ACi(Q) represent the average cost function for firm i = 1, 2, 3, 4. As depicted in figure 11.2, the firms have different cost functions. The differences could be due to firms having different production technologies as the result of patents or trade secrets. If only linear pricing (that is, a constant per-unit price) is feasible, social welfare is maximized if firm 1 supplies the good at the price at which its average cost curve crosses the demand curve. Call this break-even price p1*, and more generally let pi* denote the price at which ACi(Q) crosses the demand curve. In this event, the most efficient firm sets a price equal to its average cost of production.

Figure 11.2 Franchise Bidding Using a Modified English Auction

To determine whether a modified English auction will ensure this ideal outcome, we must determine the bidding behavior that the firms will adopt. Imagine that prior to the start of the auction, a firm decides on the bids for which it will remain active. Observe that firm i will choose to remain active as long as the prevailing bid, B, exceeds pi*. If B exceeds pi* and firm i wins the franchise, it will earn an above-normal profit, because it will be permitted to charge a price that exceeds its average cost. If B is less than pi*, then firm i will exit the bidding process, because, should it win the franchise at bid B, firm i would be obligated to charge a price that is less than its average cost, and so it would incur a loss. Therefore, the optimal bidding behavior for firm i is to remain active as long as B does not decline below pi*. If it were to win the franchise at bid pi*, firm i would earn a normal profit, because winning the franchise would require it to set a price equal to its average cost. Now consider the outcome of the franchise bidding process when firms 1, 2, 3, and 4 compete via a

modified English auction and employ this bidding behavior. If the auctioneer begins by specifying a bid above p4*, all four bidders will signal they are active. The auctioneer will lower the bid, B, whenever more than one bidder is active. Once B falls below p4*, firm 4 exits the bidding. The auctioneer will then continue to lower B, because three active bidders remain. Firm 3 drops out once B declines below p3*, which leaves only firms 1 and 2 competing. As soon as B falls just below p2*, firm 2 exits the bidding. Because this leaves firm 1 as the only active bidder, firm 1 wins the franchise and is obligated to charge a price just below p2* (the final winning bid).

This outcome entails both good news and bad news. The good news is that the firm with the lowest average cost curve becomes the franchise operator. A modified English auction will always produce this outcome, because the most efficient firm can always outbid its less efficient rivals. The bad news is that price is approximately equal to p2*, which exceeds the average cost of the selected franchise operator. Consumer surplus is reduced by the value of the shaded area (in figure 11.2) below the level of consumer surplus that arises when the price charged for the service is set equal to the franchise operator's average cost of production. Franchise bidding does not produce the welfare-maximizing price in this setting, because there is insufficient competition. Firm 1 faces only less efficient rivals.

The outcome would differ if firm 1 faced an equally efficient rival. To see why, suppose there are two firms with average cost function AC1(Q), rather than just one such firm. When the bid falls below p2* in the auction in this case, two bidders (the ones with average cost function AC1(Q)) will remain active. Consequently, the auctioneer will continue to reduce the bid until it reaches p1*.6 The two firms are indifferent between winning the franchise and exiting the bidding when the bid is p1*. If the bid were to decline below p1*, both firms would exit. The auctioneer can award the franchise to either of the two remaining bidders when the bid declines to p1*. By doing so, the auctioneer can ensure that a firm with the lowest cost supplies the service and sets the welfare-maximizing price, p1*. This discussion implies that whether franchise bidding secures the welfare-maximizing outcome depends on whether there is sufficient competition among the most efficient potential suppliers.

Information Advantages of Franchise Bidding

The informational requirements of franchise bidding merit emphasis. When sufficient competition is present at the bidding stage, franchise bidding ensures that the most efficient firm serves customers at a price equal to the firm's average cost of production. In principle, regulation in the market could achieve the same outcome. However, the regulator would need to know the prevailing demand and cost functions in order to implement average cost pricing. The regulator (auctioneer) does not require this information under franchise bidding. Under franchise bidding, competition for the market rather than a regulator's decision in the market ensures average cost pricing. Therefore, at least in principle, franchise bidding can achieve the same outcome as regulation in the market and can do so even when the regulator has substantially less information.

Potential Drawbacks to Franchise Bidding

The discussion to this point has painted a fairly rosy picture of franchise bidding.
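The descending-bid logic just described can be summarized in a short simulation. The sketch below is a minimal implementation under assumed break-even prices (the numbers are illustrative and are not taken from figure 11.2): each firm stays active while the bid is at or above its break-even price, the lowest-cost firm wins, and the final price is approximately the second-lowest break-even price unless an equally efficient rival is present.

```python
def run_descending_auction(breakeven_prices, start=100.0, step=0.01):
    """Modified English (descending-bid) auction for a franchise.

    Each firm stays active while the current bid is at or above its break-even
    price (the price at which its average cost curve crosses the demand curve).
    The auctioneer lowers the bid until at most one firm remains active and
    returns (index of the winning firm, price the winner must charge).
    """
    bid = start
    while True:
        active = [i for i, p in enumerate(breakeven_prices) if bid >= p]
        if len(active) == 1:        # a single active firm wins at the current bid
            return active[0], round(bid, 2)
        if len(active) == 0:        # tie among equally efficient firms: award the franchise
            best = min(range(len(breakeven_prices)), key=breakeven_prices.__getitem__)
            return best, round(bid + step, 2)   # at (approximately) their common break-even price
        bid -= step

# Four firms with different break-even prices (firm 0 is the most efficient).
print(run_descending_auction([20.0, 35.0, 50.0, 65.0]))        # firm 0 wins at roughly 35
# Add a second firm as efficient as firm 0: competition drives the price to roughly 20.
print(run_descending_auction([20.0, 20.0, 35.0, 50.0, 65.0]))
```

The first run reproduces the bad-news case, in which the winning price settles near the second-lowest break-even level rather than the lowest; the second run shows how an equally efficient rival pushes the price down to the most efficient firm's average cost.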
In practice, franchise bidding may not perform as well as it does in the simple setting considered so far. We now turn to several practical problems that can arise when franchise bidding is employed in an attempt to resolve the natural monopoly problem.

The need for information about consumer preferences

The first step in franchise bidding is identifying the characteristics of the service that the selected franchise operator will supply. Such identification is relatively straightforward if the service in question is a standard service with immutable quality attributes. In practice, though, most services have many dimensions of service quality that can be varied. To illustrate, the quality of a cable television service depends not only on the programs that can be viewed but also on the speed with which the service is installed for new customers, the number and duration of service outages that customers experience, the clarity of the broadcast signal, the accuracy of monthly bills, and the courtesy of customer service representatives. When a service entails multiple important dimensions of service quality, the regulator must know how highly consumers value each of the dimensions in order to determine the best combination of features to procure from the franchise operator.

The need to fully specify all dimensions of service quality

If a key dimension of service quality (such as the courtesy of customer service representatives) is difficult to describe and measure accurately, then it may not be clearly specified in the franchise agreement. In such a case, the franchise operator may find it profitable to reduce its costs by supplying little of the relevant dimension of service quality. Franchise bidding can thereby result in the supply of services with relatively low quality.

Inefficiency of franchise fees

Local government officials may view a cable system operator as both a provider of cable services and a ready source of tax revenue. To maximize their chances of reelection, local officials may prefer to raise revenue by taxing a cable system operator rather than increasing the income and property taxes of their constituents. In such settings, a potential operator may be awarded the franchise not solely because it is the most efficient supplier of services that consumers value most highly, but also because it can generate more tax revenue than its rivals. To raise revenue, local governments often impose a fee on a cable system operator that is a fraction of its gross revenue. We now examine how such fees alter outcomes under franchise bidding. The first point to note is that firms will no longer compete the price down to the socially optimal price (the price, call it p*, at which the operator's average cost curve crosses the demand curve in figure 11.3). To earn normal profit after paying the proportional fee, the selected franchise operator must charge a higher price.

Figure 11.3 Franchise Bidding under a Proportional Franchise Fee
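The effect of the fee derived in the next paragraph can also be checked numerically. The sketch below reuses the hypothetical linear demand and cost parameters from the earlier pricing sketch (again, all values are illustrative assumptions) and solves for the break-even bid with and without a fee set as a fraction of gross revenue.

```python
import math

def breakeven_price(A, B, c, F, fee=0.0):
    """Lowest price P at which net revenue covers cost, given inverse demand
    P = A - B*Q, total cost F + c*Q, and a fraction `fee` of gross revenue paid
    to the local government: (1 - fee)*P*Q(P) = F + c*Q(P)."""
    k = 1.0 - fee
    # The zero-profit condition rearranges to k*P^2 - (k*A + c)*P + (c*A + F*B) = 0.
    disc = (k * A + c) ** 2 - 4 * k * (c * A + F * B)
    if disc < 0:
        raise ValueError("costs cannot be covered at any price")
    return ((k * A + c) - math.sqrt(disc)) / (2 * k)

A, B, c, F = 100.0, 1.0, 20.0, 600.0            # hypothetical demand and cost parameters
print(breakeven_price(A, B, c, F))              # ~28.4: the lowest sustainable bid with no fee
print(breakeven_price(A, B, c, F, fee=0.05))    # ~30.1: a 5 percent fee forces a higher bid
```

With these numbers, the 5 percent fee raises the winning bid from about 28.4 to about 30.1 and shrinks output, which corresponds to the welfare loss decomposed into triangle A and rectangle B in the discussion that follows.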

For example, suppose the franchise operator must pay the fraction a of its gross revenue to the local government. The firm's net revenue (after paying the fee) when it sets price P is P[1 − a]D(P), where D(P) is the market demand function. The operator's average net revenue is then P[1 − a], which is depicted in figure 11.3. If competition drives average net revenue to average cost, the winning bid will be the price at which the P[1 − a] curve crosses the average cost curve, and this price exceeds p*. Notice that the franchise operator sells fewer units at this higher price than it would at p*. At that level of output, the firm's average net revenue (after deducting the franchise fee) is equal to its average cost. Compared to the welfare-maximizing outcome, the welfare loss is the sum of triangle A, which is the forgone surplus from producing fewer units, and rectangle B, which is the increase in the cost of producing the units that are still supplied, because average cost increases as the level of production declines. Rectangle C is the franchise fee paid by the firm to the local government and is not counted in the welfare loss calculation, since it is a transfer payment.7

Contractual Arrangements for the Post-bidding Stage

The preceding discussion has assumed that franchise bidding occurs only once. In theory, a single bid might be sufficient to determine the terms of a perpetual franchise arrangement if industry conditions never changed. However, in practice, industry conditions do change, and often in ways that are difficult to predict. For example, input prices and production technologies change over time, causing the average cost curves of potential suppliers to shift. The demand function also changes over time as income, preferences, and the number of consumers in the market change. To allow the price that the franchise operator charges to adjust to changing industry conditions, franchise bidding will have to entail procedures that specify how the terms

of the franchise agreement will change as cost and demand conditions change. Oliver Williamson has identified different types of contracts that might be employed to handle unpredictable changes in industry conditions.8

Recurrent short-term contracts

One approach to handling changing industry conditions is to employ recurrent short-term franchise agreements (or "contracts"). This approach avoids the need to design contracts that account fully for likely future conditions. Periodically, the franchise is put up for auction, at which time a new award is made and a new contract is issued. This approach can be designed to motivate the current franchise owner to honor the terms of the short-term contract under which it operates. It can do so by penalizing a franchise operator that fails to fulfill its current obligations in future rounds of franchise bidding. For example, the offending operator might only be awarded the franchise in the future if it promises to deliver the service at a substantially lower price than other bidders.

Recurrent short-term contracts can be problematic in the absence of ongoing bidding parity. If an incumbent operator enjoys a cost advantage over potential rivals, then the incumbent may be selected to operate the franchise even though the promised service price exceeds the incumbent's average cost of production. A current franchise owner may naturally have an advantage over potential rivals because it has already invested in plant and equipment. In contrast, a new firm would have to undertake this major, costly investment. Therefore, whereas the incumbent firm would be willing to operate at any price above its average variable cost, a new supplier would only be willing to operate at a price above its average total cost. Consequently, even if a new firm has lower costs than the current franchise owner, the current owner may be able to outbid its more efficient rival.

To illustrate this point, consider figure 11.4. Suppose that the new firm's average cost curve is ACN(Q). Further suppose that the incumbent's average cost curve is ACI(Q) and its average variable cost curve is AVCI(Q). Even though the new firm's average cost is less than the incumbent's average cost (ACN(Q) < ACI(Q)), the incumbent will win the bid for the franchise. It will do so by bidding a price just below pN*, the price at which the new firm's average cost curve crosses the demand curve. The new firm is unwilling to operate at this price, because the price is below its average cost of production. However, ongoing operation at a price just below pN* is profitable for the incumbent, because this price exceeds its average variable cost of production.

Figure 11.4 Franchise Bidding at Renewal Time
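A minimal numerical version of the renewal problem in figure 11.4 follows. The per-unit cost figures are hypothetical and are chosen only to show how sunk capital can let a higher-cost incumbent outbid a lower-cost entrant.

```python
# Hypothetical per-unit costs at the relevant output level (illustrative numbers only).
incumbent_avg_total_cost = 30.0      # includes capital that is already sunk
incumbent_avg_variable_cost = 18.0   # what ongoing operation must cover
entrant_avg_total_cost = 25.0        # a new firm must also recover new plant and equipment

# Each bidder's walk-away price in the renewal auction.
incumbent_floor = incumbent_avg_variable_cost   # sunk capital costs are ignored going forward
entrant_floor = entrant_avg_total_cost          # capital costs are still avoidable

winner = "incumbent" if incumbent_floor < entrant_floor else "entrant"
price = max(incumbent_floor, entrant_floor)     # bidding stops just below the loser's floor
print(winner, price)   # incumbent, 25.0: it wins although its total cost (30) exceeds the entrant's (25)
```

The winning price lies below the incumbent's average total cost but above its average variable cost, so the incumbent is willing to operate at that price while the more efficient entrant is not.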

In theory, this problem might be rectified by requiring an incumbent supplier to transfer its capital to any new franchise operator on terms that reflect the value of the capital. In practice, though, it can be difficult to identify the true value of capital equipment. Human capital raises additional concerns. Over time, workers and management acquire valuable knowledge about how to run the company. New franchise operators typically will lack this expertise (but conceivably might acquire it by hiring the personnel of the former incumbent franchise operator).

Resistance to change can provide an additional advantage for the incumbent operator at the time of franchise renewal. Local government officials may favor ongoing operation by the incumbent operator to avoid the blame the officials might face if the new franchise operator fails to perform up to expectations. Thus, an operator that performs reasonably well may enjoy an additional incumbency advantage.

Incomplete long-term contracts

Incomplete long-term contracts are the primary alternative to recurrent short-term contracts. A long-term contract might span fifteen or twenty years, for example. Such contracts are necessarily incomplete in the sense that they cannot specify precisely how every conceivable contingency will be handled. Such specification would be prohibitively cumbersome and costly. Outcomes that are not explicitly accounted for in an incomplete contract typically are handled through negotiation and implicit understandings. One advantage of the long-term contract is that it can enhance the incentive of the franchise operator to invest in long-lived assets. An operator that is assured of being able to reap the benefits of its investments over

a long period of time typically will be relatively willing to undertake the investment. In contrast, a firm that operates under a short-term contract may be reluctant to undertake a long-term investment, fearing that its right to operate the franchise may not be renewed and facing uncertainty about the terms that will govern any transfer of assets. This issue can be particularly important for natural monopolies, because capital investment often accounts for a large portion of costs.

One important disadvantage of a long-term contract is that it can be difficult to write. The contract must specify how price will change in the future as industry conditions change. It is typically difficult to anticipate all possible changes in industry conditions and to specify precisely how the authorized price should vary as conditions change. The contract also must specify precisely the penalties the franchise operator will face if it fails to deliver promised levels of quality. Such specification is less critical under short-term contracts, because the incumbent operator can be penalized at renewal time if it delivers too little service quality.

Opportunistic holdup

Long-term contracts may also encourage the franchise operator to engage in opportunistic holdup. Such holdup arises when the operator extracts beneficial concessions from the local government that enforces the franchise agreement. To illustrate the nature of opportunistic holdup, recall that when an incomplete long-term contract is employed, the local government and the franchise owner typically will have to negotiate prices and quality over time as new information arrives about the cost of providing the service and consumer demand for the service. If cost turns out to be higher than anticipated or demand turns out to be lower than anticipated (reduced demand results in higher average cost in the presence of scale economies), price will have to be raised. Anticipating such contract modifications in the future, a prospective supplier might offer a low price at the bidding stage and then, after winning the franchise, petition for a price increase on the grounds that average cost was underestimated or demand was overestimated. As long as the franchise owner has undertaken some capital investment and the proposed rate increase is not too pronounced, the government officials may grant the proposed increase to avoid another auction and the need for additional investment by a new franchise operator.

Of course, government officials typically have some ability to resist opportunistic holdup by the franchise operator. The officials can threaten to treat the incumbent operator unfavorably at the next auction, for example. Penalties for reneging on contractual obligations also can be specified in the franchise contract. In addition, the officials can be unaccommodating in negotiations with the incumbent operator on other matters that arise during the course of the prevailing franchise agreement.

It is important to note that government officials, too, may engage in opportunistic holdup. Once a firm has made considerable sunk investments, government officials may be in a position to exploit the firm. After incurring fixed, sunk costs, a franchise operator will agree to serve customers as long as it is permitted to charge a price above its average variable cost of production.
Government officials could conceivably exploit this fact by reducing the authorized price toward the firm’s average variable cost, even though the firm would not have incurred the fixed cost if it knew that it would ultimately be compelled to charge a price below average cost. In practice, government officials might conceivably attempt to exploit the franchise operator in this manner by refusing to authorize price increases, even as inflation increases the firm’s operating costs considerably.9 Reputational effects can play an important role in deterring such opportunistic behavior by government

officials. If prospective franchise operators anticipate that government officials will not maintain price above average cost as industry conditions change, the firms may either decline to bid for the right to operate the franchise or bid relatively high prices. Recognizing these detrimental effects of opportunistic behavior, government officials will be reluctant to engage in such behavior.

Assessment of Franchise Bidding

In summary, franchise bidding can be preferable to regulation in the market in stationary environments where consumer preferences are well known, where several potential suppliers have similar cost structures, and where government representatives have very limited information about these cost structures. In such settings, franchise bidding can facilitate the selection of the most efficient supplier and secure low prices for consumers. More generally, though, franchise bidding has its limitations. When industry conditions are subject to substantial change over time, ongoing oversight of the franchise operator's activities is required to ensure that consumers are well served and the industry supplier is not exploited. Therefore, in settings where considerable sunk investment is required and potential suppliers would be reluctant to undertake the investment if they faced significant uncertainty about whether they will be permitted to serve customers for a long period of time, ongoing regulation in the market may be preferable to franchise bidding. The next section reviews the use of franchise bidding in the cable television industry.

Early Regulation of Cable Television

The first cable systems were constructed in the late 1940s for the purpose of improving the reception of broadcast signals sent by local television stations. In this role, cable service was primarily a complement to the service provided by local television broadcasters. By the late 1950s, cable systems began to employ microwave relay stations to import broadcast signals from other regions and offer additional broadcasts to subscribers. By doing so, cable service became a substitute for local broadcasting.

The Communications Act of 1934 created the Federal Communications Commission (FCC) and empowered the Commission to regulate wire and radio communication and television broadcasting. As cable television developed in the 1950s, the FCC imposed little regulatory oversight. The FCC even declined to accept regulatory jurisdiction over cable television on the grounds that cable companies were neither broadcasting entities nor common carriers of wireline communications services. However, in the late 1950s, as cable systems developed into competitive threats to local television broadcasters, the latter began to pressure the FCC to regulate cable television, just as the economic theory of regulation would predict. By 1966, the FCC had implemented substantial regulatory control over cable television. The FCC required cable systems to carry all local television stations and prohibited cable systems from importing any additional broadcast signals in the top one hundred television markets. The FCC ended this prohibition in 1972, but replaced it with a complex set of rules that continued to limit the importation of signals. Even under these burdensome restrictions, however, the number of cable systems and the number of cable subscribers grew at a healthy rate. Between 1965 and 1975, the number of subscribers increased more than sevenfold, from 1.2 million to 8.5 million viewers.
The subscription ratio, which is the ratio of the number of subscribers to the number of households, had risen to 12.4 percent by 1975 from only 2.3 percent ten years earlier.10

The launch of the Satcom I satellite in 1975 provided cable systems with a relatively inexpensive technology for receiving distant programming. At the same time, the FCC was beginning to loosen its restrictions on the importation of signals, and cable systems secured expanded legal authority to compete freely with broadcast television.11 These events allowed cable companies to increase substantially the programming they offered to subscribers. Before 1971, only 6 percent of cable systems offered more than twelve channels of programming. This figure exceeded 50 percent by 1980, while 90 percent of cable subscribers received more than twenty channels in 1992. Today, many cable systems deliver hundreds of channels. The cable television industry experienced explosive growth following these developments (see table 11.1). In the 1970s, less than one third of all households in the United States had access to cable. Now, cable service is available almost ubiquitously. Subscribership soared as availability increased, rising more than fivefold over twenty years, with the number of household subscriptions exceeding 65 million by 2000.12 The growth in revenue for cable companies has been even more pronounced, rising from less than $3 billion in 1980 to almost $100 billion in 2014.

Table 11.1 Cable Television Industry Growth, 1970–2014

Year    Basic Cable Subscribers (millions)    Share of Households with Cable (%)    Average Monthly Basic Cable Rate ($)    Revenue ($ million)
1970     4.5     6.7     5.50       345
1980    16.0    19.9     7.69     2,609
1985    32.0    42.8     9.73     8,831
1990    50.5    56.4    16.78    17,405
1995    60.5    63.4    23.07    23,669
2000    66.3    68.0    30.37    35,963
2005    65.7    67.4    39.63    62,515
2010    61.5    62.5    47.89    88,025
2014    54.3    59.4    54.92    99,328

Source: 1996 U.S. Statistical Abstract, Tables 876, 890, 892; 2003 U.S. Statistical Abstract, tables 1126, 1127, 1144; 2014 U.S. Statistical Abstract, tables 1152, 1166 (Washington, DC: U.S. Bureau of the Census, various years); 2005, 2010, 2014 Television & Cable Factbook (https://warren-news.com/product/television _cable_factbook).

This revenue growth was fueled in part by rising prices for cable services. The rising prices led to calls for explicit regulatory control of cable rates. The resulting regulation is reviewed below.13

Cable Television as a Natural Monopoly

In this section, we analyze the rationale for government intervention in cable television. We first explain the technology of cable systems and then review estimates of economies of density and economies of scale in the cable industry. The evidence suggests that cable television likely is a natural monopoly.

Technological Background

A cable system has three primary components: (1) the headend, (2) the distribution network, and (3) the subscriber interface within the home. Figure 11.5 illustrates the physical design of a modern cable system. The receiver/decoder in the headend receives signals sent by content providers, such as news and entertainment companies, and processes the signals for distribution to subscribers.14 The processed

signals are sent to subscribers over the cable system’s distribution network. Much of the long-distance transport of signals occurs over fiber-optic cable (“the fiber ring”) in modern systems. Once signals near their destination, they are transferred from the fiber ring to the coaxial cable in the local network (at the coaxial drop). The final element of the cable system is the subscriber interface, which receives signals from the local distribution plant and delivers them to televisions in the home (often via set-top boxes if the televisions are not cable ready).

Figure 11.5 Physical Design of a Cable System Source: "Learn about Cable TV Systems: Headend and Modulator," Fiber Optic Telecom Company (December 3, 2015), http://www.fiberoptictel.com/learn-about-cable-tv-systems-headend-and-modulator/.

The capacity of a cable system—the number of channels it can carry—is determined by the characteristics of the distribution plant. Until the early 1980s, most cable systems had a capacity of forty channels. During the 1980s, the typical cable operator offered fifty-four channels over coaxial cable. The advent of fiber-optic cable introduced the potential for greatly expanded capacity. Leading cable systems today offer more than 300 channels.15

The purchase and installation of the distribution plant accounts for a large portion of the nonprogramming costs of supplying cable television service. Trenching and laying cable is expensive (as much as $1,000 per foot, depending on the terrain and geographic location).16 The marginal cost of serving a household located in the geographic region covered by a cable company's distribution plant is relatively low, because such service simply entails installing and maintaining a subscriber interface. Because marginal cost is relatively low and the cost of the distribution plant and the headend is high and largely insensitive to the number of subscribers, a cable system generally experiences a declining average cost of serving customers.

Economies of Density and Scale

An important element of regulatory policy is whether cable companies should be awarded exclusive franchises. The most appropriate industry structure depends on whether, within a given geographic area,

industry costs are minimized when cable systems do not overlap. If economies of density prevail—so the average cost of serving customers in a given region declines as the number of customers in the region increases—then industry costs are minimized when the distribution plants of cable companies do not overlap. To examine the degree of economies of density, we need to estimate the average cost per subscriber for a fixed plant size. There are various measures of plant size, but the most common measures are the number of homes for which cable is available and the number of miles of cable in the system. G. Kent Webb’s study of seventeen cable systems in New Jersey estimated the average cost curve for a system consisting of 1,000 miles of cable and passing one hundred households per mile of cable.17 As figure 11.6 shows, the average cost of serving a subscriber declines as the number of subscribers increases. Therefore, this study found that cable television experiences economies of density. For example, as market penetration (the ratio of subscribers to households passed) doubles from 40 percent to 80 percent, average cost declines by more than 40 percent, from approximately $14 to $8. A related study by Eli Noam used cost data for nearly all of the 4,200 cable systems in operation in 1981.18 He found that a 10 percent increase in the number of subscribers reduced unit cost by about 0.5 percent. Both these studies conclude that cable systems exhibit economies of density. Consequently, industry costs will be minimized when the distribution plants of cable systems do not overlap.

Figure 11.6 Average Total Cost for Cable Television Given Fixed Plant Size (1982 dollars)
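As a rough numerical illustration of economies of density (this is not Webb's cost model), the sketch below spreads a fixed monthly plant cost per mile of cable over whichever share of the passed households subscribes. The parameter values are assumptions chosen only to produce magnitudes in the same ballpark as the figures quoted above.

```python
def avg_cost_per_subscriber(penetration, households_per_mile=100,
                            plant_cost_per_mile=440.0, marginal_cost=3.0):
    """Monthly average cost per subscriber for a fixed distribution plant.
    Plant costs are spread over the households that actually subscribe, so the
    average cost falls as penetration rises (economies of density).
    All parameter values are illustrative assumptions."""
    subscribers_per_mile = penetration * households_per_mile
    return plant_cost_per_mile / subscribers_per_mile + marginal_cost

for pen in (0.2, 0.4, 0.6, 0.8):
    print(f"penetration {pen:.0%}: average cost per subscriber ${avg_cost_per_subscriber(pen):.2f}")
# 20%: $25.00   40%: $14.00   60%: $10.33   80%: $8.50
```

Doubling penetration from 40 percent to 80 percent cuts the illustrative average cost from $14.00 to $8.50, a decline of the same order as the one Webb reports.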

Although these studies indicate that industry costs may be lowest when a single cable company serves a given geographic region, the studies do not specify how industry costs change as the region that a single company serves expands. Webb and Noam have also addressed this issue. Webb found that the cost per subscriber declines slightly as the scale of the cable system expands, holding constant the number of subscribers per mile of cable in the system. Thus, as figure 11.7 illustrates, Webb identified mild economies of scale. Noam estimated the elasticity of cost with respect to the number of homes passed to be −1.02. This means that a 10 percent increase in the size of the cable system (as measured by the number of homes passed by the system) is associated with a 0.2 percent reduction in unit cost.19

Figure 11.7 Average Total Cost for Cable Television Given Fixed Market Penetration (1982 dollars)

Even if more pronounced economies of scale prevailed, there is an important reason the geographic region served by any single cable provider might be limited. To illustrate this reason most clearly, consider the extreme case where a single cable company serves all subscribers in the United States. In this case, any developer of programming content that wished to have its content (for example, new television shows) viewed by cable subscribers would have to negotiate with a single cable company. A company in this position is likely to have substantial bargaining power when negotiating carriage terms with programmers. In particular, the single cable company would have substantial ability to engage in opportunistic holdup of program developers, offering them very little compensation for the programming they created. Program developers may be reluctant to invest heavily in developing innovative, high-quality content if they anticipate little financial return from doing so, due to opportunistic holdup by the cable company. Limited incentive to develop programming can harm cable subscribers by reducing the variety and quality of programming that is available through their cable provider. To limit opportunistic holdup of this type, the FCC has imposed limits on the fraction of U.S. cable subscribers that a single cable company can serve. The courts have expressed reservations about the details of the limits that the FCC imposed, so no formal limits are presently in effect.20 However, the relative bargaining strengths of content developers and content transporters remain an area of concern for regulators. This concern will be revisited in our discussion of Internet "net neutrality" policy in chapter 14.

In summary, the evidence suggests that industry costs will be minimized if cable systems do not overlap. However, industry costs will not rise much if different companies serve subregions of broad geographic areas. Furthermore, limits on the geographic region served by any single cable company can limit the bargaining power that the company enjoys in its negotiations with program developers.

Franchising Process

Before proceeding to assess the performance of franchise bidding, we briefly describe the process that local governments often employ to award franchises. At the start of the franchising process, local governments typically solicit proposals from prospective cable operators. These firms often are cable providers that already operate in other geographic regions. After receiving and reviewing the proposals, the local government often selects a few applicants and asks them to submit formal bids for the right to operate the franchise. The right is usually awarded to one of these bidders. However, if the local government is dissatisfied with all of the bids, the process can begin anew.21

The proposal submitted by an applicant to the local government typically describes the programming services that will be offered to customers and the prices that will be charged for the services. Often, much more information is required. In the state of Massachusetts, the Cable Television Commission requires the franchise proposal to include: (1) duration of the license (not to exceed fifteen years); (2) area(s) to be served; (3) construction schedule; (4) initial rates and charges; (5) amount and type of bond and insurance; (6) criteria to be used in assessing applicant qualifications; and (7) location of any free installations (for example, public schools, police and fire stations, and other public buildings).22

A typical franchising process takes between two and ten years.23 An extreme case occurred in Philadelphia. Four rounds of franchise bidding took place starting in 1966; twenty years later Philadelphia still had not issued a franchise. Thus, the franchise bidding process often is complex and can be extremely time-consuming. Franchise authorities usually employ long-term, nonexclusive contracts to govern the franchise relationship. The typical contract is fifteen years in length. Nonexclusivity gives the local government the right to re-auction the franchise if the government deems the performance of the current franchise operator to be unsatisfactory.

Assessment of Franchise Bidding

Recall that franchise bidding will only serve consumers well if substantial competition prevails at the bidding stage. Recall also from our discussion of rent-seeking behavior that competition must take place over performance measures that consumers value highly—price and quality—rather than measures that benefit local governments but reduce social welfare. The performance of franchise bidding also depends on whether franchise operators implement their proposals faithfully or engage in opportunistic behavior by raising prices above or reducing quality below promised levels.

Competition at the bidding stage

A study of ninety-two franchise bidding cases in Massachusetts between 1973 and 1981 found that the number of applicants ranged from one to seventeen, with an average of 5.2.24 Later evidence revealed a declining number of applicants. In twenty-seven of the largest franchises awarded between 1982 and 1989, the average number of bidders was only 2.9.25 The franchise bidding process in Baltimore brought forth only

one proposal in 1984. Competition for a cable television franchise proceeds along several dimensions, including price and quality. Quality encompasses the number of channels, the programming carried on the channels, and signal quality and reliability. Prospective franchise operators also compete on the franchise fees they deliver to local governments. To limit such competition, the FCC imposed a rule in 1984 that caps the payments that local governments can require of cable operators at 5 percent of gross revenue.

Because of the prevailing restrictions on the financial payments a prospective cable operator can make to local governments, bidders have actively competed in supplying nonprice concessions. One common nonprice concession is the provision of so-called "PEG" channels that can be employed to deliver public, education, and government programming. Other nonprice concessions include complimentary hookups of cable service for schools and government offices, local studios to create programming, and even funding for public parks. It is estimated that basic cable rates would decline by 5.24 percent if all nonprice concessions were eliminated.26 Thus, as our discussion of rent-seeking behavior would suggest, nonprice concessions tend to increase the price of cable service.27

A study by Thomas Hazlett identifies four types of costs associated with franchise bidding in the cable television industry (see table 11.2). He estimates that the cost of uneconomic investment associated with nonprice concessions ranges between $4.81 and $6.00 per subscriber per month. The value of forgone consumer surplus and industry profit as the parties await the conclusion of the bidding process is estimated to be between $0.97 and $2.72 per subscriber per month. Lobbying costs (for example, the costs that bidders incur to hire lobbyists, consultants, and attorneys) are thought to range between $0.71 and $1.71 per subscriber per month. The costs of franchise fees are estimated to be between $1.29 and $1.88 per subscriber per month. All four types of costs combined are estimated to be between $7.78 and $12.31 per subscriber per month, which constitutes roughly one-third of a typical cable company's gross revenue. Although these estimates are imprecise, they suggest that the franchise bidding process may impose substantial social costs.

Table 11.2 Politically Imposed Costs of Franchise Monopoly (dollars per month per subscriber)

Cost category            Low estimate    Midpoint estimate    High estimate
Uneconomic investment    4.81            5.41                 6.00
Delay                    0.97            1.84                 2.72
Lobbying                 0.71            1.21                 1.71
Franchise fees           1.29            1.58                 1.88
Total costs              7.78            10.04                12.31

Source: Thomas W. Hazlett, “Private Monopoly and the Public Interest: An Economic Analysis of the Cable Television Franchise,” University of Pennsylvania Law Review 134 (July 1986): 1335–1409.

Performance after the initial award

In assessing the performance of franchise bidding, it is important to consider the incidence of opportunistic holdup. There is anecdotal evidence of opportunistic holdup by franchise operators. To illustrate, the winning proposal for the Milwaukee franchise in 1982 called for a 108-channel system at a monthly basic service price of $4.95. Not long after the award was issued, the franchise winner renegotiated its contract and instead installed a 54-channel system and charged $11.95 monthly for basic service.28

The systematic evidence on opportunistic holdup is mixed. On the one hand, cable franchises were renegotiated in twenty-one of the thirty largest television markets, including eight before the franchise

operator even wired any homes.29 On the other hand, a survey of the trade press during the 1980–1986 time period revealed only sixty cases in which cable operators appeared to have reneged on their contracts in a sample of more than 3,000 cases.30 In addition, a survey of cable television franchises in Massachusetts between 1973 and 1984 found that construction schedules and quality levels were generally consistent with the accepted proposals.31 Furthermore, the average time between the issuance of the franchise and the first rate increase was almost thirty-three months. The average time between the first rate increase and the second rate increase was more than twenty-nine months.32 On average, cable operators waited for a considerable period before requesting a rate increase. Furthermore, of sixty-two franchises, forty-nine experienced a decrease in the real price of cable service between December 1975 and June 1984, while only thirteen had a real increase in the price of cable service.33

Incumbent franchise operators fare well at the renewal stage. One study found that of 3,516 refranchising decisions, there were only seven in which the local government replaced the incumbent franchise operator.34 This finding could indicate either that local governments usually are pleased with the performance of franchise operators or that incumbents typically face little meaningful competition at the renewal stage. To try to distinguish between these two possibilities, Mark Zupan examined fifty-nine randomly chosen renewal agreements between 1980 and 1984 and compared the terms of the initial contract and the renewal contract. His findings, which are summarized in table 11.3, suggest that the renewal contract tends to be more favorable than the initial contract for the franchise operator. On average, channel capacity declines by nine channels and the number of community (PEG) channels declines by 0.8. In addition, the basic price per channel offered increases by $0.01 and the monthly price for the lead pay (premium) channel increases by $1.13. However, the monthly price for basic service declines by $0.35 and the franchise fee rises by 0.2 percent.

Table 11.3 Deviation between Terms of Initial Contract and Renewal Contract

Term of Trade                       Average for Initial Contract Sample    Estimated Deviation in Renewal Sample
Channel capacity                    46.4                                   9 fewer channels
Franchise fee                       2.9%                                   0.2% higher
Community channels                  2.8                                    0.8 fewer channels
Basic system price (monthly)        $9.35                                  $0.35 lower
Basic price per channel offered     $0.52                                  $0.01 higher
Lead pay channel price (monthly)    $9.51                                  $1.13 higher*

Source: Mark A. Zupan, “Cable Franchise Renewals: Do Incumbent Firms Behave Opportunistically?” RAND Journal of Economics 20 (1989): 473–482. *The difference is statistically significant at the 0.05 level in a one-tailed test.

Rate Regulation Rate regulation in the cable industry has taken many twists and turns. In the early 1980s, it appeared that the cable television industry might be on a path toward little or no price regulation. Deregulation began with the federally mandated elimination of price controls over pay channels in 1979. The Cable Communications Policy Act of 1984 then deregulated the prices charged for basic television service. However, the Cable Television Consumer Protection and Competition Act of 1992 reversed course by reintroducing rate regulation. Four years later, the Telecommunications Act of 1996 instituted a plan to implement full price deregulation by 1999.35 We now review this experience in more detail.

Rate deregulation, 1984–1992 The Cable Communications Policy Act of 1984 prohibited federal, state, and local regulation of rates for most basic cable television services.36 Continued regulation was authorized only where effective competition was absent. The FCC considered a cable company to face effective competition if it competed with three or more over-the-air television stations. This meant that only 3 percent of the cable systems in the United States were subject to rate regulation. In addition to prohibiting local authorities from controlling cable rates, the 1984 act constrained competition. It mandated that all cable systems be franchised by local governments, thereby eliminating the threat of unauthorized entry by a second cable operator. The 1984 act also limited the competition that incumbent operators faced at the time of the franchise renewal. The act stipulated that local governments must renew a cable company’s franchise contract unless the company had violated its franchise agreement. Even in that case, the company must be afforded adequate opportunity to rectify the situation. The act also codified the FCC’s 1970 ban on local telephone companies providing cable service in their operating territories. Cable rates were deregulated in December 1986. Steady rate increases in excess of the rate of inflation ensued (see figure 11.8). By 1991, basic cable rates had risen 36.5 percent in real terms. Some of the rate increases likely reflected the fact that the average basic service package expanded from twenty-nine to thirty-seven channels during this period. Furthermore, cable operators increased their spending on basic programming from approximately $300 million to over $1 billion. Therefore, price and quality both increased following the deregulation of cable rates.37

Figure 11.8 Real Cable Service Rates, 1984–1995 Source: Thomas W. Hazlett and Matthew L. Spitzer, Public Policy toward Cable Television: The Economics of Rate Controls (Cambridge, MA: MIT Press and AEI Press, 1997).

A study by Robert Rubinovitz, an economist at the Antitrust Division of the U.S. Department of Justice, estimated the impact of the 1984 act on quality-adjusted rates.38 He compared the prices and characteristics of basic cable packages in 1984 (the regulated benchmark) with those in 1990 (the unregulated benchmark). During this period, the real price of cable service increased by 42 percent. Rubinovitz estimated that roughly half of this increase reflected increases in service quality (including the number of channels supplied). To fully assess how a simultaneous increase in rates and service quality affects consumers, one must analyze both the reduction in consumer surplus due to the higher rates and the increase in consumer surplus due to the increased number of channels. Figure 11.9 depicts this analysis. Suppose that the additional channels cause consumer demand for cable service to increase from D(P) to a new, higher demand curve. Further suppose that the price of cable service increases from PL to PH. For simplicity, assume that the total number of cable subscriptions does not change. Consumer surplus initially is given by the area of triangle PLbc. Consumer surplus after the increase in price and quality is given by triangle PHad. This change reflects higher expenditure as measured by rectangle PLbaPH, and higher value from enhanced quality as measured by parallelogram cbad. Removing the area that is common to these two figures, the net change in consumer surplus is the difference between area A and area B. Using data from June 1991 to March 1993, a study found these two measures to be of roughly equal magnitude. Therefore, the reduction in consumer surplus due to the price increase is roughly the same as the increase in consumer surplus due to the increased quality associated with additional channels.39

Figure 11.9 Welfare Analysis of Higher Cable Rates and More Cable Channels
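The logic of figure 11.9 can be illustrated with a short numerical sketch. The Python fragment below uses purely hypothetical linear demand curves (not the data from the studies cited above), chosen so that the number of subscriptions is unchanged, as the text assumes; it computes the loss from the higher price, the gain from the quality-induced demand shift, and the net change in consumer surplus.

# Hypothetical inverse demands: P = a - b*Q before and after a quality improvement
a_old, a_new, b = 40.0, 48.0, 0.5       # demand intercepts and (common) slope
p_low, p_high = 20.0, 28.0              # basic service price before and after

def quantity(a, p):                     # quantity demanded at price p
    return (a - p) / b

def consumer_surplus(a, p):             # triangle under the inverse demand curve above p
    q = quantity(a, p)
    return 0.5 * (a - p) * q

q_old = quantity(a_old, p_low)          # equals quantity(a_new, p_high): subscriptions unchanged
q_new = quantity(a_new, p_high)
cs_old = consumer_surplus(a_old, p_low)
cs_new = consumer_surplus(a_new, p_high)

extra_expenditure = (p_high - p_low) * q_new     # rectangle in figure 11.9 (a loss to consumers)
quality_value = (a_new - a_old) * q_new          # parallelogram between the demand curves (a gain)
print(f"consumer surplus before: {cs_old:.0f}, after: {cs_new:.0f}, net change: {cs_new - cs_old:.0f}")
print(f"price effect (loss): {extra_expenditure:.0f}; quality effect (gain): {quality_value:.0f}")

With these particular numbers the two effects exactly offset, mirroring the finding that the price and quality effects were of roughly equal magnitude.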

Rate reregulation, 1992–1994 Rising cable rates brought pressure on regulators and legislators to reinstitute some form of regulation. In June 1991 the five FCC commissioners voted unanimously to reinstate local rate regulation for many cable television operators. Then Congress passed the Cable Television Consumer Protection and Competition Act of 1992. This act required that rates for basic cable services be regulated either by the franchising authority or by the FCC. The FCC reduced rates by 10 percent in September 1993 and by an additional 7 percent in July 1994. Economist Thomas Hazlett described the progression from the 1984 act to the 1992 act: While the profitability of the cable industry has never been higher, its political vulnerability has never been more evident— and the two events are connected. By fortuitously steering themselves through the regulatory maze to arrive at the bliss point of a legally protected but unregulated monopolist [as created by the 1984 Act], cable companies have traded friends for wealth in the political game.40

As figure 11.8 demonstrates, real cable rates fell during 1992–1993.41 Despite this fact, growth in the number of basic cable subscribers declined. However, subscriptions to premium services (whose rates were not regulated) increased. These outcomes suggest that cable system operators circumvented the intent of the 1992 act by watering down basic cable services. This action was anticipated by John Malone, CEO of the largest cable system operator, TCI: These noxious FCC rules are not going to be able to constrain the economics in entertainment very long. What’s gonna happen is there’ll be a shift from basic to [unregulated] à la carte services.… We’ll continue to diversify away from the regulated government-attacked core.42

Rate deregulation, 1994–present The experience between 1992 and 1994 suggests that regulators have limited ability to effectively control quality-adjusted prices in the cable industry. Perhaps in part for this reason, the FCC began to effectively deregulate cable rates in 1994. Although cable rates rose in response, growth in subscriptions to both basic cable service and premium channels accelerated. This experience suggests that consumers were not averse to paying higher prices in return for increased quality. The Telecommunications Act of 1996 further deregulated pricing in the cable industry. The act mandated an immediate deregulation of cable rates for small cable systems, which served about 20 percent of cable households. A three-year phaseout of rate regulation was implemented for larger cable systems. Furthermore, if a cable system operator faced direct competition from a local telephone company in the provision of cable services, then deregulation could occur immediately. This act also facilitated industry entry by eliminating the cross-ownership restriction imposed by the FCC in 1970 (and codified in the 1984 act), which prohibited a local telephone company from providing cable service in the geographic area where it provided telephone service. In fact, though, local telephone companies generally refrained from providing cable television services for many years. Perhaps in part as a result of limited competitive pressure, the average price of expanded basic cable service increased from $29.41 per month to $36.47 between 1999 and 2002, an annual rate of increase of about 7.4 percent. By comparison, the general price level rose only 2.6 percent annually during this period.43 The Limits of Government Regulation

Although cable television may be a natural monopoly, recent experience raises questions about whether government intervention in general and franchise bidding in particular can protect consumers adequately in the industry.44 When regulators mandate reductions in basic cable rates, a cable company can reduce the number of channels in the basic service package and/or shift the most popular channels from basic to premium (unregulated) tiers. Even if regulators mandate a reduction in the average price per channel of basic service, the cable company can simply replace the shifted channels with lower-quality channels. Lower rates do not necessarily benefit consumers if cable companies reduce service quality in response to mandated rate reductions or prohibitions against substantial rate increases. Regulating quality is intrinsically difficult, and in addition, cable regulators face legal barriers to controlling programming content because cable operators have First Amendment rights as “electronic publishers.” Competition among Suppliers of Video Services Given the potentially limited role that government intervention can play, competition is likely the primary source of consumer protection. Fortunately, industry competition from several sources has increased substantially in recent years. Telephone companies have begun to expand their supply of video programming, and direct broadcast satellite (DBS) suppliers now offer service to almost all locations in the United States. Furthermore, online video distributors (OVDs) such as Netflix, Hulu, and Amazon have begun to expand their operations in recent years. Even when it has been awarded an exclusive cable franchise and so serves as a monopoly supplier of cable services in a specified region, the cable operator is seldom the exclusive supplier of video services. The primary DBS suppliers, AT&T-DIRECTV and DISH Network, provide consumers with an alternative to their cable provider. Table 11.4 suggests that many consumers view satellite television as a substitute for cable television. The table reports that as of December 2013, DIRECTV and DISH Network each had more subscribers than every cable company, with the exception of Comcast.

Table 11.4
Multichannel Video Subscribers, as of December 2013

Supplier                  Number of Subscribers (millions)
Comcast                   21.7
DIRECTV                   20.3
DISH Network              14.1
Time Warner Cable         11.2
AT&T (U-verse)             5.5
Verizon (Fios)             5.3
Charter                    4.2

Source: United States Federal Communications Commission, Annual Assessment of the Status of Competition in the Market for the Delivery of Video Programming, Sixteenth Report, MB Docket 14–16, Washington, DC, April 2, 2015.

Table 11.4 also documents the substantial inroads that telephone companies have begun to make in the supply of video programming. As of December 2013, Verizon’s Fios service had 5.3 million subscribers and AT&T’s U-verse service had 5.5 million subscribers. Verizon and AT&T only began offering these services in 2005 and 2006, respectively. As of 2013, each of these services had more subscribers than the third largest cable operator, Charter Communications.45 Some consumers also have begun to substitute the services of OVDs for cable services. Some (“cord

cutters”) have completely replaced cable services with OVD services. Others (“cord shavers”) have reduced their purchases of premium cable packages and pay-per-view services, replacing them with OVD services. In part due to the actions of cord cutters, the cable industry as a whole experienced its first-ever year-over-year decline in subscribership in 2013 (from 56.5 million to 54.4 million subscribers).46 The decline in cable subscribership has not been accompanied by price reductions. The average price of basic cable service increased by 6.5 percent and the average price of expanded basic service increased by 5.1 percent in 2013. These price increases largely reflect increased programming costs for the cable companies. Summary In principle, franchise bidding can provide an attractive resolution to the natural monopoly problem. In the presence of sufficient competition among potential suppliers, franchise bidding can ensure that the most efficient supplier serves customers at a price that reflects the lowest possible average cost of production. In theory, franchise bidding can achieve this ideal outcome even when the regulator has limited knowledge of prevailing industry conditions. In practice, franchise bidding often fails to achieve the ideal outcome. Limited competition among potential suppliers can raise the price of the winning bid well above the average cost of production. Opportunistic holdup by the selected franchise operator can further increase the actual price above the bid price. Efforts by local governments to extract franchise fees from operators also can increase the prices charged to consumers. Furthermore, even intense bidding on price can produce low levels of service quality. Consequently, franchise bidding is unlikely to constitute a panacea for the natural monopoly problem in practice. The cable industry has experienced a variety of problems under franchise bidding. However, cable companies are no longer the sole suppliers of multichannel video programming even in regions where they face no effective competition from television broadcasters and other cable providers. Satellite (DBS) suppliers have imposed some competitive discipline on cable operators since the 1990s, and telephone companies and online video distributors have begun to offer some competitive discipline in certain geographic regions in recent years. Competitive pressures seem likely to increase in the future and spread to other venues, as consumers begin to view more video programming on their computers and smartphones rather than exclusively on their televisions. Questions and Problems 1. Do you think social welfare is likely to be higher under a regulated private enterprise or under a public enterprise? Does your answer depend on the industry in question? Explain your answers fully. 2. The privatization of public enterprises is an important policy issue in many countries. Should all public enterprises be privatized? Of those privatized, which should be regulated? Explain your answers fully. 3. Why do public enterprises tend to have higher costs than private enterprises? Is this reason sufficient to always prefer regulated private enterprises to public enterprises? 4. Compare public enterprise and franchise bidding. Discuss the advantages and disadvantages of each. If you had to choose one of these two alternatives for all natural monopolies, which alternative would you choose? 5. Do you think a regulated private enterprise or a public enterprise is more likely to adopt a cost-reducing innovation quickly?
What about a product-improving innovation? 6. In 1990, Britain privatized its electric power industry by converting government enterprises into privately owned

enterprises. The average salary for the chief executives of Britain’s twelve regional electricity distribution companies increased nearly threefold in the two years following industry privatization.47 The salary increases arose even though the identities of the chief executives largely remained the same. What do you think was the reason for this large increase in salary? (During these two years, average company revenue was fairly constant, but the average number of employees declined significantly and the value of the firm’s assets increased significantly.) 7. Compare the following three methods of franchise bidding: a. Recurrent short-term contracts. b. Long-term contracts. c. Recurrent short-term contracts where the local government owns the capital. 8. In practice, almost all cable television franchises are renewed. a. Do you think this fact is evidence that franchise owners are performing in a satisfactory way, or that competition at renewal time is weak? b. How would you go about determining which hypothesis is correct? 9. A local government plans to auction off a franchise to supply service to the town. The (inverse) demand curve for the service is P(Q) = 100 − Q, where P denotes price, and Q denotes quantity. Only two firms, firm 1 and firm 2, are qualified to compete for the franchise. Firm 1’s cost function is C1(Q) = 10Q; firm 2’s cost function is C2(Q) = 20Q. The franchise is auctioned off using a modified English auction, where the firm that promises to supply the service at the lowest price wins the franchise. Which firm will win the franchise? What will the winning bid be? 10. Continuing with question 9, suppose the local government decides to award the franchise to the firm that promises the largest up-front payment to the government. The selected franchise operator is permitted to charge its preferred price for the service. The government employs an English auction, awarding the franchise to the firm that promises the largest payment. a. Which firm will win the franchise? b. What will the winning franchise fee be? c. What price will the selected franchise operator charge for the service? Hint: Calculate the monopoly profit for each bidder. The marginal revenue curve is MR(Q) = 100 − 2Q. 11. Which of the two methods described in questions 9 and 10 should a local government use if its objective is to: a. Maximize consumer welfare. b. Maximize government revenue. c. Maximize the number of consumers who buy the service. Can you propose a better method for awarding the franchise if the government’s objective is to maximize consumer welfare? 12. In 1984, the FCC imposed a rule that limits to 5 percent of gross revenue the payment that local governments can require of cable operators. What was the likely rationale for this rule? Do you think this rule enhanced consumer welfare? 13. In 1984, Congress prohibited the regulation of the price of cable services. In response to the passage of this legislation, the market value of a cable system was largely unchanged in the top 100 broadcast markets, but generally increased in other markets. Explain these distinct changes in market values. 14. Should free entry of cable operators be permitted? Would free entry be sufficient to end the need for any regulation of cable television operators?

Notes 1. This is the price corresponding to the level of output at which marginal revenue is equal to marginal cost. For expositional clarity, the marginal revenue and marginal cost curves are not shown in figure 11.1. 2. See Dennis L. Weisman and Johannes P. Pfeifenberger, “Efficiency as a Discovery Process: Why Enhanced Incentives

Outperform Regulatory Mandates,” Electricity Journal 16 (January–February 2003): 55–62. 3. See Oliver Hart, “Incomplete Contracts and Public Ownership: Remarks, and an Application to Public-Private Partnerships,” Economic Journal 113 (March 2003): C69–C76. 4. Harold Demsetz, “Why Regulate Utilities?” Journal of Law and Economics 11 (April 1968): 55–65. 5. For additional information on this and other auction forms, see, for example, Paul Klemperer, “Auction Theory: A Guide to the Literature,” Journal of Economic Surveys 13 (July 1999): 227–286; David Lucking-Reiley, “Vickrey Auctions in Practice: From Nineteenth-Century Philately to Twenty-First-Century E-Commerce,” Journal of Economic Perspectives 14 (Summer 2000): 183–192; and Paul Milgrom, Putting Auction Theory to Work (Cambridge: Cambridge University Press, 2004). 6. This analysis assumes that the two bidders do not collude to try to keep the bid above the zero-profit (average cost) level. For example, one firm could agree to drop out of the bidding when the bid declines to some price above average cost, in exchange for a cash payment. This collusion would allow both firms to earn higher profit than they would secure if the bid declined to the average cost level. The inability to collude is an essential condition if franchise bidding is to achieve a socially desirable outcome. 7. We thank James Prieger for correcting an error that appeared in previous editions. 8. Oliver E. Williamson, “Franchise Bidding for Natural Monopolies—In General and with Respect to CATV,” Bell Journal of Economics 7 (Spring 1976): 73–104. 9. For additional discussion of opportunistic holdup, see Patrick W. Schmitz, “The Hold-Up Problem and Incomplete Contracts: A Survey of Recent Topics in Contract Theory,” Bulletin of Economic Research 53 (January 2001): 1–17. 10. Thomas W. Hazlett, “Cabling America: Economic Forces in a Political World,” in Cento Veljanovski, ed., Freedom in Broadcasting (Washington, DC: Institute of Economic Affairs, 1989). 11. See Home Box Office Inc. v. Federal Communications Commission, 567 F. 2d 9 (1977). 12. As table 11.1 indicates, cable subscriptions have begun to decline in recent years as some households replace subscriptions to cable television with subscriptions to the services provided by online video distributors, such as Hulu and Netflix. 13. For more background on the history of regulation in the cable television industry, see Stanley M. Besen and Robert W. Crandall, “The Deregulation of Cable Television,” Law and Contemporary Problems 44 (Winter 1981): 77–124; Bruce M. Owen and Paul D. Gottlieb, “The Rise and Fall and Rise of Cable Television,” in Leonard W. Weiss and Michael W. Klass, eds., Regulatory Reform: What Actually Happened (Boston: Little, Brown, 1983), pp. 78–104; and Thomas W. Hazlett, “Private Monopoly and the Public Interest: An Economic Analysis of the Cable Television Franchise,” University of Pennsylvania Law Review 134 (July 1986): 1335–1409. 14. The receiver/decoder effectively functions as an antenna, which helps explain why cable systems were originally referred to as community-antenna television, or CATV. Modern headends also have the capacity to store signals and schedule their subsequent distribution. 15. United States Federal Communications Commission, Annual Assessment of the Status of Competition in the Market for the Delivery of Video Programming, Sixteenth Report, MB Docket 14–16, Washington, DC, released April 2, 2015. 16. See U.S. 
Department of Transportation, Office of the Assistant Secretary for Research and Technology (http://www.itscosts.its.dot.gov/its/benecost.nsf/DisplayRUCByUnitCostElementUnadjusted? ReadForm&UnitCostElement=Fiber+Optic+Cable+Installation+&Subsystem=Roadside+Telecommunications+). 17. G. Kent Webb, The Economics of Cable Television (Lexington, MA: Lexington Books, 1983). 18. Eli M. Noam, “Economies of Scale and Regulation in CATV,” in Michael A. Crew, ed., Analyzing the Impact of Regulatory Change in Public Utilities (Lexington, MA: Lexington Books, 1985), pp. 95–110. 19. For more recent evidence on scale economies, see Stephen M. Law and James F. Nolan, “Measuring the Impact of Regulation: A Study of Canadian Basic Cable Television,” Review of Industrial Organization 21 (November 2002): 231– 249. 20. See, for example, John Eggerton, “Court Throws Out FCC’s Cable Subscriber Cap,” Broadcasting & Cable, August 28, 2009 (http://www.broadcastingcable.com/news/washington/court-throws-out-fccs-cable-subscriber-cap/564 20). 21. For case studies of franchise bidding, see Webb, Economics (for an analysis of the Philadelphia market), Williamson,

“Franchise Bidding” (for an analysis of the Oakland market), and Robin A. Prager, “Franchise Bidding for Natural Monopoly: The Case of Cable Television in Massachusetts,” Journal of Regulatory Economics 1 (1989): 115–131. 22. Hazlett, “Private Monopoly.” 23. Prager, “Franchise Bidding,” p. 118. 24. Ibid. 25. Mark A. Zupan, “The Efficacy of Franchise Bidding Schemes in the Case of Cable Television: Some Systematic Evidence,” Journal of Law and Economics 22 (October 1989): 401–456. 26. Ibid. 27. For a study that explores which aspects of a proposal are conducive to winning a franchise, see Phillip A. Beutel, “City Objectives in Monopoly Franchising: The Case of Cable Television,” Applied Economics 22 (September 1990): 1237–1247. 28. Thomas W. Hazlett, “Wiring the Constitution for Cable” Regulation (1988): 30–34. For additional anecdotal evidence, see Thomas W. Hazlett, “Franchise Bidding and Natural Monopoly: The Demsetz Solution on Cable,” mimeo, University of California–Davis, June 1989. 29. Hazlett, “Wiring the Constitution.” 30. Zupan, “Efficacy.” 31. Prager, “Franchise Bidding.” 32. Ibid. 33. Ibid. 34. Mark A. Zupan, “Cable Franchise Renewals: Do Incumbent Firms Behave Opportunistically?” RAND Journal of Economics 20 (Winter 1989): 473–482. 35. For estimates of the effect of deregulation on the value of cable franchises, see Adam B. Jaffee and David M. Kantner, “Market Power of Local Cable Television Franchises: Evidence from the Effects of Deregulation,” RAND Journal of Economics 21 (Summer 1990): 226–234. 36. For an overview of the regulation of the cable industry, see Thomas W. Hazlett and Matthew L. Spitzer, Public Policy toward Cable Television: The Economics of Rate Controls (Cambridge, MA: MIT Press and AEI Press, 1997). 37. Thomas W. Hazlett, “Cable TV Reregulation: The Episodes You Didn’t See on C-Span,” Regulation 2 (1993): 45–52. 38. Robert N. Rubinovitz, “Market Power and Price Increases for Basic Cable Service since Deregulation,” RAND Journal of Economics 24 (Spring 1993): 1–18. 39. T. Randolph Beard, Robert B. Ekelund Jr., George S. Ford, and Richard S. Saba, “Price-Quality Tradeoffs and Welfare Effects in Cable Television Markets,” Journal of Regulatory Economics 20 (September 2001): 107–123. 40. Thomas W. Hazlett, “Should Telephone Companies Provide Cable TV?” Regulation (Winter 1990): 72–80. 41. This discussion is based on Thomas W. Hazlett, “Prices and Outputs under Cable TV Reregulation,” Journal of Regulatory Economics 12 (September 1997): 173–195. 42. Quoted in ibid, p. 176. 43. One study estimates that higher programming fees account for approximately 42 percent of this increase. See William P. Rogerson, “Correcting the Errors in the ESPN/Cap Analysis Study on Programming Cost Increases,” November 11, 2003 (www.makethemplayfair.com/docs/CorrectingErrors.pdf). 44. For a discussion of these issues, see Thomas W. Hazlett, “Duopolistic Competition in CATV: Implications for Public Policy,” Yale Journal on Regulation (Winter 1990): 65–119. 45. Federal Communications Commission, “Annual Assessment,” Washington, DC, 2015. 46. ibid. 47. Catherine Wolfram, “Increases in Executive Pay Following Privatization,” Journal of Economics and Management Strategy 7 (Fall 1998): 327–361.

12 Optimal Pricing

Chapter 11 reviewed some potential drawbacks to public enterprise and franchise bidding as resolutions to the natural monopoly problem. This chapter begins our discussion of the third, and more pervasive, alternative—price regulation in the market. The ensuing discussion explains in detail how a regulator should set prices for the services supplied by a natural monopoly in order to secure the highest possible level of consumer surplus while ensuring the supplier’s financial solvency. Thus, this chapter takes a normative view of regulation, investigating how prices should be set rather than how they are set in practice. This chapter also focuses on pricing principles. Subsequent chapters will consider the extent to which the principles derived here are applied in practice. The discussion of optimal pricing begins by considering a setting where the regulated (natural monopoly) supplier produces only a single product. Then the more common setting where the regulated enterprise (for example, a railroad) sells several products (such as transport of different commodities to and from different locations) is explored. To set the stage for the latter discussion, we begin by determining when a multiproduct natural monopoly prevails. Subadditivity and Multiproduct Monopoly Firms that supply only a single product or service are rare in practice. To illustrate, electric utilities supply high-voltage and low-voltage power, telephone companies provide local and long-distance services, and railroads transport many different types of commodities to and from many different locations. Therefore, it is important to understand the natural monopoly issue in settings where firms supply multiple services. A natural monopoly exists in the multiproduct setting when it is less expensive to produce a group of services in a single firm than to have multiple distinct firms produce the services. When a natural monopoly prevails, the prevailing cost function is said to be subadditive.1 The concept of subadditivity is illustrated most readily by beginning with the special case where firms supply only one product. In the setting of figure 12.1, the average cost of production declines as output increases for all levels of output below Q′. The average cost increases as output increases for output levels above Q′. Therefore, industry cost is minimized when a single firm serves all customers if, in total, they demand fewer than Q′ units of output.

Figure 12.1 Economies of Scale, up to Output Q′

To determine how to produce outputs above Q′ at minimum cost, consider the minimum average cost function for two firms, which is labeled AC2 in figure 12.2. The average cost curve for a single firm, reproduced from figure 12.1, also appears as curve AC in figure 12.2. The curve AC2 is obtained from AC in the following manner. To ensure that the total output is produced at minimum cost, all firms must produce the same amount of output and have the same marginal cost. If this were not the case, then output could be shifted from the firm with the highest marginal cost to the firm with the lowest marginal cost, thereby reducing total industry cost. Consequently, for any given level of output, the point on the AC curve provides the average cost that will prevail when two firms produce twice that level of output. To illustrate, consider the output Q′ at which average cost is minimized on curve AC. As point M in figure 12.2 indicates, a single firm can produce output Q′ at average cost m. Consequently, two firms can produce twice this level of output, 2Q′, at precisely the same average cost (m), as indicated by point M′ on curve AC2 in figure 12.2.

Figure 12.2 Minimum Average Cost Curve for Two Firms, AC2
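The construction of AC2 from AC can be made concrete with a small numerical sketch. The Python fragment below assumes a hypothetical single-firm cost function, C(Q) = 100 + 2Q + 0.5Q^2 (not taken from the text), whose average cost curve is U-shaped; it locates Q′, the output that minimizes single-firm average cost, and Q*, the output at which one-firm and two-firm production become equally costly.

import math

def avg_cost_one_firm(q):
    # Hypothetical U-shaped average cost: AC(Q) = 100/Q + 2 + 0.5*Q
    return 100.0 / q + 2.0 + 0.5 * q

def avg_cost_two_firms(q):
    # Least-cost two-firm production splits total output equally, so AC2(Q) = AC(Q/2)
    return avg_cost_one_firm(q / 2.0)

q_prime = math.sqrt(100.0 / 0.5)   # output minimizing single-firm average cost (Q')
# Q*: the first output at which a single firm stops being cheaper than two firms
q_star = next(q / 100.0 for q in range(1, 10000)
              if avg_cost_one_firm(q / 100.0) > avg_cost_two_firms(q / 100.0))
print(f"Q' (minimum of AC)  = {q_prime:.1f}")
print(f"Q* (AC crosses AC2) = {q_star:.1f}")   # one firm is cheaper for all outputs below Q*

For this cost function, Q′ is about 14.1 and Q* is 20, so a single firm remains the least-cost supplier over a range of outputs beyond the minimum point of its average cost curve, as figure 12.2 illustrates.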

The intersection of AC and AC2 at output Q* defines the range of outputs for which subadditivity prevails. A single firm can produce all outputs below Q* at lower cost than can multiple firms (because AC lies below AC2 for all such outputs). Therefore, the industry cost function is subadditive for outputs less than Q*. Diseconomies of scale (that is, rising average costs) prevail when output is between Q′ and Q*. However, industry production costs are lowest when a single firm produces output in this range. This observation explains why economies of scale (declining average cost) are not necessary for a natural monopoly to prevail in the case where firms produce a single product. However, the presence of economies of scale is sufficient to ensure that a natural monopoly prevails for all output levels when only a single product is produced. Now consider the case of primary interest: Firms produce multiple products. A natural monopoly prevails in this setting if the industry cost function is subadditive, so that whatever combination of outputs is produced (say, eighty-five cars and sixty-three trucks, or twenty-five cars and seventy-eight trucks), industry costs are minimized when the outputs are produced by a single firm. Economies of scale are neither necessary nor sufficient for costs to be subadditive when firms produce multiple products. This is the case because the concept of scale economies ignores potentially relevant interactions among the products being produced. In particular, the concept ignores economies of scope, which prevail when it is less costly to produce products together than in isolation.2 For example, economies of scope prevail when it is less expensive to produce eighty-five cars and sixty-three trucks in a single firm than it is to have one firm produce eighty-five cars and a different firm produce sixty-three trucks. Similarly, economies of scope arise in the provision of local telephone calls and long-distance telephone calls, because both products can be supplied using the same switching and transmission equipment. In a multiproduct setting, economies of scale prevail when total cost increases by less than a specified percentage when the production of all outputs is increased by that percentage. The following cost function exhibits economies of scale but is not subadditive:3

C(Q1, Q2) = Q1 + Q2 + (Q1Q2)^(1/3).  (12.1)

To see why this cost function exhibits economies of scale, notice that when each output is increased by 10 percent (so 1.1 Q1 units of the first commodity and 1.1 Q2 of the second commodity are produced rather than Q1 units of the first commodity and Q2 of the second commodity), total cost is

C(1.1Q1, 1.1Q2) = 1.1Q1 + 1.1Q2 + (1.1Q1 × 1.1Q2)^(1/3) = 1.1(Q1 + Q2) + (1.1)^(2/3)(Q1Q2)^(1/3).

This total cost is less than 110 percent of the cost of producing Q1 units of the first commodity and Q2 of the second commodity, which is

1.1 × C(Q1, Q2) = 1.1(Q1 + Q2) + 1.1(Q1Q2)^(1/3),

because (1.1)^(2/3) ≈ 1.066 is smaller than 1.1.

Although this cost function exhibits economies of scale, it entails diseconomies of scope that outweigh the economies of scale, so costs are not subadditive. To see why, observe that the third term in the cost function in equation 12.1 implies that when the outputs are produced together, production costs are higher than when the outputs are produced separately. If, for example, firm A produced Q1 and firm B produced Q2, then the sum of the firms’ costs would be less than the cost that a single firm, firm D, would incur if it produced the same outputs, Q1 and Q2. Specifically, CA(Q1) = Q1 and CB(Q2) = Q2, so CA + CB = Q1 + Q2, whereas CD(Q1, Q2) = Q1 + Q2 + (Q1Q2)^(1/3) > Q1 + Q2. This example illustrates the more general conclusion that if diseconomies of scope are sufficiently pronounced, costs may not be subadditive, even when economies of scale prevail. In summary, natural monopoly prevails in the multiproduct setting when the industry cost function is subadditive, so that all relevant output combinations are produced at minimum cost by a single firm. This will tend to be the case when the cost function exhibits both economies of scale and economies of scope.4 However, in the presence of sufficiently strong diseconomies of scope, a natural monopoly may not prevail even in the presence of economies of scale. Before proceeding to discuss regulatory policy in natural monopoly settings, we note that an entrant may be able to operate profitably even when industry costs would be minimized if the incumbent supplier served all consumers and when the price is set to eliminate any extranormal profit for the incumbent supplier. To illustrate this point, consider figure 12.3, which reproduces the cost function from figure 12.2. Recall that this cost function is subadditive for outputs less than Q*, so industry costs are minimized when a single firm serves all customers. Suppose the market demand curve DD intersects average cost at an output between Q′ and Q*, where the average cost curve AC is rising. The price P0 is the lowest price at which a single incumbent supplier could serve the entire market demand of Q0 without incurring a loss. Even when the incumbent supplier sets this relatively low price, an entrant could profitably serve a portion of the market under certain conditions and thereby increase industry production costs. The natural monopoly is said to be unsustainable in such a situation.
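A quick numerical check of the cost function in equation 12.1 confirms both properties. The Python sketch below evaluates the cost of a 10 percent expansion of both outputs (economies of scale) and compares joint with stand-alone production (diseconomies of scope); the particular output levels are arbitrary.

def joint_cost(q1, q2):
    # Cost function from equation 12.1: C(Q1, Q2) = Q1 + Q2 + (Q1*Q2)^(1/3)
    return q1 + q2 + (q1 * q2) ** (1.0 / 3.0)

def stand_alone_cost(q):
    # Producing either good by itself: the interaction term vanishes
    return q

q1, q2 = 50.0, 80.0

# Economies of scale: scaling both outputs up 10 percent raises cost by less than 10 percent
scaled = joint_cost(1.1 * q1, 1.1 * q2)
print(f"C(1.1*Q1, 1.1*Q2) = {scaled:.2f}  <  1.1 * C(Q1, Q2) = {1.1 * joint_cost(q1, q2):.2f}")

# Diseconomies of scope: producing the goods together costs more than producing them separately
together = joint_cost(q1, q2)
separate = stand_alone_cost(q1) + stand_alone_cost(q2)
print(f"joint cost = {together:.2f}  >  separate cost = {separate:.2f}  (costs are not subadditive)")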

Figure 12.3 Sustainable Natural Monopoly, up to Output Q0

The conditions in question are that (1) the entrant expects the incumbent firm to keep its price unchanged for a period of time after entry and (2) the incumbent will supply all the consumer demand that the entrant does not supply.5 Under these conditions, the entrant could secure a profit by offering to sell output Q′ at a price above its minimum average cost (point M in figure 12.3) but slightly less than the price P0 being charged by the incumbent. This example illustrates that setting prices to preclude extranormal profit may not ensure that industry costs are minimized. A regulator may have to consider restrictions on entry also. Optimal Pricing Policies We analyze the optimal design of pricing policies, first in a setting where the regulated firm supplies only a single service and then in a setting where the firm supplies multiple services. Optimal Pricing of a Single Service We begin by analyzing optimal pricing in the relatively straightforward setting where the regulated firm sells only one service. For instance, an electricity distribution company might be viewed as supplying a single service—“electricity.” Linear Pricing Consider first the standard, and perhaps most familiar, case: linear pricing. A firm engages in linear (or uniform) pricing when the revenue it receives from each customer is simply the product of a single per-unit charge, P, and the number of units of the service the customer purchases. Thus, under linear pricing, a customer that purchases Q units of the service pays P × Q.

To identify the optimal linear price for a natural monopoly, consider figure 12.4. The figure depicts a natural monopoly setting because the firm’s average cost curve (AC) declines as output (Q) increases for all relevant output levels. Because average cost declines with output, the firm’s marginal cost curve (MC) lies everywhere below its average cost curve. The demand curve for the firm’s service is labeled DD in figure 12.4.

Figure 12.4 Marginal Cost Pricing Can Cause Financial Losses

Recall that aggregate welfare (the sum of consumer surplus and industry profit) is maximized when the price of a service coincides with its marginal cost of production.6 It seems natural, then, that P0 might be the best price to charge in the setting of figure 12.4. When the firm charges price P0 for its service, consumers will purchase Q0 units of the service. The firm’s marginal cost of production is P0 at this level of output, so price P0 would maximize aggregate welfare by ensuring that price is equal to marginal cost. Marginal cost pricing introduces an important concern in natural monopoly settings. Because marginal cost is always less than average cost, marginal cost pricing will impose a financial loss on the firm. The firm’s loss in figure 12.4 is the area of the shaded rectangle RP0ST. This loss is the difference between the firm’s total cost of production (R × Q0)7 and the firm’s total revenue (P0 × Q0). A privately owned, profit-maximizing firm will not be willing to operate under marginal cost pricing in a natural monopoly setting unless it receives a subsidy to supplement the revenue it receives from its customers. Government subsidies are problematic, though, for at least three reasons. 1. If the firm’s total cost exceeds the revenue it collects from its customers, it is possible that the cost exceeds the corresponding benefit to consumers. If this is the case, then aggregate welfare is negative, so welfare would be higher if the service were not supplied.

Figure 12.5 illustrates this point. The total benefit that consumers derive from Q0 units of output is D0OQ0B, the area under the demand curve D0D.8 The firm’s total cost of producing Q0 units of output is AOQ0B, the area under the firm’s marginal cost curve (MC). AOQ0B exceeds D0OQ0B in figure 12.5. Therefore the price that maximizes aggregate welfare in this setting generates a negative level of welfare. Consequently, aggregate welfare would be higher if the service were not produced at all.

Figure 12.5 Natural Monopoly with Costs Exceeding Benefits

2. If a firm’s management knows that the firm’s losses will be offset by a subsidy, the management may have limited incentive to control the firm’s costs. For example, postal service management may not feel compelled to minimize the wages and benefits paid to employees if the U.S. Treasury is expected to fully subsidize the losses the postal service incurs. Aggregate welfare declines as production costs increase. 3. It is often not apparent why citizens who do not purchase the service supplied by the natural monopoly should be required to subsidize consumption by other citizens. For example, why should the income taxes paid by citizens who do not have telephone service be employed to allow other citizens to purchase telephone services at below-cost prices? Concerns like these explain why regulators typically attempt to set prices that maximize consumer surplus while ensuring that the revenue that the regulated enterprise secures from its customers is at least as great as the firm’s production costs. When the firm sells a single product, the price that maximizes consumer surplus while ensuring nonnegative (extranormal) profit for the firm will reflect the firm’s average cost of production. This price is P1 in figure 12.6. When they face price P1, consumers will purchase Q1 units of

output. At this level of output, the firm’s average cost of production is P1 and so its total cost is P1 × Q1, which is exactly the firm’s revenue.

Figure 12.6 Welfare Loss with Average Cost Pricing
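The trade-off between marginal cost pricing and average cost pricing can be illustrated numerically. The Python sketch below assumes a hypothetical natural monopoly with cost C(Q) = 1,500 + 20Q and inverse demand P = 100 − Q (numbers not taken from the text); it reports the loss the firm would incur under marginal cost pricing and the deadweight loss created by the break-even (average cost) price P1.

# Hypothetical natural monopoly: C(Q) = 1500 + 20Q, so AC = 1500/Q + 20 declines
# everywhere and MC = 20 lies below AC. Inverse demand: P = 100 - Q.
FIXED, MC = 1500.0, 20.0

def quantity(p):
    return 100.0 - p

def profit(p):
    q = quantity(p)
    return p * q - (FIXED + MC * q)

# Marginal cost pricing (P0 = MC) maximizes welfare but leaves a loss equal to the fixed cost
p0 = MC
print(f"loss under marginal cost pricing: {-profit(p0):.0f}")

# Average cost pricing: the lowest price at which the firm breaks even (P1 in figure 12.6)
p1 = min(p / 100.0 for p in range(int(MC * 100), 10000) if profit(p / 100.0) >= 0.0)
q0, q1 = quantity(p0), quantity(p1)
deadweight_loss = 0.5 * (p1 - MC) * (q0 - q1)   # shaded triangle in figure 12.6 (linear demand)
print(f"average cost price P1 = {p1:.2f}, deadweight loss = {deadweight_loss:.0f}")

Here the break-even price is $50, and average cost pricing sacrifices $450 of aggregate welfare relative to marginal cost pricing while eliminating the firm's $1,500 loss.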

Average cost pricing is not ideal in the sense that aggregate welfare is reduced below its maximum feasible level by the shaded region in figure 12.6.9 However, this reduction in aggregate welfare is the minimum reduction that can be achieved under linear pricing without forcing the supplier to incur a loss. Nonlinear Pricing Nonlinear pricing can help to reduce the welfare losses that arise under linear pricing. Under nonlinear pricing, the average price a customer pays for a service varies with the amount of the service the customer purchases. Two-part tariffs A two-part tariff is a particularly simple form of nonlinear pricing. As its name implies, a two-part tariff has two components: (1) a fixed fee, F, that the customer is charged for the right to purchase the service and (2) a constant per-unit price, P. A customer that purchases Q > 0 units of a service under a two-part tariff is charged F + PQ. To illustrate how a two-part tariff can secure a higher level of welfare than a linear price can, return to the setting of figure 12.4 and suppose that the loss that the monopoly supplier incurs under marginal cost pricing (the area of the shaded rectangle) is K. The per-unit price can be set at P0 (the marginal cost price) in this setting and the fixed fee can be set equal to K/N, where N is the number of customers that purchase the monopolist’s service. When each of the N consumers pays the fixed fee K/N, the firm receives K in addition

to the revenue (P0 × Q0) it receives when customers purchase Q0 units of the service at unit price P0. Therefore, the two-part tariff induces the welfare-maximizing level of output and ensures a normal profit for the monopoly supplier in this setting. One potential drawback to a two-part tariff is that some consumers may be unable to afford a large fixed fee. Consequently, a high fixed fee could force them to reduce their consumption of the service to zero. Such an outcome is likely to be of particular concern when the service in question is deemed to be an essential one, such as electricity or water. Discriminatory fixed fees can sometimes be useful in this regard. If customers for whom a large fixed fee causes substantial economic hardship can be identified, then these customers might be charged a lower (or no) fixed fee. Even if such discriminatory fixed fees are not feasible and even if welfare declines because high fixed fees cause some consumers to purchase none of the monopolist’s service, two-part tariffs generally can be designed to achieve a higher level of aggregate welfare than can linear pricing. A small fixed fee will provide some revenue for the firm that can be employed to reduce the per-unit price toward marginal cost, thereby expanding consumer demand toward the welfare-maximizing output level (Q0 in figure 12.4). The optimal (welfare-maximizing) two-part tariff typically will increase the fixed fee (F) above zero and raise the per-unit price (P) above marginal cost to balance the welfare reductions that accompany each of the two increases. Increasing only F often will reduce welfare excessively by forcing many consumers to cease consumption of the monopolist’s service. Increasing only P (to the level of average cost) often will reduce welfare excessively by unduly curtailing consumption by all customers. Thus, optimal two-part tariffs typically entail a positive F and a P above marginal cost. The fixed fee F will tend to be relatively small, and P will tend to be relatively high in settings where a large F would cause many consumers to end their consumption of the monopolist’s service.10 Multipart tariffs Two-part tariffs are not the only form of nonlinear pricing. Electricity, water, and telephone utilities often implement multipart tariffs. One such tariff is the following declining-block tariff that reflects pricing structures that have been employed for local (landline) telephone service:

Fixed fee = $5
+ 10 cents per call for up to 100 calls
+ 5 cents per call for all calls between 100 and 200
+ 0 cents per call for all calls above 200

Notice that the marginal price declines as consumption increases under this tariff—from 10 cents to 5 cents to 0 cents. This multipart tariff is plotted in figure 12.7 as the bold segmented line ABCD. (The reason for the extensions of these segments in the figure will be explained shortly.) The figure shows how a customer’s monthly expenditure on telephone calls changes as she makes more calls that month.

Figure 12.7 Multipart Tariff for Local Telephone Service
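A short Python sketch makes the structure of this tariff explicit. The first function computes a customer's monthly bill under the declining-block tariff described above; the second computes the lowest bill attainable under the three optional two-part tariffs listed in table 12.1 below. The two calculations coincide for any number of calls, which is the sense in which the two pricing structures are equivalent.

def declining_block_bill(calls):
    # Declining-block tariff from the text (amounts in dollars)
    bill = 5.0                                     # fixed fee
    bill += 0.10 * min(calls, 100)                 # 10 cents per call for the first 100 calls
    bill += 0.05 * max(min(calls, 200) - 100, 0)   # 5 cents per call for calls 100-200
    return bill                                    # calls above 200 are free at the margin

# The three optional two-part tariffs of table 12.1: (fixed fee, price per call) in dollars
TWO_PART_TARIFFS = [(5.0, 0.10), (10.0, 0.05), (20.0, 0.0)]

def best_two_part_bill(calls):
    # Each customer picks whichever optional tariff makes her intended calls cheapest
    return min(fee + price * calls for fee, price in TWO_PART_TARIFFS)

for calls in (40, 100, 150, 200, 350):
    print(f"{calls:4d} calls: declining block = ${declining_block_bill(calls):.2f}, "
          f"cheapest optional tariff = ${best_two_part_bill(calls):.2f}")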

Multipart tariffs of this sort can effectively tailor the prices that different consumers face to their willingness to pay for the monopolist’s service. Multipart tariffs can thereby better approximate the welfare-maximizing outcome in which no consumer is induced to reduce her consumption to zero and prices reflect the monopolist’s marginal cost of production. In particular, this multipart tariff can be viewed as the following set of optional two-part tariffs. The tariffs are optional in the sense that each consumer chooses her preferred tariff from the three two-part tariffs that are offered to all consumers:

Table 12.1

Tariff Type            Fixed Fee ($)    Price per Unit (cents)
Low fixed fee          5                10
Moderate fixed fee     10               5
High fixed fee         20               0

These three tariffs appear as the three solid line segments in figure 12.7. The vertical intercept of each line segment represents the tariff’s fixed fee. The slope of each line segment reflects the tariff’s per-unit price. A customer’s preferred tariff depends on the number of telephone calls she intends to make. The customer will select the tariff that permits her to secure the intended number of calls at minimum cost. A consumer who plans to make fewer than 100 calls will choose the low fixed fee tariff. A consumer who plans to make between 100 and 200 calls will select the moderate fixed fee tariff. A consumer who plans to make more

than 200 calls will choose the high fixed fee tariff. These choices ensure that no consumer will end up with an expenditure/calls pair that lies above the bold segmented line (ABCD) in figure 12.7. Therefore, the same outcomes will arise whether the firm sets the single declining-block tariff described above or offers the three two-part tariffs as distinct options.11 Either pricing structure allows consumers with a relatively low willingness to pay for telephone calls to purchase a relatively small number of calls in return for a relatively modest charge. This outcome secures a higher level of welfare than the outcome in which these consumers make no telephone calls in order to avoid the large fixed fee that would be necessary to ensure a normal profit for the monopolist when the price of each telephone call is set equal to its marginal cost of production. Optimal Pricing of Multiple Services The discussion to this point has considered the simple setting in which the monopolist supplies only a single product. In practice, though, regulated enterprises often supply multiple services. For example, telephone companies typically supply both local and long-distance telephone calls. Ramsey Pricing Frank Ramsey has demonstrated how to set the (linear) prices charged by a multiproduct monopoly supplier in settings where marginal cost pricing would force the monopolist to incur a financial loss.12 In essence, the identified “Ramsey prices” are the linear prices that minimize deadweight welfare losses while equating the monopolist’s total revenue and total cost. To illustrate the key features of Ramsey prices, consider the following example. Suppose the monopolist produces two services, x and y. Further suppose the firm’s total cost (in dollars) of supplying X units of service x and Y units of service y is

C(X, Y) = 1,800 + 20X + 20Y.  (12.2)

For simplicity, suppose the demands for the two services are independent, so the demand for service x does not depend on the price of service y (denoted Py) and the demand for service y does not depend on the price of service x (denoted Px).13 Further suppose the market demand curves are linear:

X = 100 − Px and Y = 120 − 2Py.

It is apparent from equation 12.2 that the firm’s marginal cost of producing each service is 20. Furthermore, if the price of each service were set equal to its marginal production cost, the firm’s revenue would be equal to its variable cost. However, the firm would incur a financial loss equal to its fixed cost, $1,800. To eliminate this loss, at least one price must be increased above marginal cost. One possibility might be to raise each price above marginal cost by the same proportion until total revenue is equal to total cost. This possibility is illustrated in figure 12.8a, which graphs the demand curves for services x and y.14

Figure 12.8 (a) Proportionate Price Increase versus (b) Ramsey Pricing

As the figure illustrates, the price of each service would need to be increased from $20 to approximately $36.1 to equate the firm’s total revenue and total cost.15 When the price of service y is $36.1, consumers purchase 47.8 units of the service. Therefore, the revenue derived from the sale of service y is $1,726. This revenue exceeds the variable cost of producing 47.8 units of service y by approximately $770, the area of rectangle CEFD in figure 12.8a. Similarly, consumers purchase 63.9 units of service x when its price is $36.1. The resulting revenue exceeds the variable cost of producing service x by approximately $1,030, the area of rectangle CEKJ in figure 12.8a. The sum of the areas of these two rectangles is $1,800, the firm’s fixed cost of production. Having identified prices that ensure a normal profit for the monopolist, we now consider the deadweight losses that arise from this proportionate increase in the prices of both services. The deadweight loss from increasing the price of service y above its marginal cost is approximately $260, the area of triangle DFH. The corresponding deadweight loss for service x is approximately $130, the area of triangle JKH. Therefore, the deadweight loss that arises when securing a normal profit for the firm by raising both prices above marginal cost by the same proportion is $390. The key question is whether we can find another method for raising prices that entails a smaller welfare loss while continuing to ensure the firm secures a normal profit. To answer this question, observe that the demand for service y is more price elastic than is the demand for service x at point H in figure 12.8a. Therefore, identical increases in the prices of services x and y will cause a larger reduction in the demand for service y and a correspondingly larger increase in deadweight loss. To reduce the total deadweight loss that arises as prices rise, it may be preferable to increase the price of service y by less than the price of service x is increased. The Ramsey rule for setting prices reflects precisely this consideration. The rule states that to set prices that ensure a normal profit for the multiproduct monopolist while minimizing deadweight losses, prices should be raised above marginal cost in inverse proportion to demand elasticities. Mathematically, the rule is16

(Pi − MCi)/Pi = r/ei,  (12.3)

where Pi is the price of service i, MCi is the marginal cost of service i, ei is the absolute value of the price elasticity of demand for service i, and r is a nonnegative constant. Figure 12.8b identifies Ramsey prices when cost and demand functions are as specified in equation 12.2 and the demand functions above. The aggregate deadweight loss is minimized in this setting if the price of service x is $40 and the price of service y is $30. At these prices, the absolute values of the demand elasticities for services x and y are 0.67 and 1.0, respectively. The deadweight loss for service x is $200 (the area of triangle MTV) and the deadweight loss for service y is $100 (the area of triangle NTV). The total deadweight loss of $300 here is $90 less than the deadweight loss that arises when the prices of the two services are raised above marginal cost proportionately. The Ramsey pricing rule in equation 12.3 may provide some justification for so-called value of service pricing that regulators employed in the railroad industry in the early 1900s. Relative to shipping costs, rail rates for transporting gravel, sand, potatoes, oranges, and grapefruits often were lower than rates for transporting “high value” commodities, such as liquor, electronic equipment, and cigarettes. These prices reflect Ramsey principles if the price elasticities of demand for shipping high value products are lower than the corresponding elasticities for shipping low value products. Non-Ramsey Pricing of Telephone Services The Ramsey pricing rule is widely regarded as a sound principle that merits serious consideration when setting prices for regulated services. However, regulators do not always follow the Ramsey rule, as experience in the telecommunications sector reveals. Before 1984, AT&T served as a regulated supplier of both local and long-distance wireline telephone service. Due to substantial scale economies, marginal cost prices generated revenue below the total cost of delivering these services. At the time, the demand for local telephone service was substantially less elastic than the demand for long-distance service. The (absolute value of the) price elasticity of demand for access to basic local telephone service was estimated to be between 0.05 and 0.20, which means that a 10 percent increase in the price of the service would reduce the number of service subscribers by between 0.5 and 2 percent. In contrast, estimates of the price elasticity of demand for long-distance service ranged from 0.5 to 2.5, with a common estimate exceeding 1.0. This means that a 10 percent increase in the price of long-distance service would reduce the number of long-distance calls by more than 10 percent. These estimates reflect the fact that many customers regarded local telephone calls as a necessity but viewed long-distance calls as more of a luxury. Given these elasticities, the Ramsey pricing rule indicates that prices should be set much farther above marginal cost for local telephone service than for long-distance calls. However, regulators implemented a very different pricing structure. They set prices for long-distance calls well above marginal cost and set prices for basic local telephone service close to, if not below, marginal cost. The stark contrast between Ramsey prices and actual prices in this instance likely reflected the goal of universal access—ensuring that all citizens are connected to the telephone network. 
This goal reflects in part the presence of network externalities, which are benefits that accrue to all network subscribers as more subscribers join the network. The ability to access the telephone network and make calls to any network subscriber becomes more valuable as more individuals subscribe to the network. Low prices can encourage network subscription and thereby promote universal access to the telephone network.
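The Ramsey solution for the two-service example above can be verified by brute force. The Python sketch below searches over price pairs at which the firm (approximately) breaks even, picks the pair with the smallest total deadweight loss, and then checks that the markups satisfy the inverse-elasticity rule in equation 12.3 with a common value of r.

# Cost and demand from the Ramsey example: C = 1800 + 20X + 20Y,
# X = 100 - Px, Y = 120 - 2Py.
MC, FIXED = 20.0, 1800.0

def x_demand(px): return max(100.0 - px, 0.0)
def y_demand(py): return max(120.0 - 2.0 * py, 0.0)

def profit(px, py):
    return (px - MC) * x_demand(px) + (py - MC) * y_demand(py) - FIXED

def deadweight_loss(px, py):
    # Triangles between each (linear) demand curve and marginal cost
    dwl_x = 0.5 * (px - MC) * (x_demand(MC) - x_demand(px))
    dwl_y = 0.5 * (py - MC) * (y_demand(MC) - y_demand(py))
    return dwl_x + dwl_y

# Search a ten-cent price grid for pairs that approximately equate revenue and total cost
candidates = [(px / 10.0, py / 10.0)
              for px in range(200, 1000) for py in range(200, 600)
              if abs(profit(px / 10.0, py / 10.0)) < 0.5]
px_r, py_r = min(candidates, key=lambda prices: deadweight_loss(*prices))

e_x = px_r / x_demand(px_r)            # |elasticity| of x at the Ramsey price (demand slope 1)
e_y = 2.0 * py_r / y_demand(py_r)      # |elasticity| of y at the Ramsey price (demand slope 2)
print(f"Ramsey prices: Px = {px_r:.2f}, Py = {py_r:.2f}, total DWL = {deadweight_loss(px_r, py_r):.0f}")
print(f"implied r for x: {(px_r - MC) / px_r * e_x:.2f}; implied r for y: {(py_r - MC) / py_r * e_y:.2f}")

The search returns Px = $40 and Py = $30 with a total deadweight loss of $300 and a common r of about 0.33, matching the discussion of figure 12.8b.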

The experience in the telecommunications sector highlights an important assumption that underlies the Ramsey pricing rule. In minimizing deadweight loss regardless of its source, the rule assumes that the relevant goal is to maximize aggregate welfare. Prices other than Ramsey prices can be optimal when other goals (such as universal access) prevail or when particular emphasis is placed on the welfare of particular groups of consumers (for example, those who primarily consume network access and local telephone calls). Optimal Pricing in Two-Sided Markets Departures from the standard pricing rule in equation 12.3 also can arise when a regulated enterprise serves two sides of a market.17 Consider, for instance, a supplier of broadband Internet access services (BIAS). A BIAS supplier serves both consumers and content providers. Content providers include news organizations like CNN and the BBC, online merchants such as Amazon and Alibaba, and suppliers of video programming like Netflix and Hulu. A BIAS supplier provides Internet access to consumers and enables the programming (and other services) of content providers to reach consumers. Consumers value access to content providers’ programming, and content providers secure revenue (either from consumers or from advertisers) when their content is viewed by consumers. In this setting, then, BIAS suppliers serve two sides of a market: the “buyer side” and the “seller side.”18 The standard Ramsey pricing rule typically is not optimal when a firm serves two sides of a market, because the standard rule does not consider the externalities that arise between the two sides. Consumers value Internet access more highly when the access enables them to enjoy the programming of a larger number of content providers. Similarly, content providers value the greater exposure for their programming that is provided by a BIAS supplier serving a larger number of subscribers. Therefore, expanded participation on one side of the market creates a positive externality for the other side. The optimal pricing rule in two-sided markets accounts for the externality that each side of the market creates for the other side. In essence, the externality is accounted for by replacing the physical marginal cost of serving each side of the market with the corresponding net marginal cost. The net marginal cost of serving side i of a market is the difference between the physical marginal cost of serving that side of the market (MCi) and the extra value (v−i) that the other side of the market experiences as participation on side i expands. For example, suppose a BIAS supplier’s physical marginal cost of serving consumers is $100, and content providers experience a $25 increase in profit for every new customer that secures Internet access from the BIAS supplier. Then this supplier’s net marginal cost of serving consumers is $75 (= $100 − $25). Once net marginal cost replaces physical marginal cost, the welfare-maximizing rule for the linear prices set by a monopoly firm that serves two sides of a market is

(Pi − (MCi − v−i))/Pi = r/ei.  (12.4)

Observe that the optimal pricing rule in equation 12.4 is identical to the optimal Ramsey pricing rule in equation 12.3 except that the physical marginal cost of service i, MCi, is replaced by the net marginal cost of service i, MCi − v−i.19 Equation 12.4 implies that, holding other factors constant, it is optimal to set a price very close to (and perhaps even below) the physical marginal cost of supplying a service when expanded participation by

consumers of the service creates substantial value for participants on the other side of the market. To illustrate, if Internet service subscribers value very highly the ability to access additional content providers, then welfare can be maximized when a BIAS supplier charges content providers very little for the services they secure from the supplier. The relatively low charges can encourage participation by content providers, which generates substantial value for consumers. In practice, BIAS suppliers typically do not charge content providers for access to the supplier's subscribers. We return to the issue of pricing in two-sided markets in chapter 14, where we discuss net neutrality regulation.

Rate Structure

Whether they serve one or two sides of a market, regulated enterprises often supply multiple services to different groups of customers. For example, a telephone company typically supplies local and long-distance calls to residential and industrial customers. Electricity distribution firms also supply electricity to both residential and commercial customers.20 Regulated enterprises like these often employ the same facilities to supply multiple services. For instance, local and long-distance calls both travel over the local telephone network. Similarly, electricity delivered to industrial customers and electricity delivered to residential customers both travel over many of the same power lines.

The multiproduct enterprise incurs common costs in situations like these. A common cost is a cost that is incurred in supplying multiple services and would not be avoided if just one of the services were no longer supplied. To illustrate, the cost of the local telephone network is a common cost for a telephone company that needs the local network to complete long-distance calls. Because they are not caused by the supply of a single service, common costs are not readily assigned to particular services. Yet regulatory commissions often attempt to assign all of the regulated firm's costs to the individual services the firm supplies and then set prices to ensure that each of the firm's services recovers its assigned costs. This process is known as fully distributed cost (FDC) pricing.

FDC Pricing

To illustrate FDC pricing, suppose a monopolist sells electricity to two classes of customers. Let X denote the electricity sold to residential buyers, and let Y denote the electricity sold to industrial customers. Further suppose the firm faces the following total costs of production:

C(X) = 700 + 20X    (12.5)
C(Y) = 600 + 20Y    (12.6)
C(X, Y) = 1,050 + 20X + 20Y    (12.7)
Observe that the joint production of X and Y is subadditive here, because it is less costly for the firm to produce X and Y together than to produce them separately. The firm reduces its fixed cost by $250 (= $700 + $600 − $1,050) when it produces X and Y together rather than producing them separately. When the firm produces X and Y together, the $1,050 fixed cost is a common cost. The important practical question that arises in settings like this is: How much of the $1,050 common cost should be assigned to the production of X and how much should be assigned to the production of Y? It has been

observed that utilities' common costs "may be distributed on the basis of some common physical measure of utilization, such as minutes, circuit-miles, message-minute-miles, gross-ton-miles, cubic feet, or kilowatthours employed or consumed by each. Or they may be distributed in proportion to the costs that can be directly assigned to the various services."21 Any of these methods for allocating common costs may seem reasonable. However, none of them can generally be expected to produce marginal cost prices or Ramsey prices. To illustrate this more general conclusion, suppose some "reasonable" cost allocation procedure leads to an allocation of 75 percent of the common costs to product X and 25 percent to product Y. In this case, calculated average costs would be

ACX = 20 + 0.75(1,050)/X = 20 + 787.5/X
ACY = 20 + 0.25(1,050)/Y = 20 + 262.5/Y    (12.8)
Specifically, the calculated average cost of X is the sum of its average variable cost ($20) and its assigned 75 percent share of the $1,050 common cost, divided by the X units produced. FDC prices are then set equal to the calculated average costs to ensure that the revenue derived from each service is equal to the calculated cost of supplying the service. To illustrate this calculation, suppose that the demand functions for X and Y are

FDC prices can now be determined by equating price (from equation 12.9) and calculated average cost (from equation 12.8) for each service. Doing so provides the following prices and associated output levels:

It can be shown that the Ramsey prices (and associated output levels) in this setting are

Figure 12.9 illustrates the Ramsey solution. By definition, the Ramsey prices generate the smallest deadweight loss (the shaded triangles in both panels of figure 12.9) among all pairs of prices that ensure the firm's total revenue is equal to its total production costs.
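The mechanics of the FDC procedure just described can be sketched in a few lines. Because the demand curves of equation 12.9 are not reproduced above, the linear inverse demands in the sketch are purely hypothetical placeholders; only the cost data of equations 12.5–12.7 and the 75/25 allocation come from the example.

```python
# Sketch: the mechanics of FDC pricing. The demand curves used in the text
# (equation 12.9) are not reproduced here, so the linear inverse demands
# below are hypothetical placeholders; the cost data come from the example.
import math

AVC = 20.0             # average variable cost per unit (both services)
COMMON_FIXED = 1050.0  # common fixed cost under joint production
SHARE_X, SHARE_Y = 0.75, 0.25   # the "reasonable" allocation from the text

def fdc_quantity(a, b, fixed_share):
    """Solve a - b*Q = AVC + fixed_share/Q for the larger root (break-even output)."""
    disc = (a - AVC) ** 2 - 4 * b * fixed_share
    if disc < 0:
        raise ValueError("demand too weak to recover the assigned cost")
    return ((a - AVC) + math.sqrt(disc)) / (2 * b)

# Hypothetical inverse demands: Px = 100 - 0.5*X and Py = 90 - 0.4*Y.
qx = fdc_quantity(100, 0.5, SHARE_X * COMMON_FIXED)
qy = fdc_quantity(90, 0.4, SHARE_Y * COMMON_FIXED)
px, py = 100 - 0.5 * qx, 90 - 0.4 * qy

print(f"FDC outcome: Px = {px:.2f}, X = {qx:.1f}; Py = {py:.2f}, Y = {qy:.1f}")
# Changing SHARE_X changes the prices even though nothing about demand or
# cost conditions has changed -- the arbitrariness the text emphasizes.
```

The point of the sketch is not the particular numbers but the design flaw it exposes: the FDC prices move whenever the (arbitrary) allocation shares move, whereas Ramsey prices depend only on costs and demand elasticities.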

Figure 12.9 Ramsey Pricing for (a) Product X and (b) Product Y

This example illustrates one important drawback to common FDC pricing procedures: They typically fail to produce prices that maximize welfare. A second drawback to FDC pricing procedures is that they can engender serious disputes among customer classes. Residential customers will naturally argue that they should not be required to bear 75 percent of the firm's common costs, and industrial customers also may argue that their share should be less than 25 percent. Regulators must weigh the countervailing arguments and specify prices that are consistent with regulatory objectives.

Avoiding Inefficient Entry

As noted above, regulators may adopt cost allocation rules that promote objectives like universal access. When pursuing such objectives in settings where they cannot preclude the operation of additional suppliers, regulators are advised to avoid cost allocation rules that produce prices inviting inefficient entry, which increases aggregate industry production costs. To avoid such entry, regulators can allocate common costs so as to produce a price for each service that does not exceed the average cost of producing the service in isolation, which is called the stand-alone cost of supplying the service. In the example described by equations 12.5–12.7, the stand-alone cost of supplying X is 20 + 700/X, and the stand-alone cost of supplying Y is 20 + 600/Y. In settings where three or more services are supplied, the prices for each subset of services should be set below the average cost of supplying the subset of services to avoid inefficient supply of a subset of services in isolation.

It is not always possible to find prices that preclude inefficient entry, even in natural monopoly settings where industry costs are minimized when all services are supplied by a single firm. In such cases, a regulator would need to explicitly prohibit entry to ensure that industry costs are minimized. To illustrate this possibility, consider the following setting. Suppose three towns require well water for their citizens. Each town could serve the needs of its own citizens by drilling a shallow well at a cost of $300 per town, for a total cost of $900. Any two towns could serve the needs of their citizens by drilling a single deeper well at a cost of $400. This arrangement would require the third town to drill its own well at a cost of $300, for an aggregate cost of $700. The least costly way for all three towns to serve the needs of their citizens is to drill one very deep well at a total cost of $660.

Suppose the three towns tentatively agree to drill the deep well and divide the cost equally, so each town is charged $220 (= $660/3). Although this arrangement ensures that industry costs are minimized and treats each of the three towns identically (and so may seem "fair" to all parties), the arrangement is not sustainable: any two towns could reduce their water supply costs by drilling a well for $400 and dividing the cost equally, so each town is charged only $200. Thus, in settings like these, it is not possible to find charges that cover the natural monopolist's costs and preclude incentives for a subset of consumers to procure service from an alternative supplier and raise total industry costs in the process of doing so.
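The conclusion for the well-water example can be checked by brute force. The short sketch below searches (in whole-dollar increments) for any division of the $660 deep-well cost under which no town pays more than $300 and no pair of towns pays more than $400 in total; it finds none, confirming that some coalition always has an incentive to defect.

```python
# Sketch: verifying that no division of the $660 deep-well cost is immune to
# defection, given stand-alone costs of $300 per town and $400 per pair of towns.

TOTAL, SINGLE, PAIR = 660, 300, 400

def stable_allocations():
    """Yield whole-dollar charges (x1, x2, x3) summing to TOTAL that no town
    or pair of towns could beat by drilling its own well."""
    for x1 in range(TOTAL + 1):
        for x2 in range(TOTAL - x1 + 1):
            x3 = TOTAL - x1 - x2
            charges = (x1, x2, x3)
            if all(c <= SINGLE for c in charges) and \
               x1 + x2 <= PAIR and x1 + x3 <= PAIR and x2 + x3 <= PAIR:
                yield charges

print(list(stable_allocations()))   # [] -- no sustainable set of charges exists
# The equal split (220, 220, 220) fails because any two towns owe $440 > $400.
```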
Time of Use Pricing

The discussion to this point has not considered the possibility that a firm might charge a different price for the same service at different times during the day. However, such time-of-use (TOU) pricing is employed in the electricity sector. As we will see, TOU pricing can help to match price to marginal cost in settings where

marginal cost varies considerably at different times during the day.

Costs of Power Production

The marginal cost of supplying electricity can vary substantially during a twenty-four-hour period because of prevailing demand and supply conditions. The demand for electricity often is relatively low during nighttime hours, as most people are asleep and many businesses are inactive. As figure 12.10 illustrates, electricity consumption typically is higher during the daytime hours when homeowners operate appliances and businesses run their operations.

Figure 12.10 Average Daily Load Curve for Electricity Note: 1200 = noon; 2400 = midnight.

Electricity suppliers typically employ a mixture of different types of generating units to meet the time-varying demand for electricity. Baseload units (often nuclear or coal-powered units) typically have high fixed costs but generate electricity at relatively low marginal cost up to capacity. Peaker units (often powered by natural gas) have relatively low fixed costs but higher marginal production costs. A firm that both generates and delivers electricity to consumers typically will own both baseload and peaker units. The firm will run the baseload units throughout the day and only employ the peaker units when they are needed to meet peak demand. Consequently, the firm's short-run marginal cost curve will take a shape similar to the one depicted in figure 12.11. The flat portion of the curve along segment AB might represent the marginal

cost of supplying electricity using the baseload unit. Once output exceeds QK, the capacity of the baseload unit, the electricity supplier employs peaker units of varying capacities, vintages, and efficiencies. The supplier will employ the most efficient peaker unit first as required output expands above QK, and then employ successively less efficient units with higher marginal costs as output expands further.

Figure 12.11 Short-Run Marginal Cost Curve for Electricity Supply

TOU Pricing Model

To explain the principles of TOU pricing most clearly, it is helpful to consider a simpler approximation of the setting reflected in figure 12.11.22 Suppose that the electricity supplier's marginal cost is b up to capacity K. Further suppose the firm cannot produce more than K units of electricity unless it expands its capacity. Therefore, as shown in figure 12.12, the firm's short-run marginal cost (SRMC) curve becomes vertical at capacity K. The SRMC curve in figure 12.12 can be viewed as an approximation of its counterpart in figure 12.11.

Figure 12.12 Time-of-Use Pricing

Suppose the unit cost of expanding capacity is β. Therefore, if the firm seeks to increase its output beyond its current capacity by one unit, the firm can do so by purchasing one additional unit of capacity at cost β and then employing that capacity to produce an extra unit of output at cost b.23 The firm's long-run marginal cost (LRMC) of supplying electricity in this setting is β + b, the sum of the unit capacity cost and the SRMC.

For simplicity, suppose each day consists of one twelve-hour (daytime) period of peak demand and one twelve-hour (nighttime) period of off-peak demand. Further suppose that the demands for electricity in the two periods are independent, so the amount of electricity demanded during the peak period depends on the price of electricity in this period but not on the price in the off-peak period.24 The two original demand curves are labeled "Peak0" and "Off-peak" in figure 12.12.

Given the level of installed capacity, prices should be set to reflect the prevailing SRMC of supplying electricity in order to maximize welfare. Specifically, as illustrated in figure 12.12, the price of electricity in the off-peak period should be b, and the price of electricity in the peak period should be b + β. Notice that capacity K is optimal when the peak-period demand curve is Peak0. The SRMC and LRMC curves both intersect the peak-period demand curve at output K. Consequently, the marginal cost of expanding output above K (b + β) is precisely the marginal value that consumers place on expanding output, which is reflected in the prevailing price of electricity.

If the demand curve for electricity during the peak period were to increase to Peak1 in figure 12.12, then a higher level of capacity (K*) would be optimal. With the increased peak demand, the marginal value of electricity during the peak period when output is K is P1, which exceeds the (long-run) marginal cost of supplying electricity. Consequently, increasing capacity above K allows the production of additional electricity that produces value in excess of cost. To maximize welfare, capacity should be increased to K*, because for all capacity levels below K*, the extra value that can be secured for peak-period consumers by expanding capacity exceeds the corresponding costs. When capacity and electricity production are increased

from K to K*, consumer surplus increases by the area of the shaded region below the Peak1 demand curve in figure 12.12. This area is the difference between total incremental value secured for peak-period consumers (which is the area below the demand curve for output between K and K*) and the extra cost of producing the incremental output (which is the area of rectangle EFK*K). One can view the welfare-maximizing prices identified in figure 12.12 as arising from a fully distributed cost (FDC) pricing policy. Under this policy, all of the common (capacity) costs are allocated to peak-period customers, even though the capacity is employed to produce electricity in both the off-peak and peak periods. Although this policy may seem "unfair" to peak-period customers, the policy produces prices that maximize aggregate welfare.

To understand the value of setting distinct prices for peak and off-peak electricity consumption, suppose that the firm charged the same price for electricity during the peak and off-peak periods. This single price is labeled P* in figure 12.13. Capacity K0 is required to satisfy peak demand at this price. The optimal capacity is K, where price equals LRMC. Therefore, an excessive amount of capacity will be installed under the identified single-price policy. The associated deadweight loss during the peak period is the area of the shaded triangle EFG. This loss is the difference between the cost of the excess capacity, rectangle EFK0K, and the value that consumers place on the incremental capacity and output, EGK0K. This deadweight loss arises because, at price P*, peak-period consumers are charged less than the (long-run) marginal cost of satisfying their demand for electricity.

Figure 12.13 Deadweight Losses Due to Nonpeak Pricing

The single price P* also introduces a deadweight loss during the off-peak period. This loss, which appears as the shaded region below the off-peak demand curve in figure 12.13, stems from the suboptimal use of the existing generating capacity during the off-peak period. Given the capacity that has been installed to serve the peak demand for electricity, the (short-run) marginal cost of supplying electricity during the off-peak

period is only b. The single price P* exceeds b and thereby reduces off-peak consumption to Q0, below the welfare-maximizing level (Q) at which price equals (short-run) marginal cost. The setting considered in figure 12.13 is one where the peak-period demand for electricity is so pronounced relative to the off-peak demand that even when the peak-period price includes all capacity costs and the off-peak price includes no capacity costs, peak-period electricity consumption exceeds off-peak consumption. A different outcome can arise when peak and off-peak demands are more similar, as figure 12.14 illustrates.
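Before turning to figure 12.14, the two cases can be made concrete with a short sketch. It computes welfare-maximizing capacity and prices for linear peak and off-peak demands, using the chapter's notation (b for the SRMC and β for the unit capacity cost) but with demand parameters that are illustrative assumptions rather than numbers from the text.

```python
# Sketch: welfare-maximizing TOU prices and capacity with linear demands.
# Notation follows the chapter (b = SRMC up to capacity, beta = unit capacity
# cost); the demand parameters below are illustrative assumptions.

def tou_solution(a_peak, s_peak, a_off, s_off, b, beta):
    """Inverse demands P = a - s*Q for each period; returns (capacity, P_peak, P_off)."""
    k_firm = (a_peak - b - beta) / s_peak        # capacity if only peak users pay for it
    q_off_at_b = (a_off - b) / s_off             # off-peak use when priced at SRMC
    if q_off_at_b <= k_firm:
        # "Firm peak": peak users bear the entire capacity cost (the figure 12.13 case).
        return k_firm, b + beta, b
    # "Shifting peak": both periods press on capacity, so choose K where the
    # vertically summed willingness to pay for capacity equals beta (figure 12.14).
    k = (a_peak + a_off - 2 * b - beta) / (s_peak + s_off)
    return k, a_peak - s_peak * k, a_off - s_off * k

print(tou_solution(20, 0.1, 12, 0.1, b=2, beta=4))  # firm peak: (140.0, 6, 2)
print(tou_solution(20, 0.1, 18, 0.1, b=2, beta=4))  # shared capacity: (150.0, 5.0, 3.0)
```

In the second call, the off-peak demand is strong enough that both periods press on capacity; each period's price then exceeds b, so the two classes of consumers effectively split the capacity cost, which is exactly the outcome figure 12.14 depicts.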

Figure 12.14 Time-of-Use Pricing with Shared Capacity Costs

To simplify the figure, suppose that the SRMC of supplying electricity below capacity is zero. When the off-peak price is set equal to SRMC (zero), off-peak consumption is S. When the peak period price is set equal to LRMC (β), peak period consumption is R, which is less than S. Therefore, allocating all capacity costs to the peak period reduces peak period consumption below the level of off-peak consumption. This outcome suggests that more than the welfare-maximizing level of capacity has been installed. To determine the optimal amount of capacity in this setting, it is useful to identify the marginal value of capacity, which is the sum of its marginal value to peak and off-peak consumers. This sum is the vertical sum of the demand curves in the two periods, which appears as the kinked curve ABC in figure 12.14. Welfare is maximized by expanding capacity to the point where its marginal value is equal to its marginal cost, which is at K in the figure. At output level K, the marginal value of electricity is P0 for off-peak consumers and Pp for peak-period consumers. Therefore, the welfare-maximizing prices for electricity are P0

in the off-peak period and Pp in the peak period. At these prices, peak and off-peak consumers effectively share capacity costs.

Summary

This chapter reviews the key principles of optimal pricing in regulated industries. We noted that although marginal cost prices maximize aggregate welfare, they can impose financial losses on the regulated supplier in the presence of a natural monopoly. Ramsey prices are identified as the linear prices that maximize aggregate welfare while ensuring a normal profit for the multiproduct supplier. We also characterized welfare-maximizing prices in two-sided markets and explored the value of time-of-use pricing in settings where marginal costs of production differ substantially at different times during the day. The discussion throughout this chapter has viewed the firm's operating costs as being beyond its control. The next chapter explores regulatory policies that enhance the regulated firm's incentive to reduce its operating costs.

Questions and Problems

1. A firm's total cost of producing Q units of output is C(Q) = 79 + 20Q. The (inverse) market demand curve for the firm's product is P(Q) = 100 − Q, where P denotes the price of the product.
a. If the price of the product is set equal to the firm's marginal cost, what profit will the firm earn?
b. If the price of the product is set equal to the firm's average cost, what will the price be, what output will the firm produce, and how much deadweight loss will arise?
c. Now suppose that a two-part tariff is set, so each consumer must pay a fixed fee regardless of consumption level plus a per-unit price. Further suppose the market consists of ten consumers, each with the inverse demand curve P(Q) = 100 − 10Q. If the price is set equal to the firm's marginal cost, what is the largest fixed fee that a consumer would pay for the right to buy at that price? What would the deadweight loss be if this marginal cost price and fixed fee were imposed?

2. A firm's total cost of producing Q units of output is C(Q) = 500 + 20Q. There are ten potential consumers of the firm's product: six "rich" consumers, each with inverse demand P(Q) = 100 − 5Q; and four "poor" consumers, each with inverse demand P(Q) = 100 − 80Q.
a. What is the largest fixed fee (Fpoor) that a poor consumer would pay for the right to purchase the product at a per-unit price equal to marginal cost? What is the corresponding largest fixed fee (Frich) that a rich consumer would pay?
b. Suppose a two-part tariff is implemented. The per-unit price is set equal to the firm's marginal cost, and the fixed fee is set equal to Fpoor. How much profit would the firm earn if it operated under this tariff?
c. Now suppose that price discrimination is legal, consumers cannot resell the product after purchasing it, and the monopolist can distinguish between rich and poor consumers. Further suppose that the monopolist sets a per-unit price equal to its marginal cost for all consumers. In addition, the monopolist charges each consumer a fixed fee equal to one-fourth of the maximum amount he is willing to pay for the right to purchase at marginal cost. How much profit will the monopolist earn under these discriminatory two-part tariffs? How much deadweight loss will arise?

3. Continue to consider the setting of question 2, but suppose that price discrimination is not feasible.
It can be shown that the welfare-maximizing nondiscriminatory two-part tariff is the one that raises the per-unit price above marginal cost and reduces the fixed fee below Fpoor just enough to leave the poor consumers willing to purchase some of the product while allowing the firm to earn zero profit.
a. Verify that if the fixed fee is 38.51 and the per-unit price is 21.50, then the poor consumers are just willing to buy the product (because the net surplus they receive from doing so is zero). Calculate the total welfare (consumer surplus plus profit) secured under this nondiscriminatory two-part tariff.

b. Find the per-unit price and the fixed fee in the nondiscriminatory two-part tariff that (i) sets the per-unit price equal to the firm's marginal cost and (ii) sets the fixed fee just high enough to secure zero profit for the firm when (only) the rich consumers purchase the firm's product under this tariff. Verify that the poor consumers are not willing to purchase the firm's product under this tariff. Show that total welfare under this two-part tariff is less than the total welfare generated by the two-part tariff identified in part (a) of this problem.

4. Consider a new two-part tariff with a fixed charge of 54.2 and a per-unit price of 20.50. Demonstrate that Pareto gains can be secured by offering both this tariff and the original tariff considered in problem 3a (with a fixed charge of 38.51 and a per-unit price of 21.50) rather than just the latter tariff. To do so, demonstrate that when both tariffs are offered and all consumers can choose their preferred tariff: (i) poor consumers will choose the original tariff; (ii) rich consumers will choose the new tariff and increase their net surplus by doing so; and (iii) the firm's profit will increase above the level it earns when it offers only the original tariff.

5. Assume that a water distribution monopoly serves two consumer types, industrial and residential. Let PI denote the price that industrial consumers pay for water, and PR denote the price that residential consumers pay for water. The demand for water by industrial consumers is QI(PI) = 30 − PI. The demand for water by residential consumers is QR(PR) = 24 − PR. The only cost that the monopoly incurs is a fixed cost of installing a water pipeline. This cost is $328. Find the Ramsey prices in this setting. Hint: The absolute value of the price elasticity of demand is PI/QI for industrial customers and PR/QR for residential customers.

6. A monopoly supplier of electricity can install capacity at a cost of k per unit. The additional cost of supplying electricity is c per unit, up to capacity. (The maximum amount of electricity the firm can supply is the number of units of capacity it has installed.) The demand for electricity varies by time of day. The (inverse) demand curve during the daily twelve hours of peak demand is P(Q) = a − bQ, where Q denotes quantity, P denotes price, and a > c + k. The (inverse) demand curve during the daily twelve hours of off-peak demand is P(Q) = a − 2bQ.

7. How many units of capacity should be installed to maximize welfare? Hint: Your answer should be an expression that includes the parameters a, b, c, and k.
a. Suppose a = 16, b = 0.08, c = 1, and k = 3. What capacity should be installed and what prices should be set for peak and off-peak electricity consumption in order to maximize welfare?
b. Suppose a = 8, b = 0.08, c = 3, and k = 3. What capacity should be installed and what prices should be set for peak and off-peak electricity consumption in order to maximize welfare?

Notes 1. See William J. Baumol, “On the Proper Cost Tests for Natural Monopoly in a Multiproduct Industry,” American Economic Review 67 (December 1977): 809–822. 2. See, for example, John C. Panzar and Robert D. Willig, “Economies of Scope,” American Economic Review 71 (May 1981): 268–272. 3. This example is from William W. Sharkey, The Theory of Natural Monopoly (New York: Cambridge University Press, 1982). 4. For an in-depth analysis of when a natural monopoly prevails in the multiproduct setting, see William J. Baumol, John C. Panzar, and Robert D. Willig, Contestable Markets and the Theory of Industry Structure (New York: Harcourt Brace Jovanovich, 1982). 5. An additional condition is that the entrant incurs no sunk costs. A sunk cost is a cost that cannot be recovered on leaving the industry. For instance, if firm A procures specialized equipment to operate in an industry, it may not be able to sell the equipment to another firm operating in a different industry if firm A decides to terminate its operations. Consequently, the cost of the specialized equipment would be a sunk (nonrecoverable) cost for firm A. 6. Chapter 4 explains this conclusion in detail. For additional analyses of optimal pricing policies, see Ronald R. Braeutigam, “Optimal Policies for Natural Monopolies,” in Richard Schmalensee and Robert D. Willig, eds., Handbook of Industrial Organization, vol. 2 (Amsterdam: North-Holland, 1989), pp. 1289–1346; and Daniel F. Spulber, Regulation and Markets (Cambridge, MA: MIT Press, 1989). For a more geometrical treatment, see Kenneth E. Train, Optimal Regulation

(Cambridge, MA: MIT Press, 1991). 7. Observe that R is the firm’s average cost of producing Q0 units of output in figure 12.4. 8. Throughout this chapter we make the common assumption that the area under the demand curve measures the value that consumers derive from consumption. This will be the case if the income elasticity of demand is sufficiently close to zero. See Robert D. Willig, “Consumer’s Surplus without Apology,” American Economic Review 66 (September 1976): 589–597. 9. See chapter 4 for a discussion of welfare loss determination. 10. For an extended and more formal analysis of the optimal design of a two-part tariff, see Stephen J. Brown and David S. Sibley, The Theory of Public Utility Pricing (New York: Cambridge University Press, 1986). 11. These tariffs are often referred to as self-selecting tariffs. 12. Frank Ramsey, “A Contribution to the Theory of Taxation,” Economic Journal 37 (March 1927): 47–61. 13. The more general case of interdependent demands involves more complex mathematics and is beyond the scope of this discussion. See Brown and Sibley, Theory of Public Utility Pricing. 14. The two demand curves are assumed to intersect where price equals marginal cost for both services to simplify the graphical exposition. The principles discussed here apply more generally. 15. This $36.1 is determined by equating the firm’s total revenue and total cost, recognizing that Px and Py must be identical, because equal mark-ups above a common marginal cost ($20) are being considered. 16. See Brown and Sibley, Theory of Public Utility Pricing, p. 39, for a formal derivation. 17. The ensuing discussion is drawn from Mark Armstrong, “Competition in Two-Sided Markets,” RAND Journal of Economics 37 (Autumn 2006): 668–691; and Jean-Charles Rochet and Jean Tirole, “Two-Sided Markets: A Progress Report,” RAND Journal of Economics 37 (Autumn 2006): 645–667. 18. Credit card companies also serve two sides of a market: consumers who use credit cards to pay for purchases and the companies that issue credit cards. 19. Recall that r is a nonnegative constant, and ei is the price elasticity of demand for service i. Also recall the corresponding discussion of pricing with a two-sided platform in chapter 9. 20. Electricity usually is delivered to residential customers at a lower voltage than it is delivered to industrial customers, so the two services truly are distinct. 21. Alfred E. Kahn, The Economics of Regulation: Principles and Institutions, vol. 1 (New York: John Wiley & Sons, 1971), p. 151. 22. This exposition reflects Peter O. Steiner, “Peak Loads and Efficient Pricing,” Quarterly Journal of Economics 71 (November 1957): 585–610. 23. In practice, electricity producers that wish to expand their capacity typically will have to do so in large, discrete units, representing the purchase of additional generating units. The assumption that the electricity supplier can expand capacity more “smoothly” allows principles that apply more generally to be explained relatively simply. 24. This assumption is strong because, for example, residential customers often can run appliances like dishwashers at night rather than during the day if it is far less costly to do so. Businesses may have less flexibility in this regard if they do not operate during the nighttime hours.

13 Incentive Regulation

The discussion of regulation to this point has focused on the merits of setting prices that reflect production costs, taking costs as given. In practice, though, a firm's operating costs are endogenous. Through diligent management and innovative effort, a firm often can reduce its operating costs if it is motivated to do so. This chapter explores the role of incentive regulation in motivating a firm to reduce its operating costs and otherwise improve its performance.

Incentive regulation is sometimes viewed as an alternative to rate of return regulation (RORR), which is the traditional method for regulating a natural monopoly. Therefore, to understand the role and nature of incentive regulation, it is useful to begin by reviewing the details of RORR and explaining its drawbacks. We then discuss several forms of incentive regulation. The discussion emphasizes how, in principle, each form of incentive regulation might overcome some of the problems inherent in RORR. We note, though, that every form of regulation has its flaws. Consequently, no single regulatory policy is ideal in all settings. Thus, one important task for a regulator is to understand the wide variety of regulatory policies that might be implemented and to select the best policy (or policies) for the prevailing environment.

Incentive regulation has replaced RORR in many settings. For example, one form of incentive regulation known as price cap regulation has been widely deployed in telecommunications industries throughout the world. Alternative forms of performance-based regulation have been employed in the electricity sector. The variety of regulatory plans that we observe in practice likely reflects in part the fact that every regulatory plan has both strengths and weaknesses, and no single plan is ideal in all settings.

Traditional Rate of Return Regulation

RORR is often described as a form of cost-plus regulation. In other words, under RORR, the regulator determines the regulated firm's operating costs and then sets prices to generate revenues to cover these costs and provide any additional compensation required to ensure that the firm will continue to deliver high-quality service to its customers. In essence, RORR seeks to provide a normal profit (and only a normal profit) for the regulated enterprise.

The manner in which the firm's costs are calculated under RORR is important. Total costs consist of operating expenses and capital costs. Operating expenses include wages and the cost of materials, for example. Capital costs include the return the regulated firm must provide to investors to induce them to provide funding for the firm's infrastructure investments. Regulated firms often must acquire costly, long-lived assets (such as electricity generating plants and the wires that transport electricity from the generating plants to customers) before they can begin to use the assets to serve customers. Consequently, the firms must acquire funds from investors to pay for the assets before employing the assets to generate revenue that

can then be used in part to compensate investors. Therefore, one task the regulator faces is to determine what constitutes reasonable compensation for investors. RORR can be characterized formally by

The expression to the left of the equality in equation 13.1 is the firm's revenue, n is the number of services the regulated firm supplies, pi is the price of the ith service, and qi is the quantity of the ith service that the firm supplies. The expression to the right of the equality in equation 13.1 reflects the firm's estimated capital costs. The quantity B denotes the firm's rate base, which is a measure of the value of the firm's infrastructure investments; s is the allowed rate of return on investment, which is the regulator's assessment of what constitutes reasonable compensation for investors.

When setting a value for s, regulators typically consider the returns that investors tend to receive from investments that entail similar risks. A key risk associated with investment in a regulated utility is that future regulators may not set prices that generate the revenue required to cover operating expenses and provide the anticipated compensation to investors. Current regulators cannot fully dictate the actions of future regulators, so investors inevitably face some risk when they finance the infrastructure investments of regulated utilities. However, regulators typically try to limit this risk, knowing that investors generally will finance a project in return for less generous compensation if they perceive the project to entail little risk.

Regulators employ different methodologies to determine the rate base B. Under the original cost methodology, B is the amount that the firm paid for its plant and equipment less the reduction in the value of these investments due to depreciation. Under the reproduction cost methodology, B is the amount the firm would have to pay currently to reproduce its current plant and equipment. Under the replacement cost methodology, B is the cost the firm would incur to replace its current plant and equipment with corresponding assets that embody the most modern technology. As is apparent from equation 13.1, different methodologies for determining B can all have the same implications for prices and revenues under RORR if the allowed rate of return (s) varies with the adopted rate base valuation methodology. It is the product of s and B that determines authorized revenues and thus the funds that are available to compensate investors after operating expenses are paid. Regulators tend to maintain the same methodology for determining B for very long periods. However, they change s (and thereby change the prices that are charged for the regulated firm's services) periodically as industry conditions change.

Rate Hearings

Under RORR, the prices that a regulated firm charges for its services are determined at rate hearings. A rate hearing is a quasi-judicial process in which evidence is put forth to determine the firm's costs and thus an appropriate level of authorized revenue. A regulatory commission can initiate a rate hearing. More commonly, though, the regulated firm requests a rate hearing in order to secure higher prices in response to rising production costs. At the rate hearing, the firm attempts to document that its costs have risen above the levels estimated at the preceding rate hearing, and so increased revenue is required to cover operating expenses and capital costs. Higher costs can arise from many sources. Inflation can increase input costs, for

example. In addition, increased demand for the firm's services (due to an expanding customer base, for example) can increase both operating expenses and required infrastructure investment.

At a typical rate hearing, the regulatory commission reviews evidence presented by the firm, by customer representatives, and by the commission's staff. The firm often will argue that substantial price increases are required to finance projected large increases in costs. In contrast, customer representatives tend to argue for smaller rate increases or perhaps even price reductions. The commission's staff will sometimes offer its assessment of the arguments presented by other parties and typically will present its own assessment of likely costs and reasonable prices. After reviewing the evidence presented by all parties, the regulatory commission formulates its own estimate of the firm's costs and specifies the revenue that is required to cover the firm's projected operating expenses and provide a reasonable return on its rate base. After determining the authorized revenue, the regulatory commission specifies the prices that the firm can charge for each of its services in order to generate the authorized revenue.

Rate hearings typically take many months to conduct, because the issues addressed at a hearing can be quite complex. Predictions about future costs (and thus required revenues) vary with underlying assumptions about such factors as likely input prices, levels of consumer demand, and prevailing production technologies. Different parties often adopt different assumptions, and regulators must determine which assumptions are most plausible. This determination typically is complex and time-consuming.

Averch-Johnson Effect

RORR has been criticized in part on the grounds that it may induce the regulated firm to employ too much capital (plant and equipment) and too few noncapital inputs (such as labor) and thereby operate with inefficiently high production costs. This potential distortion stems from the manner in which the firm's costs are estimated under RORR. As described above, costs consist of operating expenses and capital costs. Under RORR, the firm is effectively reimbursed for its operating expenses and receives an authorized return on its capital investment. If this authorized rate of return (s) exceeds the minimum rate actually required to induce investors to finance the firm's infrastructure investment, then the firm is effectively making a profit on its capital investments, and so it will favor capital inputs over noncapital inputs in the production process.

To explain this potential drawback to RORR more rigorously, consider the following analysis that reflects the work of Harvey Averch and Leland Johnson.1 Suppose a regulated firm employs two inputs—capital and labor—to produce a single output (electricity, for example). Let Q(K, L) denote the firm's production function, which specifies the maximum amount of output the firm can produce when it employs K units of capital and L units of labor. Also let P denote the price of the firm's product, w denote the unit price of labor, and r denote the unit cost of capital. As above, s denotes the allowed rate of return on capital. In this setting, much as in equation 13.1, RORR can be specified as

P·Q(K, L) − wL ≤ sK.    (13.2)
The firm will choose K and L to maximize its profit,

P·Q(K, L) − wL − rK,
subject to the rate of return constraint in equation 13.2. Let MPL denote the marginal product of labor, which reflects the additional output the firm can produce as it employs more labor, holding constant the amount of capital it uses. Similarly, let MPK denote the marginal

product of capital, which reflects the additional output the firm can produce as it uses more capital, holding constant the amount of labor it employs. Then a standard mathematical technique (known as the Lagrange multiplier method) can be employed to show that the firm's profit-maximizing choice of K and L is characterized by

MPK/(r − α) = MPL/w
or, equivalently,

MPK/MPL = (r − α)/w,    (13.4)
where

α = λ(s − r)/(1 − λ).
The variable λ in equation 13.4 is called a Lagrange multiplier. It is a measure of how constraining the rate of return mandate in equation 13.2 is. More precisely, λ measures the increase in actual profit the firm secures when the allowed profit increases by $1. The value of λ is between 0 and 1. Consequently, α is a positive number when s exceeds r, which will be the case when the regulator authorizes a return on the firm's rate base that exceeds the minimum rate required to induce investors to fund the firm's infrastructure projects. Because this minimum rate r is difficult to determine precisely in practice, a regulator may well set s above r to ensure that the firm can attract the funding required to continue to deliver high-quality service to customers. If the firm were not subject to the RORR constraint, then λ would be zero and α in equation 13.4 would be zero. Then the equation indicates that the (unconstrained) profit-maximizing firm would choose K and L to ensure

MPK/r = MPL/w
or, equivalently,

MPK/MPL = r/w.    (13.5)
In other words, the firm would choose its inputs to ensure that the extra output secured from the last dollar spent on capital is the same as the extra output secured from the last dollar spent on labor. This efficient hiring of inputs serves to minimize the firm's production costs. Equation 13.4 indicates that the firm will not hire inputs efficiently in this manner under RORR. In essence, RORR reduces the effective cost of capital that the firm perceives. The firm must pay r for each unit of capital it employs, but is effectively paid s > r for each unit of capital. Therefore, capital serves to increase the firm's allowed profit in addition to facilitating production. This dual role of capital reduces its perceived cost to the

regulated firm, which induces the firm to employ more than the cost-minimizing amount of capital. This outcome is illustrated in figure 13.1. The curved line in the figure is the isoquant associated with output Q*. Recall that an isoquant depicts the combinations of inputs (K and L) that can be employed to produce the specified output. The two parallel straight line segments in figure 13.1 (MM and NN) are isocost lines. An isocost line identifies combinations of K and L that cost the same amount to hire. The isocost line that is closer to the origin (NN) represents a smaller level of total expenditure for the firm. The slope of each of these isocost lines is r/w, the ratio of the price of capital to the price of labor.
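The distortion depicted in figure 13.1 can also be illustrated numerically. The sketch below assumes a Cobb-Douglas technology Q = sqrt(K·L) and illustrative values for r, w, s, and the multiplier λ; none of these numbers come from the text, so the sketch only demonstrates the direction and cost consequence of the effect.

```python
# Sketch: the input distortion of figure 13.1 under an assumed Cobb-Douglas
# technology Q = sqrt(K*L). The prices (r, w), allowed return s, and the
# Lagrange multiplier lambda are illustrative values, not from the text.
import math

r, w = 10.0, 20.0        # true unit costs of capital and labor
s, lam = 12.0, 0.5       # allowed return on capital and assumed multiplier
alpha = lam * (s - r) / (1 - lam)   # the alpha of equation 13.4; here alpha = 2
Q_target = 100.0

def input_mix(price_of_capital):
    """Inputs satisfying MPK/MPL = price_of_capital/w and sqrt(K*L) = Q_target."""
    K = Q_target * math.sqrt(w / price_of_capital)
    L = Q_target * math.sqrt(price_of_capital / w)
    return K, L

K_eff, L_eff = input_mix(r)            # cost-minimizing mix (point E)
K_aj, L_aj = input_mix(r - alpha)      # mix chosen under RORR (point F)

cost = lambda K, L: r * K + w * L      # costs are always evaluated at true prices
print("Efficient mix:", round(K_eff, 1), round(L_eff, 1), "cost", round(cost(K_eff, L_eff), 1))
print("A-J mix:      ", round(K_aj, 1), round(L_aj, 1), "cost", round(cost(K_aj, L_aj), 1))
# The RORR firm uses more capital and less labor, and its true production cost is higher.
```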

Figure 13.1 Averch-Johnson Effect versus Least-Cost Production

The slope of an isoquant is the ratio of the marginal products of capital and labor (MPK / MPL). Therefore, as equation 13.5 indicates, to produce output Q* at minimum cost, the firm would hire K′ units of capital and L′ units of labor. At point E in figure 13.1, the slopes of the isoquant and the isocost curve are equal, so MPK /MPL = r / w. When the firm operates under RORR, it will choose a different combination of inputs. Because the firm effectively perceives a lower cost of hiring capital, it perceives flatter effective isocost lines like the one labeled TT in figure 13.1. The slope of this isocost line is [r − α]/w. Given the lower perceived cost of capital, the firm will employ K * units of capital and L* units of labor, as indicated by point F in the figure. The increase in the firm’s operating costs resulting from the “overcapitalization” induced by RORR is reflected by the vertical distance between isocost lines MM and NN. The former line captures the firm’s actual costs under RORR (because it passes through point F), whereas the latter line captures the lower costs

that the firm could have incurred in producing Q* if it employed the cost-minimizing levels of capital and labor (at point E).

Prudence Reviews and Cost Disallowances

Regulators attempt to limit overcapitalization in part by conducting prudence reviews. At a prudence review, the regulator tries to assess whether capital investments that the firm has undertaken are serving the best interest of the firm's customers. If the regulator determines that a particular investment was not prudent, then the regulator will decline to include some or all of the cost of the investment in the firm's rate base, thereby reducing the revenue that the firm is entitled to recover from its customers.

Investments in nuclear power plants provide a case in point. During the 1970s, electricity producers anticipated rapid growth in electricity consumption and so began to build large-scale nuclear generating units to meet the projected demand. However, the actual costs of building the nuclear plants often turned out to be far greater than the projected costs. Furthermore, the demand for electricity did not increase at the anticipated rate, and so many of the nuclear plants that had been constructed were not needed to meet the realized demand for electricity. In response, many regulatory commissions declined to place the full costs of the nuclear plants in the utilities' rate bases. For example, consider the following excerpt from an annual report of the California Public Utilities Commission:

In May 1987, the PUC Public Staff Division's Diablo Canyon Team recommended that of the $5.518 billion that PG&E spent before commercial operation of Diablo Canyon Nuclear Power Plant, the utility should only be allowed to collect $1.150 billion in rates.… The Public Staff Division alleges that unreasonable management was to blame for a large part of this cost overrun.2

The prospect of being unable to recover the cost of investments may help to deter overcapitalization. However, it could discourage efficient capital investment if regulators effectively penalize utilities for undertaking investments that, despite seeming appropriate given all that was known at the time of the investment, turned out to be an unwise undertaking in light of information that emerged after the fact.3

Regulatory Lag

RORR has also been criticized on the grounds that it provides the regulated firm with limited incentive to discover innovative ways to reduce its operating costs. Even if the regulator sets the allowed rate of return on capital (s) equal to the firm's actual cost of capital (r), so the firm has no incentive to employ more than the cost-minimizing level of capital, RORR can stifle cost-reducing innovation by operating as a form of cost-plus regulation. When it operates under cost-plus regulation, a firm knows that if it manages to reduce its operating costs, its authorized revenue will be reduced accordingly to ensure that it earns only a normal profit. Consequently, the firm may not devote substantial effort to ensuring that its managers continually search diligently for new ways to reduce operating costs.

This potential drawback to RORR may be alleviated in part by what is referred to as regulatory lag. Recall that rate hearings often take many months to complete, and perhaps even longer to prepare for. Consequently, rate hearings do not occur frequently, so a substantial amount of time usually elapses between rate hearings. Prices do not change during this time period. In particular, authorized revenues are not ratcheted down to reflect cost reductions that the firm achieves. Consequently, the firm can increase the profit it earns between rate hearings by reducing its operating costs. Thus, regulatory lag can provide some incentives for cost reduction under RORR. These incentives are limited, though, because the firm recognizes that once it becomes apparent that the firm can operate more efficiently, the regulator will set prices at all

future rate hearings to compensate the firm only for the lower costs that it has demonstrated can be achieved.

Incentive Regulation

In part to provide enhanced incentives for innovation and cost reduction, regulators have adopted various forms of incentive regulation in many industries.4

Price Cap Regulation

Price cap regulation (PCR) is a form of incentive regulation that has been widely deployed in the telecommunications industry. PCR was suggested by Stephen Littlechild in the early 1980s as a means to regulate British Telecom.5 Under PCR, the prices that the regulated firm is permitted to charge for its services are allowed to rise, on average, at the rate of economywide price inflation, less an offset, for a specified period of time (known as the price cap period). Formally:

%ΔP = I − X,
where %ΔP denotes the rate at which the firm's prices are increasing, I denotes the rate of price inflation in the economy, and X is a productivity offset or X factor. The X factor is an estimate of the extent to which the firm's prices could increase more slowly than other prices in the economy without causing the firm to earn less than a normal profit. To illustrate, suppose the price cap period is four years. (In practice, price cap periods often are four or five years.) Further suppose the X factor is set at 3 percent. In addition, suppose the economywide rate of price inflation turns out to be 2 percent during each of the four years in the price cap period. Then the firm would be required to reduce its prices, on average, by 1 percent (= 3 − 2 percent) each year during the price cap period.

The key feature of PCR is that authorized prices are linked to price increases elsewhere in the economy and not to the realized costs of the regulated firm during the price cap period. Consequently, authorized prices are not ratcheted down during the price cap period as the firm reduces its operating costs. If the firm lowers its costs, it is permitted to retain the entire cost savings in the form of increased profit during the price cap period. In essence, PCR institutionalizes the regulatory lag that provided some incentive for cost reduction under RORR. Furthermore, by implementing a fairly long price cap period, regulators can provide relatively strong incentives for cost reduction under PCR.

Regulators typically employ all available information to set the X factor at the start of each price cap regime. Consequently, the regulated firm knows that if it secures substantial cost reductions during one price cap period, it is likely to face higher X factors (and associated less rapid price increases) in future price cap periods. However, if the current price cap period is sufficiently long, the firm may devote the effort required to reduce its operating costs in order to secure the associated increase in profits for the remainder of the price cap period.

Pricing flexibility

PCR offers another potential advantage over RORR in settings with emerging competition. Under RORR, a comprehensive rate hearing generally is required to authorize any price changes. Because rate hearings

typically take many months to conduct, incumbent suppliers effectively have limited ability to respond to competitive pressures in a timely manner under RORR. Consequently, RORR can promote cream-skimming, whereby new industry suppliers offer only the most profitable services and leave incumbent suppliers to deliver less profitable services. For example, in settings where regulators have established similar prices for all customers across broad geographic regions, (unregulated) competitors may only offer service in densely populated regions, where unit production costs tend to be relatively low. PCR can enable incumbent suppliers to lower some of their prices toward marginal cost quickly in order to meet competitive pressures. By doing so, PCR can help to minimize industry costs by ensuring that customers are served by the least-cost supplier.6

Although pricing flexibility for incumbent suppliers can help ensure that consumers are served by the least-cost industry supplier, excessive pricing flexibility can discourage industry entry. To illustrate, suppose an incumbent supplier faces competition from a more efficient new supplier in one "competitive" geographic region. Further suppose the incumbent supplier is afforded unlimited flexibility to set the prices for its services as long as prices do not increase on average.7 In such a setting, if the incumbent supplier sought to drive its rival out of business, it could do so by setting a price below the rival's cost (and thus below its own cost) in the competitive region. To offset the loss that the incumbent would thereby incur in the competitive region, it could increase the price of its service in noncompetitive regions up to the point where its prices in all regions combined have not increased on average.8

To avoid undesirable outcomes like this one, PCR typically precludes below-cost pricing. Many PCR plans also rule out major changes in the prices of individual services, regardless of whether, on average, price changes comply with the overarching restriction on prices. In addition, PCR plans often distinguish between services for which the incumbent supplier faces no competition and those for which competition is beginning to emerge. The plans then (1) place the "competitive" services in one basket of services; (2) place the noncompetitive services in a distinct basket; and (3) impose a distinct, separate price cap constraint on each basket of services. For example, the inflation-adjusted prices of services in the competitive basket might be permitted to rise, on average, by 3 percent annually, whereas the inflation-adjusted prices of services in the noncompetitive basket might be permitted to increase, on average, by 1 percent annually. Segregating services in this manner and placing separate constraints on each basket of services prevents the incumbent supplier from automatically raising the prices of services for which it faces little competition whenever it reduces the prices of services for which it faces increasing competition. Of course, when competition for some services develops to the point where such competition alone can fully discipline the incumbent supplier, the provision of these services should be deregulated.

Z factors

Many PCR plans allow prices to be revised during the price cap regime to reflect the financial implications of major, unanticipated industry events.
For example, if exceptionally severe weather (such as a tornado) damages the regulated firm's production facilities despite the firm's best efforts to prevent such damage, then prices will be raised to cover the efficient costs of repairing the damage. Price changes of this sort often are accounted for through use of what is called a Z factor. Formally, the firm's prices are permitted to rise at the rate

%ΔP = I − X + Z,
where Z represents the increase in the rate of price escalation required to offset the financial implications of

the major, unanticipated industry event in question. To qualify for a Z factor adjustment, an event typically must (1) be beyond the control of the regulated firm (that is, exogenous); (2) be of sizable financial magnitude; and (3) affect the regulated firm disproportionately, so that its financial impact is not fully reflected in the inflation index in the prevailing price cap formula.

Requirement (1) is crucial. If the regulated firm were compensated for any increase in costs above projected levels, the firm would have no incentive to control its costs. Requirement (2) is designed to conserve regulatory resources by avoiding frequent hearings to assess whether small, unanticipated increases in cost were truly beyond the firm's control. Requirement (3) serves to avoid payments to the firm that constitute double counting. To illustrate, a regulated firm might suggest that it should be permitted to increase its prices to offset the unexpectedly high wages that it, like other employers in the economy, is forced to pay to retain qualified workers. However, economywide wage inflation typically will cause the prices of most goods and services to increase. Therefore, the regulated firm would be authorized to increase its prices more rapidly as the economywide inflation rate increases, which obviates any need for an additional Z factor adjustment.

Z factors can help ensure that the regulated firm is held responsible for its performance on dimensions that it can control and is not held responsible for its performance on dimensions that it cannot control. If the firm is held financially liable for variations in revenue or cost that it cannot control, then shareholders will demand greater compensation in order to assume this risk. Consequently, the firm's cost of capital will increase with no offsetting benefit of motivating the firm to enhance its performance. Z factors can be implemented in the form of automatic adjustments in authorized retail prices as uncontrollable wholesale prices change. For instance, an electricity distribution company that delivers electricity to customers (but does not generate electricity itself) often has limited control over the wholesale price of electricity. Consequently, it can be advisable to permit the distribution company to vary its retail prices in response to substantial changes in the wholesale price of electricity. As we will see in chapter 17, the failure to include such linkage between the retail and wholesale prices of electricity under the PCR plan that was implemented in California in 2000 created havoc in the state's electricity sector.

Earnings variation

Although PCR can provide stronger incentives for cost reduction than can RORR, PCR is not without its drawbacks. PCR can admit very high or very low earnings for the regulated firm. Because prices are not adjusted to reflect realized costs during the price cap period, the firm may experience very low earnings if its costs rise or its revenues decline far more than anticipated. Alternatively, the firm may enjoy very high earnings if its costs decline or its revenues increase much more rapidly than expected. This turned out to be the case in the United Kingdom, where PCR was imposed on British Telecom beginning in 1984.9 During the initial five-year period of PCR, British Telecom secured rates of return on capital in excess of 20 percent.
In response, the industry regulator (Oftel) increased the X factor from 3 percent to 4.5 percent during the second price cap period, which was scheduled to last for four years. Even under this more challenging price cap regime, British Telecom’s rate of return continued to exceed 20 percent. The persistent high earnings led Oftel to increase the X factor to 6.25 percent even before the end of the second price cap period. This experience demonstrates that PCR can admit substantial variation in earnings. It also illustrates that if the firm’s earnings substantially exceed expected levels under PCR, regulators may feel compelled to alter the terms of PCR prematurely to limit criticisms that their policies favor the regulated firm unduly.
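To make the price cap arithmetic concrete, the following minimal sketch (in Python) applies the I − X + Z adjustment described above to a price index. The numerical values are illustrative assumptions, not figures from the chapter.

```python
def allowed_price_change(inflation, x_factor, z_factor=0.0):
    """Allowed rate of change in the firm's average price level under
    price cap regulation: inflation (I) minus the X factor, plus any
    Z factor adjustment for major exogenous events."""
    return inflation - x_factor + z_factor


def updated_price_cap(current_cap, inflation, x_factor, z_factor=0.0):
    """Apply one period's adjustment to the prevailing price cap index."""
    return current_cap * (1 + allowed_price_change(inflation, x_factor, z_factor))


# Illustrative values: 3 percent inflation, a 4.5 percent X factor, and a
# 1 percent Z factor reflecting an exogenous cost shock such as storm damage.
cap = updated_price_cap(100.0, inflation=0.03, x_factor=0.045, z_factor=0.01)
print(round(cap, 2))  # 99.5: average prices may still fall despite the Z factor
```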

Earnings Sharing PCR can be modified to limit undue variation in the regulated firm’s earnings. Earnings sharing regulation (ESR) proceeds much like PCR with the exception that explicit limits on realized earnings are announced in advance. Extremely high and extremely low earnings can be precluded under ESR. In addition, incremental earnings above and below specified thresholds can be shared by the regulated firm and its customers. The California Public Utilities Commission (CPUC) implemented ESR in the California telecommunications sector in 1989.10 The CPUC established an X factor of 4.5 percent for the telecommunications supplier, Pacific Bell. The CPUC also imposed the earnings sharing requirements illustrated in figure 13.2.

Figure 13.2 California Public Utilities Commission’s Earnings Sharing Plan
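Because the figure depicts a piecewise-linear schedule, the sharing rule is easy to state computationally. The sketch below restates in Python the thresholds described in the text that follows (no sharing between 8.25 and 13 percent, equal sharing between 13 and 16.5 percent, and a 14.75 percent ceiling); it is an illustration of the plan's structure, not an official formula.

```python
def postsharing_return(presharing_return):
    """Map a presharing rate of return on equity (in percent) into the
    postsharing return under the CPUC's earnings sharing plan for
    Pacific Bell, as described in the discussion of figure 13.2."""
    r = presharing_return
    if r <= 8.25:
        # Approximates the plan's floor: after two successive years below
        # 8.25 percent, Pacific Bell may request a reduction in the X factor.
        return 8.25
    if r <= 13.0:
        return r                        # no sharing in the 8.25-13 percent band
    if r <= 16.5:
        return 13.0 + 0.5 * (r - 13.0)  # incremental earnings shared 50/50
    return 14.75                        # postsharing return is capped


for r in (7.0, 10.0, 15.0, 20.0):
    print(r, "->", postsharing_return(r))   # 8.25, 10.0, 14.0, 14.75
```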

The rate of return on equity that Pacific Bell would have achieved in the absence of any earnings sharing is specified on the horizontal axis in figure 13.2. The corresponding rate of return after sharing is specified on the vertical axis. In the absence of earnings sharing, pre- and postsharing earnings would be identical, and so would be represented by the dashed 45° line in figure 13.2. The actual relationship between Pacific Bell's pre- and postsharing earnings under the CPUC's ESR plan is depicted by the thick, solid line in figure 13.2. This line reflects the following considerations. No earnings are shared if Pacific Bell's realized rate of return on equity is between 8.25 percent and 13 percent. If realized presharing earnings are between 13 percent and 16.5 percent, the incremental earnings above 13 percent are shared equally by Pacific Bell and its customers. (Therefore, the slope of line segment BD is 0.5 in figure 13.2.) The maximum postsharing return that Pacific Bell can secure is 14.75 percent. If Pacific Bell's realized rate of return falls below 8.25 percent for two successive years, Pacific Bell can request a reduction in the X factor. This stipulation is approximated in figure 13.2 by the solid horizontal line segment at a postsharing return of 8.25 percent.

In principle, the share of earnings to which customers are entitled under ESR can be distributed in any one of several ways. Consumers might receive a credit on their monthly bill, for example. Alternatively, some or all prices might be reduced (below the levels dictated by the X factor) to reduce the firm's earnings by the amount owed to customers. Another possibility is to require the firm to modernize its network or expand its network into sparsely populated regions that are not profitable to serve.

Like all regulatory plans, ESR has its advantages and disadvantages. The primary advantage of ESR relative to PCR is that ESR can avoid particularly high and particularly low earnings, both of which can be problematic for regulators. An important disadvantage of ESR is that it can dull incentives for major cost reductions. The regulated firm recognizes that once its earnings rise to the level where incremental earnings are shared with customers, it will only be permitted to retain a fraction of any cost reductions it achieves. Consequently, incentives for cost reduction are less pronounced under ESR than under PCR. In essence, ESR can be viewed as somewhat of a middle ground between RORR and PCR.

PCR and ESR have both been employed in the U.S. telecommunications industry. State regulators (like the CPUC) oversee the intrastate operations of telecommunications suppliers, and the Federal Communications Commission (FCC) regulates interstate operations. Table 13.1 reviews the adoption of RORR, PCR, and ESR by state regulators between 1985 and 2007. Before 1985, all state regulators employed RORR. By 1986, a few state regulators began to experiment with alternatives to RORR. The most popular alternative in the 1980s was known as a rate case moratorium (RCM). As its name implies, an RCM is a temporary suspension of rate hearings,11 so prices are not altered as the firm's operating costs change. In a sense, an RCM resembles PCR with an X factor set equal to the realized rate of inflation (I). However, the firm typically had no authority to change the price of any service under an RCM even if, on average, prices did not change. Furthermore, the length of time for which an RCM would be in effect often was not specified. Consequently, the firm could not be certain that it would be permitted to retain the cost savings it achieved for any significant period of time.

Table 13.1 Number of U.S. States Employing the Identified Form of Regulation, various years

Year   Rate of Return   Earnings Sharing   Rate Case    Price Cap
       Regulation       Regulation         Moratorium   Regulation
1985        50                0                 0            0
1987        36                3                10            0
1990        23               14                 9            1
1993        17               22                 5            3
1995        18               17                 3            9
1998        13                2                 3           30
2000         7                1                 1           39
2003         6                0                 0           40
2007         3                0                 0           33

Source: David E. M. Sappington and Dennis L. Weisman, “Price Cap Regulation: What Have We Learned from Twenty-Five Years of Experience in the Telecommunications Industry?” Journal of Regulatory Economics 38 (December 2010): 227–257.

RCMs were adopted in part because the pronounced inflation of the 1970s had begun to subside during the 1980s. Consequently, the rate at which the firm’s operating costs were rising unavoidably declined. In addition, due to rapid technological progress in the computer industry in the 1980s, the cost of telecommunications switching equipment was declining. Consequently, telephone companies often were able to operate profitably without raising prices for their services.

ESR was first adopted in the U.S. telecommunications industry in 1987. ESR was viewed as a moderate departure from RORR that, relative to PCR, provided safeguards against extremely high and low earnings. ESR had become quite popular by the early 1990s.

During the 1990s, regulators became more comfortable with the pricing flexibility and earnings variation that were admitted by ESR. The experience with ESR also improved regulators' ability to predict the earnings that regulated telecommunications suppliers could secure when they were afforded some flexibility to alter prices for their services in the face of increasing competition from alternative suppliers of telecommunications services. As industry competition continued to increase in the 1990s, many state regulators began to implement PCR. (The Telecommunications Act of 1996 encouraged substantial industry competition.) By the early 2000s, most state regulators had replaced RORR and ESR with PCR. In 2003, 80 percent of state regulators were employing PCR in their telecommunications industry.

After 2003, PCR began to be replaced by more widespread deregulation of telecommunications services. In the face of strengthening competition from cable companies and suppliers of internet telephony and wireless communication, many state regulators have largely deregulated the prices of most intrastate retail telecommunications services. Given the unavoidable imperfections of regulation, deregulation is appropriate whenever competition develops to the point where it can adequately discipline incumbent suppliers.

Service quality

One additional potential drawback to PCR is that it can reduce a firm's incentive to deliver high-quality service to its customers. Recall that PCR can provide strong incentives for cost reduction. One way to reduce costs is to reduce service quality. For example, an electricity supplier might reduce its short-run operating costs by limiting the maintenance and routine servicing of its plants, or by reducing its efforts to keep all vegetation away from transmission lines. Reduced maintenance of all sorts can increase the frequency of service interruptions and thereby reduce the quality of service delivered to customers.

PCR plans can be modified to limit undue reductions in service quality. To illustrate, PCR plans can specify the level of quality the regulated firm is expected to deliver on each major dimension of service quality and penalize the firm if it fails to deliver the identified levels of quality. The penalty can take the form of a higher X factor if realized quality falls sufficiently far below specified quality targets. Of course, it can be difficult to identify in complete detail and measure accurately all relevant dimensions of service quality. Rather than attempt to do so, regulators can simply threaten to terminate PCR and return to RORR if a substantial deterioration in service quality arises under PCR. Credible threats of this sort can limit a firm's incentive to allow service quality to decline under PCR. Perhaps in part for this reason, empirical studies generally do not find systematic reductions in service quality under PCR.12

Performance Based Regulation in the Electricity Sector

PCR has not achieved the same popularity in the electricity sector that it achieved in the U.S. telecommunications sector in the early 2000s.13 Yet various alternatives to RORR—often referred to as performance based regulation (PBR)—have been adopted in the electricity sector.
Many PBR plans identify particularly important dimensions of service quality and link financial rewards and penalties to realized performance on the identified dimensions. For example, service interruptions often are identified as a particular problem, and electricity suppliers are rewarded for reducing interruptions below specified levels and penalized if interruptions rise above specified levels.14 Targeted reward structures of this type can help motivate regulated suppliers to focus their attention on particularly important dimensions of service quality.
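As a purely hypothetical illustration of such a targeted reward structure (the target level and penalty rate below are assumptions for exposition, not features of any actual PBR plan), a regulator might compute the system average interruption duration index (SAIDI) defined in note 14 and pay or charge the utility in proportion to its deviation from a target:

```python
def saidi(total_customer_outage_minutes, num_customers):
    """System average interruption duration index: total outage time
    experienced by all customers divided by the number of customers."""
    return total_customer_outage_minutes / num_customers


def reliability_incentive(realized_saidi, target_saidi, rate_per_minute):
    """Reward (positive) or penalty (negative) for each customer-minute
    by which realized SAIDI beats or misses the target; illustrative only."""
    return (target_saidi - realized_saidi) * rate_per_minute


realized = saidi(total_customer_outage_minutes=9_000_000, num_customers=100_000)
print(realized)                                             # 90 minutes per customer
print(reliability_incentive(realized, target_saidi=100, rate_per_minute=50_000))
# 500000: a positive payment because realized SAIDI beat the 100-minute target
```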

However, they also can encourage a firm to focus excessive attention on the specified performance dimensions and devote insufficient attention to other important performance dimensions that are difficult to measure (such as customer service).

The Office of Electricity Regulation has introduced a more comprehensive PBR plan for electricity transmission and distribution companies in the United Kingdom. The plan is known as RIIO, for "Revenue = Incentives + Innovation + Outputs."15 The plan resembles PCR in that it specifies a fairly long period of time (eight years) for which the plan will be in force. RIIO also resembles PCR by linking the revenue the electricity supplier receives to inflation. Revenue also is linked to the number of customers the firm serves. The Office of Electricity Regulation estimates the costs an electricity supplier will incur during the upcoming eight-year period if it operates efficiently.16 Actual costs are then measured throughout the period. The firm is permitted to keep half of any cost savings it achieves relative to its projected costs. The firm is held responsible for half of any corresponding cost overruns. Thus, RIIO operates much like ESR. RIIO also allows the supplier to request changes to plan details if unanticipated industry developments arise.17

In addition to providing general incentives for efficient operation, RIIO includes specific rewards and penalties for particularly important performance dimensions, including network reliability and the environmental impact and safety of the firm's operations. RIIO also provides funding for research and development. All regulated suppliers are eligible for some funding, provided they agree to share the findings of their research with other suppliers. RIIO offers additional funding to those suppliers whose proposed research and development projects appear to be most promising.

RIIO also provides incentives for the regulated suppliers to collaborate with customer representatives when developing proposals for future eight-year RIIO plans. Plans that are presented to the Office of Electricity Regulation with the explicit endorsement of key customer representatives are promised more rapid regulatory review than plans that lack this endorsement. This feature of RIIO is an attempt to streamline the regulatory review process and promote industry consensus about the best way to meet the needs of all industry participants.18

Regulatory Options

The discussion to this point has emphasized the design of a single regulatory plan that best suits the needs of all industry participants. Sometimes consumers can be better served if the regulated supplier is afforded a choice among regulatory plans. This is so when, as is often true in practice, the regulated firm is better informed about prevailing and likely future industry conditions than is the regulator.

To explain the value of allowing the regulated firm to choose one plan from a carefully designed set of regulatory plans, consider the following setting.19 A regulator would like to replace RORR with PCR in order to motivate the regulated firm to reduce its operating costs substantially. However, the regulator is very unsure about the firm's ability to reduce its costs. This ability may be limited, in which case the firm can only operate profitably if the prices it charges for its services are permitted to increase, on average, at the rate of inflation.
It is also possible, though, that the firm could reduce its costs so dramatically that it could operate profitably even if its inflation-adjusted prices declined by 6 percent annually. In such a setting, if the regulator wished to be nearly certain to avoid financial distress for the firm under PCR, she would have to set an X factor close to zero. The regulator may be able to better protect consumers by affording the firm a choice between two regulatory plans. One plan might be RORR, for example. The other plan might be PCR with an X factor of 4

percent. Faced with this choice, the firm that knows it can reduce its costs substantially will choose the PCR plan, even though this plan requires substantial reductions in inflation-adjusted prices. In this event, the PCR plan that is implemented will be more favorable than the plan (with an X factor of zero) that the regulator would have implemented to avoid inflicting financial distress on the firm if she were not permitted to offer the firm a choice among regulatory plans. The drawback to offering the firm this choice among plans is that the firm will choose to operate under RORR if it knows that it cannot secure a normal profit under PCR with a 4 percent X factor. However, consumers can be better served by RORR than by PCR that imposes financial distress on the firm, with the associated pressures to reduce service quality and potentially even suspend operations. Thus, while it is important to consider the best single regulatory plan to implement in any given setting, it is also useful to keep in mind that consumers can sometimes be best served by affording the regulated firm a choice among a set of regulatory options.

The FCC has afforded the regional Bell operating companies (RBOCs) a choice among regulatory plans. Each RBOC provides long-distance telephone companies with access to its network in order to complete long-distance telephone calls. Each RBOC serves a different geographic region in the United States. The FCC regulates the prices that the RBOCs charge for this access service. In the mid-1990s, the FCC allowed each RBOC to choose its preferred regulatory plan from among the following three. The first plan was a pure PCR plan (with no earnings sharing) with an X factor of 5.3 percent. The second plan entailed a 4 percent X factor coupled with limited earnings sharing and a maximum postsharing rate of return of 12.25 percent. The third plan coupled an intermediate 4.7 percent X factor with expanded earnings sharing that admitted a maximum rate of return of 13.25 percent. Thus, an RBOC could secure increased earnings potential by agreeing to reduce its access prices more rapidly. Most of the RBOCs decided to avoid any explicit limit on their earnings by selecting the pure PCR plan.

Regulatory options of this sort can be particularly valuable when the regulator is overseeing the operations of multiple firms that operate in different geographic regions. In this event, different firms that face different industry conditions can each choose a plan that is well suited to the prevailing environment. But even when a regulator oversees the operations of a single firm, regulatory options can serve consumers well by inducing the firm to use its superior knowledge of the environment to select the plan that will best serve the firm and its customers.20

Yardstick Regulation

Regulators can employ an alternative form of incentive regulation when they oversee multiple firms that operate in distinct geographic regions. Such oversight often arises in the water industry, for example, where distinct firms operate as monopoly suppliers of water in distinct geographic regions. In such settings, the regulator can employ the performance of one firm as an indicator of the performance that other firms should be able to achieve. For instance, the regulator might allow each firm to charge a price equal to the average cost of the other firms it oversees. By doing so, the regulator would avoid linking a firm's authorized price to the firm's own costs.
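A minimal sketch of this averaging rule (this is only the simple version described in the preceding sentence, not the formal mechanism analyzed in the work cited in note 21): each firm's allowed price equals the average reported unit cost of the other firms the regulator oversees, so no firm's price depends on its own cost.

```python
def yardstick_prices(unit_costs):
    """Set each regulated firm's allowed price equal to the average unit
    cost of the *other* firms, severing the link between a firm's price
    and its own reported cost."""
    total = sum(unit_costs)
    n = len(unit_costs)
    return [(total - own_cost) / (n - 1) for own_cost in unit_costs]


# Three regional monopolists with different (hypothetical) unit costs.
costs = [10.0, 12.0, 15.0]
print(yardstick_prices(costs))  # [13.5, 12.5, 11.0]
```

Note that the highest-cost firm's allowed price (11.0) lies below its own cost (15.0), which previews the hardship problem discussed next when cost differences are beyond the firms' control.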
Therefore, a regulatory policy of this sort could provide strong incentives for all firms to operate efficiently.21 Although such yardstick (or benchmark) regulation has considerable intuitive appeal, it can encounter serious problems in practice. Firms seldom operate under identical circumstances. Consequently, holding each firm subject to the standards set by other firms can impose financial hardship on firms that, for reasons beyond their control, face particularly high operating costs. For example, one water distribution network

may be older than other networks, and so may unavoidably experience more water leakage. Alternatively, the ground through which one water company must drill to access subterranean water may be particularly rocky and difficult to penetrate. In addition, customer density may be lower in one firm's service territory than in other territories, so the firm must lay more transmission pipe than other firms do to deliver water to the same number of customers.

There are ways to try to control for the cost implications of such differences among firms. However, these controls are seldom perfect. Consequently, benchmark regulation typically does not provide a simple resolution to difficult problems that regulators face when attempting to motivate suppliers to operate efficiently and to deliver high-quality services to customers at low prices. However, the performance of similarly situated firms can sometimes provide useful information about the level of performance to which another firm can reasonably be held, which explains the use of yardstick regulation in certain settings, such as the UK water industry.22

Summary

This chapter has reviewed the key elements of rate of return regulation (RORR) and discussed some of its potential drawbacks. We have also reviewed several possible alternatives to RORR—including price cap regulation (PCR), earnings sharing regulation, and yardstick regulation—and explored their primary advantages and disadvantages. In addition, we noted that a regulator can sometimes serve consumers well by affording the regulated firm a choice among a carefully designed set of regulatory options. By doing so, the regulator can induce the regulated firm to use its privileged information to choose a regulatory policy that best suits the prevailing environment.

No regulatory plan is ideal in all respects, and the best choice of regulatory plan varies with the prevailing environment and regulatory goals. RORR may perform well if the primary goal is to attract capital to the regulated industry, whereas PCR may be a preferable policy if the key objective is to reduce industry operating costs. Intermediate forms of regulation like earnings sharing regulation (ESR) may be preferable if the objectives of capital attraction and cost reduction are deemed to be of comparable importance.

Of course, every regulatory policy is at best an imperfect substitute for competition. Therefore, regulators often best serve consumers by encouraging competition from efficient suppliers. As we will see in the ensuing chapters, the widespread deregulation that was introduced in several industries during the 1980s secured substantial welfare gains.

Questions and Problems

1. Regulated firms often state that they will be unable to attract capital to finance required infrastructure investment if their allowed rates of return are too low. Does this imply that regulators cannot use the allowed rate of return as a means to induce desired levels of performance: raising the rate for good performance and lowering the rate for poor performance? Defend your answer.

2. The Edison Electric Company produces electricity using capital and labor. The (inverse) demand curve for electricity is P(Q) = 100 − Q, where P denotes price, and Q denotes quantity. If Edison were unregulated, it would produce with a constant average cost and marginal cost equal to 20. Under Averch-Johnson (AJ) regulation, the utility commission sets the allowed unit cost of capital above its actual cost.
Consequently, the regulated firm employs more than the cost-minimizing level of capital and so operates with an average and marginal cost of 24. Edison Electric charges a price of 26 for electricity under AJ regulation.

a. Find the profit-maximizing price that Edison Electric would set if it were not regulated. Calculate the profit that the company would earn in this case. Also calculate the level of consumer surplus and welfare (the sum of consumer surplus and profit) that would arise in the absence of regulation. Hint: Edison Electric's marginal revenue is 100 − 2Q when it produces Q units of output.

b. Calculate the profit that Edison Electric earns under AJ regulation. Also calculate the level of consumer surplus and welfare that arise under AJ regulation. Prove that, despite its drawbacks, AJ regulation increases welfare above the level that would arise in the absence of regulation. Welfare is maximized when output is produced at minimum cost and price is equal to marginal cost. What are the welfare-maximizing (average and marginal) cost, price, and output in the present setting? Draw a figure that shows the two types of losses that arise under AJ regulation relative to the welfare-maximizing outcome in the present setting.

3. The president of a regulated electricity company argues that residential electric rates need to be increased relative to industrial rates. His reason is that the rate of return that the company earns on its assets is higher for its industrial customers than for its residential customers. Is this a good reason? Hint: The company employs the same assets to serve residential and business customers.

4. Two electricity transmission and distribution companies operate in country X. Company N serves the northern portion of the country, and company S serves the southern portion of the country. Historically, country X has always employed RORR to regulate the activities of company N and company S. The regulator in country X seeks your advice on whether to continue to employ RORR in the electricity sector or adopt a different form of regulation. The alternative forms of regulation initially under consideration are PCR and yardstick regulation.

a. What questions would you ask the regulator when preparing your recommendation? What answers would lead you to recommend the adoption of PCR? Of yardstick regulation?

b. Identify conditions under which you would be inclined to recommend a different alternative to RORR. What would that alternative be? Why would it be preferable to RORR, PCR, and yardstick regulation?

5. How would you illustrate the relationship between presharing earnings and postsharing earnings in figure 13.2 if the regulated firm operated under RORR, where the authorized rate of return is 12 percent?

6. To qualify for a Z factor adjustment under PCR, both the event in question and the financial ramification of the event should be beyond the control of the regulated firm. Explain this conclusion. In doing so, provide an example of an event that typically is beyond the control of a regulated firm, but the financial ramification of the event is within the firm's control.

7. Suppose a newspaper editorial criticizes a public utility commission for "selling out" to the regulated firm by allowing the firm to choose between RORR and PCR. Would you agree with the editorial? Why or why not?

Notes

1. Harvey Averch and Leland L. Johnson, "Behavior of the Firm under Regulatory Constraint," American Economic Review 52 (December 1962): 1052–1069.

2. State of California, Public Utilities Commission Annual Report 1986–1987, p. 13. Later, a negotiated agreement led to a disallowance of "only" about $2 billion, and the company had the opportunity to lower the disallowance further by good performance. See Yeon-Koo Che and Geoffrey Rothwell, "Performance-Based Pricing for Nuclear Power Plants," Energy Journal 16 (1995): 57–77.

3. Evidence suggests that the substantial disallowances of investments in nuclear plants that were observed in the electricity sector in the 1980s seem to have primarily penalized electricity suppliers for avoidable cost overruns and did not represent regulatory opportunism. See Thomas Lyon and John Mayo, "Regulatory Opportunism in Investment Behavior: Evidence from the U.S. Electric Utility Industry," RAND Journal of Economics 36 (Autumn 2005): 628–644.

4. For reviews of the theory and practice of incentive regulation, see, for example, David E. M. Sappington, "Price Regulation," in Martin E. Cave, Sumit Majumdar, and Ingo Vogelsang, eds., Handbook of Telecommunications, vol. 1, Structure, Regulation and Competition (Amsterdam: Elsevier, 2002), pp. 225–293; and Ross C. Hemphill, Mark E. Meitzen, and Philip E. Schoech, "Incentive Regulation in Network Industries: Experience and Prospects in the U.S. Telecommunications, Electricity, and Natural Gas Industries," Review of Network Economics 2 (December 2003): 316–337.

5. See, for example, Stephen Littlechild, Regulation of British Telecommunications' Profitability, Report to the Secretary of State, UK Department of Industry (London: February 1983); and Stephen Littlechild, "Reflections on Incentive Regulation," Review of Network Economics 2 (December 2003): 289–315.

6. If regulation mandates a price for the incumbent supplier's service that is well above the firm's marginal cost, then a less efficient competitor may be able to profitably serve customers by setting a price above its own cost (which exceeds the incumbent's cost) but below the incumbent's price.

7. This restriction can be viewed as PCR where the X factor is set equal to the rate of inflation (I).

8. Mark Armstrong and John Vickers, "Price Discrimination, Competition and Regulation," Journal of Industrial Economics 41 (December 1993): 335–359.

9. This discussion of PCR in the United Kingdom is drawn from Mark Armstrong, Simon Cowan, and John Vickers, Regulatory Reform: Economic Analysis and British Experience (Cambridge, MA: MIT Press, 1994).

10. See CPUC, In the Matter of Alternative Regulatory Frameworks for Local Exchange Carriers, Decision No. 89-10-031 (California: October 12, 1989).

11. Recall that a rate hearing is a quasi-judicial process in which evidence is put forth to determine the firm's costs and thus an appropriate level of authorized revenue.

12. See, for example, David Sappington, "The Effects of Incentive Regulation on Retail Telephone Service Quality in the United States," Review of Network Economics 2 (December 2003): 355–375; Chunrong Ai and David Sappington, "Reviewing the Impact of Incentive Regulation on U.S. Telephone Service Quality," Utilities Policy 13 (September 2005): 201–210; Tooraj Jamasb and Michael Pollitt, "Incentive Regulation of Electricity Distribution Networks: Lessons of Experience from Britain," Energy Policy 35 (December 2007): 6163–6187; and Anna Ter-Martirosyan and John Kwoka, "Incentive Regulation, Service Quality, and Standards in U.S. Electricity Distribution," Journal of Regulatory Economics 38 (December 2010): 258–273.

13. For a comparison of the adoption of PCR in the telecommunications and electricity sectors, see David E. M. Sappington and Dennis L. Weisman, "The Disparate Adoption of Price Cap Regulation in the U.S. Telecommunications and Electricity Sectors," Journal of Regulatory Economics 49 (April 2016): 250–264.

14. The system average interruption duration index (SAIDI) is a common measure of service interruption. SAIDI is the ratio of the total outage time experienced by all customers to the number of customers.

15. For a detailed description of RIIO, see Peter Fox-Penner, Dan Harris, and Serena Hesmondhalgh, "A Trip to RIIO in Your Future? Great Britain's Latest Innovation in Grid Regulation," Public Utilities Fortnightly 151 (October 2013): 60–65.

16. Estimated costs can differ across suppliers. Each supplier serves a distinct geographic region, and efficient operating costs can vary across regions due to differences in terrain and population density, for example.

17. This provision of RIIO parallels the use of Z factors in PCR plans.

18. For further discussion and analysis of expanded stakeholder involvement in the regulatory process, see the following works of Stephen Littlechild: "Regulation, Over-Regulation and Some Alternative Approaches," European Review of Energy Markets 9 (2009): 153–159; "Stipulated Settlements, the Consumer Advocate and Utility Regulation in Florida," Journal of Regulatory Economics 35 (February 2009): 96–109; "The Bird in Hand: Stipulated Settlements in Florida Electricity Regulation," Utilities Policy 17 (September–December 2009): 276–287; and "Regulation and Customer Engagement," Economics of Energy and Environmental Policy 1 (December 2011): 53–67.

19. See David E. M. Sappington and Dennis L. Weisman, Designing Incentive Regulation for the Telecommunications Industry (Cambridge, MA: MIT Press, 1996), chapter 6.

20. For a review of the literature that analyzes the design of regulatory policy when regulated firms are better informed than regulators about prevailing industry conditions, see Mark Armstrong and David Sappington, "Recent Developments in the Theory of Regulation," in Mark Armstrong and Robert Porter, eds., The Handbook of Industrial Organization, vol. 3 (Amsterdam: Elsevier Science, 2007), pp. 1557–1700.

21. For a formal development of this conclusion, see Andrei Shleifer, "A Theory of Yardstick Competition," RAND Journal of Economics 16 (Autumn 1985): 319–327.

22. For further discussion of the design and implementation of yardstick regulation in the water sector, see Sanford Berg, Water Utility Benchmarking: Measurement, Methodologies, and Performance Incentives (London: IWA Publishing, 2010).

14 Dynamic Issues in Natural Monopoly Regulation: Telecommunications

The discussion in chapter 13 focused on the design of regulatory policy in settings where it is known that the industry is characterized by natural monopoly. This chapter considers the realistic possibility that an industry that was once a natural monopoly may transition to an industry in which multiple firms, each operating at minimum efficient scale, can serve customers.

Industry conditions can change substantially over time. Consumer demand can change as preferences and income vary and as substitute products become available. Industry supply can change as input costs vary and as new production technologies emerge. Regulators should adapt their policies as these changes arise. Otherwise, policies that served consumers well in the past may fail to do so in the present. Two types of policy adaptation are of particular importance. First, welfare-maximizing prices typically change as prevailing demand and cost functions vary, so regulators should adjust authorized prices. Second, changes in demand and cost may be so pronounced that an industry that was once a natural monopoly may now be ripe for competition. In such settings, regulators should determine whether competition alone is sufficient to protect consumers or whether some residual regulation is required. When residual regulation is warranted, the details of appropriate regulatory policy should be specified.

This chapter explores potential causes of substantial industry transformation and assesses the effects of different regulatory policies when such transformation is thought to have occurred. We apply these concepts to events that have taken place in the interstate telecommunications market since World War II. Before undertaking this investigation, we briefly review the conditions under which monopoly regulation is socially desirable. We begin by resuming our analysis of natural monopoly.

Basis for Natural Monopoly Regulation

A natural monopoly exists at output Q0 if the total cost of producing Q0 is minimized when a single firm produces this output. In this event, the cost function C(Q) is said to be subadditive at Q0. For example, if a cost function is subadditive at Q0, then industry costs are lower when a single firm produces output Q0 than when each of two firms produces half of Q0. Recall from chapter 10 that when a firm produces a single product, the firm's cost function is subadditive for all output levels if the firm's average cost of production always declines as output increases. (See figure 10.1.) In practice, though, a firm's average cost may decline as output increases for relatively small output levels and increase with output for higher output levels. In the presence of such a U-shaped average cost curve, knowledge of industry demand is required to determine whether the firm's cost function is subadditive at the welfare-maximizing level of output.

Consider the U-shaped average cost curve AC(Q) in figure 14.1. Scale economies are exhausted at

the quantity at which average cost reaches its minimum. If the market demand curve is D(P), this industry is a natural monopoly, because D(P) intersects AC(Q) at Q* and average cost is declining for all output levels below Q*. If the regulators were restricted to using linear pricing (see chapter 12 for details), the optimal policy would be to allow only one firm to operate and to set a price equal to P* (that is, to implement average cost pricing). This regulatory policy maximizes welfare for two reasons. First, industry cost is minimized because a single firm supplies the entire industry output. Second, P* is the price that maximizes welfare while ensuring nonnegative profit for the producer.
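A numerical sketch may help fix ideas. The parameters below are illustrative assumptions, not values taken from figure 14.1: average cost is U-shaped, demand is linear, and the code searches for the lowest price at which the sole supplier breaks even, which is the average cost price P* described above.

```python
def average_cost(q, fixed_cost=100.0, c=2.0, d=0.01):
    """Illustrative U-shaped average cost: FC/Q + c + d*Q."""
    return fixed_cost / q + c + d * q


def demand(p, a=120.0, b=10.0):
    """Illustrative linear demand: Q = a - b*P (zero if price is too high)."""
    return max(a - b * p, 0.0)


def average_cost_price(step=0.0001, p_max=12.0):
    """Scan prices from low to high and return the first price at which the
    single supplier at least breaks even, i.e. P >= AC(D(P))."""
    p = step
    while p < p_max:
        q = demand(p)
        if q > 0 and p >= average_cost(q):
            return p
        p += step
    return None


p_star = average_cost_price()
print(round(p_star, 3), round(demand(p_star), 1))   # roughly 4.053 and 79.5
# The minimum efficient scale for these parameters is (FC/d)**0.5 = 100,
# which exceeds the output demanded at P*, so average cost is still
# declining at that output and the industry is a natural monopoly.
```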

Figure 14.1 Natural Monopoly

To understand the importance of the role of market demand in determining the proper regulatory policy, consider a setting where the average cost function in figure 14.1 prevails but the demand curve is now the larger demand curve depicted in the figure. The welfare-maximizing price in this setting is the price equal to the minimum level of average cost. At this price, market demand is three times the minimum efficient scale, so three firms, each operating at the minimum efficient scale, could serve industry demand. Under such circumstances, competition among three firms might suffice to secure a market price near minimum average cost, in which case industry regulation might not be necessary.

Monopoly regulation is generally thought to be appropriate when the firm's minimum efficient scale (that is, the smallest level of output at which average cost is minimized) is approximately equal to or larger than market demand at the welfare-maximizing price. In figure 14.1, when the demand curve is D(P), the firm's minimum efficient scale exceeds market demand (at a price equal to average cost), Q*.

Consequently, regulated monopoly may be appropriate in this industry. In contrast, when the larger demand curve prevails, market demand at the price equal to minimum average cost is three times the efficient firm size. In this situation, industry deregulation may well be appropriate.

In this example, the welfare-maximizing output is an integer multiple of the minimum efficient scale. Specifically, when demand is given by the larger demand curve, market demand at a price equal to minimum average cost is exactly three times the minimum efficient scale. Because one would not expect such an outcome in general, it is important to consider more realistic outcomes. Engineering estimates suggest that a U-shaped average cost curve is not typical. Instead, average cost declines until minimum efficient scale is achieved, and then it is roughly constant over an extended range of output, as illustrated in figure 14.2. Average cost is minimized for any output in this flat range. For this average cost function, a natural monopoly prevails if the market demand function is D(P). In contrast, a natural monopoly does not prevail if the demand curve is the larger demand curve in the figure. In this case, two firms, each producing Q0/2 units of output, could operate at minimum efficient scale and supply the market demand of Q0 that prevails when price is set equal to minimum average cost.

The discussion of figure 14.2 illustrates that the conclusions drawn above continue to hold if the average cost curve has a minimum over a range of quantities, as is often the case in practice. For simplicity, the ensuing discussion focuses on settings where standard U-shaped average cost curves prevail. Remember, though, that the key conclusions drawn below continue to hold in settings where more realistic average cost functions of the type depicted in figure 14.2 prevail.

Figure 14.2

Efficient Market Structure with a Flat-Bottomed Average Cost Curve

Sources of Natural Monopoly Transformation

In this section, we consider changes that might cause an industry that once was a natural monopoly to transition to an industry where multiple firms operating at minimum efficient scale could serve customers, and so monopoly regulation may no longer be appropriate.

Demand Side

Changes of this sort reduce the minimum efficient scale relative to market demand at the welfare-maximizing price. One such change is an expansion of industry demand, which increases the welfare-maximizing level of output. For example, in figure 14.1, if the demand curve shifts outward from D(P), the optimal output increases from Q* to a level equal to three times the minimum efficient scale. When the minimum efficient scale is only one-third of this output level, natural monopoly regulation is no longer the optimal policy. Industry demand can increase for many reasons. For example, consumers may come to value the service in question more highly, consumer income may increase, the prices of complementary services may decline, or new complementary services may become available.

Cost Side

Recall that a firm's cost function is determined by the best available production technology and by current input prices. Consequently, technological innovations or changes in input prices can alter a firm's average and marginal cost functions. Changes in cost functions can affect whether a natural monopoly prevails in two ways. First, they can affect the minimum efficient scale. Second, they can affect the welfare-maximizing level of output. To analyze these two effects, recall that total cost C(Q) is the sum of fixed cost FC and variable cost VC(Q). Formally:

C(Q) = FC + VC(Q).
Dividing total cost by output (Q) provides average cost:

AC(Q) = C(Q)/Q = FC/Q + AVC(Q),
where AVC(Q) denotes average variable cost, which is the ratio of variable cost to output. Observe that as Q increases, fixed costs are spread over a larger output, so average fixed cost FC/Q declines. In contrast, AVC(Q) may increase as output increases. In this event, fixed costs are large relative to variable costs when Q is small, so the reduction in average fixed cost as Q increases outweighs the increase in average variable cost. Consequently, when Q is initially low, average cost declines as Q increases. As output is increased further, the reduction in average fixed cost as Q increases becomes less pronounced, because fixed costs are already spread out over many units of output. If average variable cost rises sufficiently rapidly with output, then average cost will no longer decline as output expands above some level. This level of output is the minimum efficient scale, which occurs where the slope of the average cost curve is zero. Average cost is minimized at this level of output. Average cost rises as output increases above the minimum efficient scale. These considerations underlie the standard U-shaped average cost curve depicted in figure 14.3.
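These comparative statics are easy to verify for a simple assumed functional form. In the sketch below, average variable cost is linear, AVC(Q) = c + dQ, so AC(Q) = FC/Q + c + dQ; the numbers are illustrative and are not drawn from figure 14.3.

```python
import math


def minimum_efficient_scale(fixed_cost, d):
    """For AC(Q) = FC/Q + c + d*Q, the slope -FC/Q**2 + d equals zero at
    Q = sqrt(FC/d), the output at which average cost is minimized."""
    return math.sqrt(fixed_cost / d)


def minimum_average_cost(fixed_cost, c, d):
    """Average cost evaluated at the minimum efficient scale."""
    q = minimum_efficient_scale(fixed_cost, d)
    return fixed_cost / q + c + d * q


# Cutting fixed costs from 100 to 25 halves the minimum efficient scale
# and lowers the minimum attainable average cost -- the kind of shift that
# figure 14.3 associates with a move from one efficient firm to two.
for fc in (100.0, 25.0):
    mes = minimum_efficient_scale(fc, d=0.01)
    print(fc, mes, minimum_average_cost(fc, c=2.0, d=0.01))
# 100.0 -> MES 100.0, minimum AC 4.0;  25.0 -> MES 50.0, minimum AC 3.0
```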

Figure 14.3 Effect of Change in Fixed Costs on the Efficient Market Structure

Now consider the effect of a reduction in fixed costs. Clearly, average cost declines at all levels of output, which reduces the minimum efficient scale, as illustrated in figure 14.3. Prior to the reduction in fixed costs, the welfare-maximizing price that ensures nonnegative profit for the sole industry supplier is P*, which is equal to the firm's average cost of production. Following the reduction in fixed costs, the corresponding welfare-maximizing price declines. Market demand at this lower price is twice the new minimum efficient scale, so industry costs are minimized when two firms—not one firm—serve consumers and each firm operates at the minimum efficient scale. In general, an innovation that reduces fixed costs makes it less likely that an industry is a natural monopoly. For the setting illustrated in figure 14.3, the reduction in fixed costs changes the industry from a natural monopoly to an industry in which two firms can profitably serve industry demand at the welfare-maximizing price.

Although a decline in fixed costs makes it less likely that an industry is a natural monopoly, the effect of a change in variable costs is more ambiguous. To illustrate, suppose that an increase in input prices causes average variable cost to increase, as reflected by an upward shift of the average cost curve AC(Q) in figure 14.4. The increase in average cost causes the minimum efficient scale to decline.

Figure 14.4 Effect of Change in Variable Costs on the Efficient Market Structure

A smaller efficient scale implies that the industry is less likely to remain a natural monopoly. However, the higher average cost reduces the welfare-maximizing output, which implies that the industry is more likely to be a natural monopoly. The net impact of these two countervailing effects depends on the price elasticity of demand. If demand is largely insensitive to price, as in demand curve D(P) in figure 14.4, then the welfare-maximizing level of output does not decline much as the welfare-maximizing price increases to P*. (Output declines only slightly in figure 14.4.) Because two firms can operate profitably at minimum efficient scale when the prevailing price is P*, monopoly regulation is no longer appropriate. In contrast, if demand is instead given by the relatively price-elastic demand curve in figure 14.4, the welfare-maximizing output declines considerably, to Q0. In this case, a natural monopoly still exists, even though the increase in variable costs caused the minimum efficient scale to decline.

To summarize, reductions in fixed costs reduce the minimum efficient scale and reduce the likelihood that the industry is a natural monopoly. A change in variable costs can either increase or reduce the likelihood that the industry is a natural monopoly, depending on the price elasticity of demand.

Regulatory Response

We now consider the policy options that a regulator might pursue when changes in industry conditions raise questions about whether the regulated industry is a natural monopoly.

Regulators have three primary alternatives in such a setting. The first alternative is to continue price and entry regulation. This is the appropriate policy if the best available evidence suggests that the industry is still a natural monopoly. The second alternative policy is full deregulation: Allow free entry and remove price controls. This policy is appropriate in the presence of solid evidence that the industry is no longer a natural monopoly or that scale economies are limited, so any possible increase in industry costs from production by multiple firms is likely to be more than offset by the general benefits of competition. The third alternative is partial deregulation, which would entail instituting relatively lenient standards for industry entry while maintaining substantive control over industry prices. One possibility would be to specify a maximum price but permit prices below the upper bound. A price floor might also be considered if there is concern that a firm might be setting a price below cost temporarily to drive rivals from the market. The same form of price regulation might be imposed on all industry suppliers. Alternatively, as is often the case in practice, new industry suppliers might be afforded considerable pricing freedom while the incumbent supplier is required to secure regulatory approval in order to change prevailing prices substantially.

Partial deregulation can be a useful intermediate policy when regulators face considerable uncertainty about whether the industry continues to be a natural monopoly. Partial deregulation can also constitute a reasonable compromise in the presence of intense lobbying against deregulation, and it may serve as a transition to full deregulation.

Asymmetric Regulation and Cream-Skimming

When implementing partial deregulation in an industry, it can be wise to avoid policies that encourage entry even when industry costs would be minimized if a single supplier served all customers. We illustrate such a policy in a setting where two services, X and Y, are supplied. Let QX denote the number of units of service X produced, and let QY denote the number of units of service Y produced. The total cost of producing both outputs is C(QX, QY). A multiproduct natural monopoly will exist in this two-service setting if the cost function exhibits economies of scope and product-specific economies of scale. Economies of scope are present when it is less costly to have a single firm produce both QX and QY than to have one firm produce QX and a second firm produce QY:

C(QX, QY) < C(QX, 0) + C(0, QY).    (14.1)
The term to the left of the inequality in expression 14.1 is the cost incurred by a single firm when it produces both QX and QY. The sum to the right of the inequality is the total cost incurred when one firm produces QX and a different firm produces QY. Product-specific economies of scale are the natural counterpart to scale economies when a firm supplies multiple services. To define this concept formally, let IC(QX) denote the incremental cost of supplying QX, which is the extra cost incurred when QX is produced, given that QY is already being produced:

IC(QX) = C(QX, QY) − C(0, QY).
The average incremental cost of supplying QX, denoted AIC(QX), is simply the incremental cost per unit of output:

AIC(QX) = IC(QX)/QX.
If service X exhibits product-specific economies of scale, then the average incremental cost of supplying X declines as the amount of X supplied increases. We now explain why a multiproduct natural monopoly is present if the prevailing cost function exhibits both economies of scope and product-specific economies of scale for all services supplied. When a multiproduct natural monopoly prevails, the total cost of supplying QX and QY is minimized when a single firm supplies both outputs. In the presence of economies of scope, if m firms were supplying service X and m distinct firms were supplying service Y, total industry cost would be reduced if these 2m firms combined into m two-product firms. In other words, the average incremental cost of a single firm supplying X, AIC(QX), is less than the average cost of a single firm supplying X, AC(QX) (see figure 14.5a). Furthermore, in the presence of product-specific economies of scale, average cost declines as a firm supplies more of a service. Therefore, total cost is lower when the m two-product firms combine into a single firm. Therefore, economies of scope and product-specific economies of scale for all services imply that a natural monopoly prevails.
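Both conditions are straightforward to check for an assumed cost function. The sketch below uses an illustrative specification, a shared fixed cost plus a product-specific fixed cost and constant marginal cost for each service, so the shared fixed cost generates economies of scope and the product-specific fixed costs generate product-specific economies of scale; none of the numbers come from the text.

```python
def cost(qx, qy, shared_fc=50.0, fx=20.0, fy=20.0, cx=1.0, cy=2.0):
    """Illustrative two-product cost function: a fixed cost shared by both
    services, a product-specific fixed cost for each service actually
    supplied, and constant marginal costs."""
    if qx == 0 and qy == 0:
        return 0.0
    total = shared_fc
    if qx > 0:
        total += fx + cx * qx
    if qy > 0:
        total += fy + cy * qy
    return total


def economies_of_scope(qx, qy):
    """Expression 14.1: joint production is cheaper than stand-alone production."""
    return cost(qx, qy) < cost(qx, 0) + cost(0, qy)


def avg_incremental_cost_x(qx, qy):
    """AIC(QX) = [C(QX, QY) - C(0, QY)] / QX."""
    return (cost(qx, qy) - cost(0, qy)) / qx


print(economies_of_scope(100, 100))       # True: the shared fixed cost is incurred once
print(avg_incremental_cost_x(50, 100))    # 1.4
print(avg_incremental_cost_x(100, 100))   # 1.2 -- AIC falls as QX rises, so service X
                                          # exhibits product-specific economies of scale
```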

Figure 14.5 Cross-Subsidization and Cream-Skimming

Suppose that the demand curves for services X and Y are as shown in figure 14.5. Further suppose that the regulated prices of services X and Y are set to ensure no extranormal profit for the regulated enterprise: the total revenue generated by the two services just covers the total cost of supplying them.

Notice that the price of service Y is less than the average incremental cost of supplying service Y (see figure 14.5b). A regulator might set such a low price for a service (like access to a telecommunications network) to encourage its consumption. In contrast, to ensure the firm's viability, the regulator sets the price of service X above the average incremental cost of supplying service X (figure 14.5a).
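Continuing the illustrative cost function from the previous sketch (with QX = QY = 100, total cost is 390, the average incremental cost of X is 1.2 and of Y is 2.2, and a stand-alone supplier of X would bear all fixed costs itself, for an average cost of 1.7), the prices below are hypothetical choices that satisfy the break-even condition just described; they are not the prices in figure 14.5.

```python
aic_x, aic_y, standalone_ac_x = 1.2, 2.2, 1.7   # from the previous sketch
qx = qy = 100.0

# Regulated prices: PY is set below the average incremental cost of Y to
# encourage consumption, and PX is set above the average incremental cost
# of X so that total revenue covers the incumbent's total cost of 390.
px, py = 1.9, 2.0
print(round(px * qx + py * qy, 2))       # 390.0: the incumbent breaks even

entrant_price = 1.8                      # slightly below the regulated price of X
print(entrant_price > standalone_ac_x)   # True: cream-skimming entry is profitable
print(standalone_ac_x > aic_x)           # True: yet it raises the industry cost of
                                         # supplying X, since 1.7 exceeds 1.2
```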

A price structure like this one can encourage entry into a multiproduct natural monopoly industry, which increases total industry production costs. To see why, suppose the average cost curve for a firm that supplies only X is given by the AC(QX) curve in figure 14.5a. This firm can enter the industry profitably by setting a price slightly below the regulated price of service X. This practice of entering only the more profitable markets is known as cream-skimming.1 Entrants take the "cream" (the more profitable markets) and leave the "milk" (the less profitable markets) for the incumbent regulated supplier to serve.

Two important facts are of note about cream-skimming in the presence of a natural monopoly. First, cream-skimming increases industry costs, because all output pairs are supplied at minimum cost by a single firm. As shown in figure 14.5a, the average cost of a single-product firm exceeds the average (incremental) cost of the regulated two-product firm for service X. Second, cream-skimming is facilitated when incumbent suppliers are compelled to supply some services at prices below average incremental cost and other services at prices above average incremental cost. Inefficient entry typically can be avoided if other pricing structures (such as Ramsey prices—see chapter 12) are adopted.2

Interstate Telecommunications Market

It is instructive to consider the foregoing analysis in the context of the interstate telecommunications market (ITM).3 The service provided in this market is a telephone call between distinct locations in different states at a particular point in time. Three types of ITM services are provided: message-toll service (MTS), which is often referred to as long-distance telephone service; wide-area telephone service (WATS), otherwise known as "800" services, whereby the telephone call is automatically charged to the party that receives the call rather than, as is normally the case, to the party that initiates the call; and private-line service (PLS), which is a circuit that connects two or more points to meet the communication needs of specific users for full-time access. For example, a PLS may be used by a manufacturer to provide point-to-point communications between several of its factories. The users of PLS are typically medium-sized or large firms or government organizations.

Regulatory Background

Federal regulation of the ITM has its roots in the Mann-Elkins Act of 1910. This piece of legislation gave the Interstate Commerce Commission the power to regulate interstate telephone service. The Interstate Commerce Commission's ability to control entry resulted in American Telephone & Telegraph (AT&T) achieving a de jure monopoly in long-distance voice communications services. The Interstate Commerce Commission also had the power to set maximum and minimum rates for services. The Communications Act of 1934 transferred power over the ITM from the Interstate Commerce Commission to the newly created Federal Communications Commission (FCC). The FCC had control over most aspects of competition through its control of price, entry, and interconnection. Interconnection is the linking of long-distance lines with local telephone lines.

Until the late 1950s, the ITM was a classic case of a regulated monopolist. AT&T was the sole supplier of MTS, WATS, and PLS. The FCC controlled price and prevented other firms from competing with AT&T. The ITM was regulated in this manner because it was considered to be a natural monopoly.
For many years the best available production technology was the open-wireline system, which involves stringing wires between poles to send messages across geographic regions. Because of the high fixed cost of constructing such a system and the relatively low marginal cost of adding another customer to the network, economies of

scale were believed to exist at the prevailing level of demand for ITM services. In the 1930s, AT&T developed coaxial cable, which replaced the open-wireline system as the best available technology. Coaxial cable can carry a much larger number of long-distance communications lines simultaneously. However, the use of coaxial cable entails particularly large fixed costs, so a natural monopoly prevailed even for the largest intercity telecommunication routes. Thus, cost minimization entailed production by a single firm, and that firm was AT&T.

Transformation of a Natural Monopoly

Although microwave transmission existed prior to World War II, it was not commercially viable until certain technological breakthroughs were achieved by the research and development program funded by the U.S. government as part of the war effort. Microwave transmission permits large amounts of information to be transmitted via radio beams at relatively low cost. In contrast to open wireline or coaxial cable, which requires a physical connection between two points, microwave transmission is achieved through a series of microwave relay stations that are located approximately twenty to thirty miles apart. Each station receives the microwave signal, amplifies it, and transmits it to the next station. The first microwave radio relay system for telephone service in the United States was installed between Boston and New York in 1947.4

Because it eliminates the need for a physical connection between two points in a communications network, microwave radio technology greatly reduced the fixed cost of providing telecommunication services. Recall that a reduction in fixed costs results in a smaller minimum efficient scale. Consequently, many private firms and government organizations asked the FCC for permission to build their own private-line systems in the 1950s.

As the minimum efficient scale in the ITM industry was declining, the demand for telecommunications service was increasing for several reasons. First, per capita income was increasing, so consumers had more money to spend on telecommunications services. Between 1949 and 1984, real per capita disposable personal income rose at an annual rate of almost 2.2 percent.5 Estimates of the long-run income elasticity of demand for long-distance telephone service ranged from 0.038 to 2.76 at the time.6 These positive numbers indicate that the demand for long-distance telephone service increases as income increases. Second, the microwave technology permitted the delivery of a broader range of telecommunications services, which increased the demand for the services. An FCC report noted some functions that a microwave system could perform for a manufacturer:

Thus the central station in a microwave system can start, stop, slow or speed unattended equipment; open and close valves; record pressure, temperature, engine speed, rate of processing and other data; telemeter voltage, current and power; locate line faults, and perform other supervisory functions.7

In contrast, the open-wireline system was quite limited in its ability to perform tasks other than those of standard telephone service. Third, the development of computers in the 1950s and their subsequent widespread use increased the demand for telecommunications services, which provide data processing and transmission services on which computers rely. All three of these factors increased the demand for interstate telecommunications services.

The reduced minimum efficient scale and the increased demand for interstate telecommunications services led to questions about whether the industry had transformed sufficiently to render continued monopoly regulation undesirable. Figure 14.6 illustrates two relevant possibilities. First, it is possible that the average cost curve and the market demand curve shifted in such a way that a natural monopoly still prevails, because the cost function remains subadditive at the relevant level of market demand, Q0. Second, the

average cost curve may have shifted from AC(Q) to while the demand curve shifted from D(P) to At the welfare-maximizing price the market can support four firms at the efficient size.
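To see how such a natural monopoly test can be carried out, the following sketch (an illustration added here, not part of the cost studies discussed in this chapter) checks whether single-firm production minimizes total cost at a given level of demand. It uses the U-shaped cost function C(Q) = 100 + Q + Q² from problem 1 at the end of this chapter; with identical convex variable costs, splitting output equally among the active firms is the cheapest way to use any given number of firms.

```python
# Illustrative subadditivity check: is one firm the cheapest way to supply Q?
# Cost function (from problem 1, used here only for illustration):
# C(q) = 100 + q + q^2, with minimum efficient scale q = 10 and minimum AC = 21.

def cost(q):
    return 0.0 if q == 0 else 100 + q + q**2

def industry_cost(Q, n):
    # With identical convex variable costs, an equal split across n firms
    # minimizes the total cost of producing Q with n active firms.
    return n * cost(Q / n)

def is_natural_monopoly(Q, max_firms=10):
    single_firm = industry_cost(Q, 1)
    return all(single_firm <= industry_cost(Q, n) for n in range(2, max_firms + 1))

for Q in [12, 20, 40]:
    print(Q, is_natural_monopoly(Q))
# One firm is cheapest at Q = 12, but once demand grows (Q = 20 or 40),
# dividing production among several firms lowers total cost, so the
# natural monopoly disappears even though the cost function is unchanged.
```

The same logic, applied to estimated cost functions and observed route densities, underlies the empirical evidence reviewed below.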

Figure 14.6 Potential Transformation of a Natural Monopoly

The empirical evidence regarding which of these two possibilities prevailed in practice is mixed. As a starting point, consider the cost studies performed by AT&T and Motorola in 1962.8 The estimated average cost curves for a 200-mile microwave system are depicted in figure 14.7. Economies of scale are apparent, although the average cost curve flattens out around 240 circuits. If economies of scale are exhausted at this size, then a natural monopoly would have existed for many telecommunication routes in the late 1940s. With the increase in demand since the 1950s, only routes with the smallest densities would still be a natural monopoly at 240 circuits, assuming no change in the estimated cost structure. The larger intercity routes routinely entail several thousand circuits, suggesting that many efficient-sized firms could have operated profitably.

Figure 14.7 Economies of Scale for Interstate Telecommunication Services

Leonard Waverman estimated that scale economies were exhausted at between 1,000 and 1,200 circuits in the mid-1960s.9 In the late 1960s the New York–Philadelphia route routinely employed roughly 79,000 circuits. Consequently, several suppliers could have operated profitably on many intercity routes. Econometric work by David Evans and James Heckman lends support to this conclusion.10 They estimated a multiproduct cost function for the Bell System based on data for the period 1958–1977. Their empirical estimates suggest that the cost function was not subadditive at any relevant output configuration during that period. Other studies have estimated more substantial scale economies for AT&T between 1947 and 1976.11 The estimates of the ratio of average cost to marginal cost ranged from 1.58 to 2.12. If average cost exceeds marginal cost, then average cost is falling, so economies of scale prevail. These studies considered only product-specific economies of scale and not economies (or diseconomies) of scope. In contrast, the Evans and Heckman study allowed for both.
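The claim that a ratio of average cost to marginal cost above 1 implies scale economies follows from one line of calculus (a standard result, not specific to these studies):

\frac{d\,AC(Q)}{dQ} \;=\; \frac{d}{dQ}\!\left[\frac{C(Q)}{Q}\right] \;=\; \frac{MC(Q) - AC(Q)}{Q},

so average cost is declining wherever AC(Q) > MC(Q). Estimated ratios between 1.58 and 2.12 therefore identify output levels at which average cost was still falling.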

Regulatory Policy in the Microwave Era
As noted above, many private firms and government organizations requested permission from the FCC to build and operate their own point-to-point communication networks in the 1950s. These requests led to the Above 890 Mc decision12 in 1959, which required AT&T (the common carrier) to share frequencies above 890 megacycles with private network operators. The FCC's decision stated that a system built by an entity other than a common carrier could be used only for private purposes; the firm could not sell telecommunications services to other customers. In response to the Above 890 Mc decision, AT&T requested the FCC's permission to introduce the Telpak tariff, which offered substantial volume discounts on PLS. The tariff appeared to be designed to make it more economical for many businesses to buy AT&T's services rather than build their own systems. The FCC ultimately disallowed the Telpak tariff on the grounds that the terms of the tariff were not adequately justified by the estimated cost of supplying PLS. In 1963, Microwave Communications Incorporated (MCI) petitioned the FCC to allow it to serve the St. Louis–Chicago PLS market as a common carrier. MCI sought permission to supply PLS and act as a competitor to AT&T in this market. After six years of hearings and $10 million of expenses incurred by MCI, the FCC approved MCI's application. Soon after doing so, the FCC found itself inundated with requests by other firms that sought to supply PLS services. In response to the flood of requests it received for permission to serve specific PLS routes, the FCC issued the Specialized Common Carrier decision13 in 1971, which allowed free entry into the PLS market. A prospective supplier was only required to submit an application, and lengthy regulatory hearings were avoided. In 1975, MCI attempted to enter the more expansive and potentially more lucrative MTS market by introducing its Execunet service. However, the FCC ruled in its Execunet I decision14 that its earlier Specialized Common Carrier decision had authorized entry only into the PLS market, so MCI had to discontinue its Execunet service. MCI appealed the FCC's Execunet I decision, and in 1978 the U.S. Court of Appeals overturned the FCC's decision, allowing entry into the entire ITM. The FCC pursued a policy of partial deregulation during this period. Although it allowed entry into the PLS market and then, by court order, into the MTS market, the FCC continued to regulate rates. In doing so, the FCC employed relatively high prices for long-distance telephone calls to finance relatively low prices for local telephone service. The FCC also required AT&T to set the same rates for intercity traffic carried on high-density and low-density routes, even though unit costs were substantially lower on the high-density routes. The established price structure induced entry primarily on high-density routes. AT&T argued that entrants like MCI were less efficient suppliers and were only able to operate profitably by engaging in cream-skimming. Although it is difficult to assess claims about a firm's efficiency empirically, the prevailing price structure did provide an opportunity for profitable cream-skimming by inefficient suppliers.15 As a result of a seven-year antitrust case against AT&T by the U.S. Department of Justice, AT&T agreed on January 8, 1982, to divest its twenty-two telephone operating companies.16 These twenty-two companies were grouped into seven holding companies, the regional Bell operating companies (RBOCs).
Local Bell System operations were subdivided into 161 local access and transport areas (LATAs), and each LATA was assigned to one of the RBOCs. The RBOCs were not permitted to provide interLATA services. These services were supplied by "long-distance" companies, like the restructured AT&T. In return for divesting its telephone operating companies, AT&T was permitted to retain Western Electric (its manufacturing division), Bell Labs (its research and development division), and Long Lines, which supplies interstate telecommunication services.
Regulated Monopoly to Regulated Competition

Even though the divestiture resulted in AT&T operating exclusively in markets with competitors, regulation persisted. AT&T was required to serve all customers who demanded service at prescribed rates, file tariffs with the FCC whenever it offered a new service, and charge the same rates across broad customer segments.17 In contrast, competitors like MCI and Sprint faced no restrictions on the prices they charged, the services they supplied, or the markets in which they operated.18 The rationale for this asymmetric regulatory policy was AT&T's perceived dominance. The FCC feared that if it were left unconstrained, AT&T might employ its dominant position to harm consumers. Even though AT&T's share of the market declined both before and after the 1984 divestiture, AT&T continued to provide long-distance telephone service to more than half of residential customers for several years. During the latter half of the 1980s, the industry was characterized by one dominant firm (AT&T), two smaller competitors (MCI and Sprint), and many resellers, who, lacking their own physical network, leased lines from the three largest long-distance companies to provide interLATA services. By most measures, this was still a highly concentrated industry. At this time, policymakers worried that if AT&T were deregulated, it might engage in two types of undesirable behavior.19 First, AT&T might raise its rates substantially. Whether AT&T could raise rates profitably depended on the ease with which customers could switch to other providers and on the ability of competing providers to meet a large increase in the demand for their interLATA telecommunications services. If customers found that the services of MCI, Sprint, and other suppliers were comparable to those of AT&T and if these alternative suppliers had adequate capacity to handle a substantial increase in the number of customers they served, then a substantial increase in rates would have proved unprofitable for AT&T, because it would have lost many of its customers to rival suppliers. In retrospect, the evidence suggests that substantial rate increases would not have been profitable for AT&T. The price elasticity of demand for AT&T's service was estimated to be around −4 during the 1984–1993 time period.20 This estimate means that the demand for AT&T's service would decline by 4 percent for each 1 percent increase in price, so a price increase would reduce AT&T's revenues substantially. The pronounced elasticity seems plausible in part because the residential long-distance telephone service provided by AT&T and its competitors was quite homogeneous. Consequently, residential consumers would likely purchase the service from the company that offered the lowest rates. Furthermore, the development of fiber-optic systems created considerable excess capacity, so it was likely that MCI and Sprint could have handled a large increase in the demand for their services.
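A back-of-the-envelope calculation shows why an elasticity of roughly −4 makes rate increases self-defeating. The sketch below assumes a constant-elasticity demand curve; the initial price and quantity are arbitrary index values chosen only for illustration.

```python
# Revenue effect of a price increase under constant-elasticity demand.
# With elasticity e, quantity demanded is q = A * p**e, so revenue is
# R(p) = A * p**(1 + e). For e = -4, R(p) is proportional to p**(-3),
# so raising price lowers revenue.

ELASTICITY = -4.0          # estimated demand elasticity for AT&T's service
p0, q0 = 1.0, 100.0        # hypothetical initial price and quantity (index values)
A = q0 / p0**ELASTICITY    # calibrate the demand curve to the initial point

def revenue(p):
    return A * p**(1.0 + ELASTICITY)

for increase in [0.01, 0.05, 0.10]:
    p1 = p0 * (1 + increase)
    change = revenue(p1) / revenue(p0) - 1
    print(f"{increase:>4.0%} price increase -> revenue change {change:+.1%}")
# A 1 percent price increase cuts revenue by roughly 3 percent, and a
# 10 percent increase cuts revenue by about 25 percent.
```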
Second, policymakers were concerned that an unregulated AT&T might engage in a predatory policy of pricing below cost. The purpose of such pricing would be to force competitors to incur losses and eventually exit the industry. Following the exit of competitors, AT&T might be able to increase its rates substantially. The profitability of such a strategy is not apparent. Because AT&T held a large market share, it would have incurred a sizable financial loss if it priced its services below cost. Furthermore, MCI and Sprint were relatively well-financed companies, so they likely could have survived a price war for an extended period of time. In addition, because most of the cost of long-distance telecommunications networks is sunk, the incremental cost of ongoing operation is relatively low. Consequently, AT&T likely would have had to undertake an extremely intense price war to drive MCI and Sprint from the market. In sum, a retrospective analysis suggests that continued regulation of AT&T after the breakup of the Bell System may have been unwarranted.
Regulated Competition to Unregulated Competition

As evidence continued to mount that AT&T had little ability to raise prices profitably or profitably engage in predatory pricing, the FCC gradually reduced its regulation of AT&T’s rates during the 1990s. Price ceilings on business service rates were removed in October 1991. Price ceilings on WATS services were eliminated in May 1993. AT&T was declared to be nondominant in the supply of domestic residential services as of November 1995. Effectively, the domestic ITM is now fully deregulated. The 1982 consent decree that broke up the Bell System prevented an RBOC from entering the ITM. This decree has since been obviated by the Telecommunications Act of 1996, which is discussed in greater detail below. The act permits an RBOC to offer interLATA telecommunications services to its local service customers if an adequate degree of competition has been achieved in the local service market.21 The FCC initially turned down most RBOC applications to supply interLATA services, but subsequently authorized all RBOCs to supply interLATA services to their local service customers. After its divestiture, AT&T’s once dominant position in the supply of interLATA services declined rapidly. In 1995, AT&T served nearly three-fourths of households that subscribed to long-distance telephone service. AT&T’s market share declined to less than one-fourth of these households by 2004. The early decline reflects inroads by Sprint and MCI. The later decline was fueled in large part by the success of the RBOCs in supplying long-distance service. By 2005, the RBOCs served more than 40 percent of the residential long-distance market.22 As industry concentration declined, so did long-distance telephone rates. The rates declined from an average rate of 30 cents per minute in 1984 to 7 cents per minute in 2007.23 This decline reflects increased price competition, but it also reflects a regulatory-mandated shifting of costs from suppliers of long-distance telephone service to suppliers of local telephone service. After the divestiture of AT&T, the FCC introduced a fixed monthly “subscriber line charge” for local telephone service. This charge allowed local telephone companies to collect from their subscribers a substantial portion of the cost of the local network that was formerly financed by access charges—payments by long-distance telephone companies to the local telephone company for calls originated or completed on the local company’s network. As the subscriber line charge was introduced and increased over time, access charges were reduced. Access charges declined steadily from more than 17 cents per minute on average in 1984 to less than 2 cents per minute in 2009.24 The charges were subsequently eliminated.25 This decline in the cost of supplying long-distance telephone service has contributed to the decline in long-distance rates. Telecommunications Act of 1996 Historically, cable television, local telephone, and long-distance telephone services have been viewed as distinct services supplied by distinct firms. This is no longer the case, in part due to technological change. The primary technological developments are fiber optics and digital electronics. Fiber-optic technology provides tremendous capacity and reliability compared with coaxial cable and microwave relay systems. Joined with digital electronic technology, the resulting network has the ability to transmit vast amounts of data and video at high speeds. 
Voice messages now move in digital form over the network, so the same physical network can move data, video, and voice. Augmenting these advances are innovations in signal compression that allow more information to be transported. A third important factor in the future development of this industry is the advent and advancement of wireless communications. By the 1990s, innovations had increasingly made the existing regulatory structure confining and out of step with industry conditions. A major overhaul of telecommunications legislation had not taken place since the Communications Act of 1934 created the FCC. More recently, the Modification of Final Judgment,

which resulted in the separation of local and long-distance telephone service, was written in 1982. With the pace of innovation, that decision had become prehistoric. This fragmented regulatory policy increasingly conflicted with technological trends. The culmination of these forces was the Telecommunications Act of 1996. Enacted by Congress on February 1, 1996, and signed by President Bill Clinton a week later, it went into effect immediately. It has radically altered the regulatory landscape. The stated purpose of the 1996 act is to "promote competition and reduce regulation in order to secure lower prices and higher quality services for American telecommunications consumers and encourage the rapid deployment of new telecommunications technologies."26 The 1996 act preempts all state laws that limit competition in the markets for local and long-distance telephone services. It requires the RBOCs to provide interexchange carriers (that is, long-distance telephone companies) with equal access to their systems. The act also permits each RBOC to provide long-distance telephone service outside its operating territory, which is the geographic region in which it provides local telephone service. In addition, RBOCs can offer long-distance service to customers in their operating territories if the FCC approves their application to do so. The act instructed the FCC to approve an RBOC's application if and only if the RBOC has adequately facilitated the operation of competing suppliers in its operating territory (as described further below). The 1996 act also permits RBOCs to own cable television systems that operate outside of their operating territories (and to hold up to a 10 percent ownership stake in a cable system that operates in the RBOC's operating territory). The act recognized that only a portion of a telecommunications network—the local loop—might be a natural monopoly. The local loop consists of the wires that connect customer premises with central switching offices. It would be prohibitively costly for several firms to dig up the streets and lay their own wires to all residences and business establishments in the country. However, several firms might be able to supply telecommunications services profitably if they were able to serve customers by employing a combination of their own equipment (including long-distance transmission facilities and local switching equipment, for example) and the local loops of incumbent suppliers. The 1996 act instructed the FCC to specify the terms on which incumbent telecommunications suppliers would be required to make their local networks available for use by competitors. The FCC specified a list of unbundled network elements (UNEs) that incumbent suppliers were required to make available to competitors. Use of the local loop was foremost among the list of UNEs. The FCC also directed state regulators to implement TELRIC pricing of UNEs. TELRIC is an acronym for "total element long-run incremental cost." In essence, the TELRIC price of a service is the long-run average incremental cost that an efficient incumbent supplier would incur in providing the service, after appropriate allocation of common costs across all services. The task of determining a TELRIC price is a daunting one for several reasons, including that it requires knowledge of the most efficient production technology, not simply the actual technology of the incumbent supplier.27 The allocation of common costs across services also admits considerable discretion.
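As a rough illustration of the arithmetic behind a TELRIC-style price, the sketch below adds an allocated share of common costs to a per-unit long-run incremental cost. All element names, cost figures, and the allocation rule (here, allocation in proportion to incremental cost) are hypothetical assumptions; actual TELRIC proceedings rest on detailed engineering cost models and contested allocation choices.

```python
# Hypothetical TELRIC-style price: per-unit long-run incremental cost of a
# network element plus an allocated share of common costs. The proportional
# allocation rule used here is only one of many possibilities.

elements = {
    # element: (annual incremental cost, annual units demanded) -- hypothetical
    "local_loop":      (120_000_000, 6_000_000),
    "local_switching": (40_000_000, 10_000_000),
}
common_costs = 32_000_000  # annual costs shared across all elements (hypothetical)

total_incremental = sum(cost for cost, _ in elements.values())

for name, (inc_cost, units) in elements.items():
    allocated_common = common_costs * inc_cost / total_incremental
    price = (inc_cost + allocated_common) / units
    print(f"{name}: {price:.2f} per unit per year")
```

Different allocation rules (per-element, demand-based, or negotiated) can move these prices substantially, which is one reason the allocation of common costs admits the discretion noted above.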
Even the lowest TELRIC prices did not stimulate much lasting competition from competitive local exchange carriers (CLECs), though. By 2008, CLECs accounted for only about 14 percent of local service revenue.28 Some observers suggest that the limited success of CLECs may have stemmed in part from nonprice interactions between incumbent local exchange carriers (ILECs) and CLECs. State regulators often were inclined to establish relatively low TELRIC prices to stimulate industry competition. Low TELRIC prices implied that ILECs secured little, if any, profit from selling UNEs to CLECs. Furthermore, the sale of UNEs enabled CLECs to attract the ILECs’ former retail customers, thereby reducing the ILECs’ retail profit.

Consequently, ILECs often could increase their profit by limiting the sale of UNEs to CLECs. It is alleged that ILECs may have limited UNE sales by intentionally processing CLEC requests for UNEs as slowly as possible and perhaps by failing to work diligently to avoid complications when connecting ILEC and CLEC networks. State regulators devoted considerable attention to detecting and limiting such ILEC behavior,29 but some such behavior may nevertheless have limited successful CLEC operation. Although CLECs were unable to secure a large fraction of former ILEC customers in the early 2000s, cable companies had more success in this regard. As of 2014, cable companies provided telephone service to more than 28 million subscribers in the United States.30 This experience illustrates the more general conclusion that regulators and legislators often are unable to identify the most likely sources of industry competition. Whereas the Telecommunications Act of 1996 envisioned CLECs as the main competitors to ILECs, cable companies in fact served as the primary competitors in the early 2000s. ILECs also experienced increasing competition from suppliers of wireless telecommunications services during this period. As the quality of wireless communications services increased and prices declined, many former wireline customers decided to "cut the cord" and rely exclusively on wireless services for their voice communications needs. As of 2013, 41 percent (more than two out of every five) U.S. households had a wireless, but no wireline, telephone subscription.31
Net Neutrality
Use of the Internet has expanded dramatically in recent years. As table 14.1 reports, more than 3 billion individuals were Internet users as of 2016, an increase of more than 900 percent since 2000. Furthermore, Internet users are continually sending and receiving more and more content.

Table 14.1
Internet Usage, as of June 2016

Region                       Population (2016 estimate)   Number of Internet Users   Penetration (% Population)   Growth, 2000−2016 (%)
Africa                       1,185,529,578                340,783,342                28.7                         7,448.8
Asia                         4,052,652,889                1,846,212,654              45.6                         1,515.2
Europe                       832,073,224                  614,979,903                73.9                         485.2
Middle East                  246,700,900                  141,489,765                57.4                         4,207.4
North America                359,492,293                  320,067,193                89.0                         196.1
Latin America/Caribbean      626,119,788                  384,751,302                61.5                         2,029.4
Oceania/Australia            37,590,820                   27,540,654                 73.3                         261.4
World total                  7,340,159,492                3,675,824,813              50.1                         918.3

Source: Internet World Stats, Usage and Population Statistics (available at http://www.internetworldstats.com/stats.htm).

To illustrate, consider the following prediction:

It would take an individual over 5 million years to watch the amount of video that will cross global IP [Internet Protocol] networks each month in 2019. Every second, nearly a million minutes of video content will cross the network by 2019.32

The expanding use and ever-growing importance of the Internet raise two important questions about its regulation. First, is any regulation of the Internet warranted? Second, if so, what type of regulation is appropriate? To address these questions, it is helpful to understand the basic structure and operation of the Internet.
Internet Structure

Figure 14.8 provides a simple illustration of the basic structure of the Internet.33 The Internet can be viewed as a network that permits information to flow among suppliers and consumers.34 Information suppliers are commonly referred to as content providers. Content providers include such varied entities as news organizations, retail merchants, government agencies, and individuals with personal websites. Consumers are individuals or entities that access the data made available by content providers.

Figure 14.8 Structure of the Internet

The information that content providers and consumers exchange typically traverses the Internet backbone. The backbone consists in part of fiber-optic cable that can transport huge amounts of data. Several different firms supply backbone transport services. Broadband Internet access service (BIAS) enables consumers to access data transported over the Internet backbone. BIAS suppliers (sometimes referred to as Internet service providers, or ISPs) include cable companies, wireline and wireless telephone companies, and satellite companies. Most wireline telephone companies provide Internet access over copper telephone wires via digital subscriber line (DSL) service. Some of these companies also offer (faster speed) Internet access over fiber-optic cable. Table 14.2 reports how U.S. households secured Internet access in 2013. Table 14.2

Percentage of U.S. Households by Type of Internet Subscription, 2013

Type of Internet Subscription       Share of U.S. Households (%)
Cable modem                         42.8
Mobile broadband                    33.1
No paid Internet subscription       25.6
Digital subscriber line (DSL)       21.2
Fiber optic                          8.0
Satellite                            4.0
Other                                2.0

Source: Table 3 in Thom File and Camille Ryan, “Computer and Internet Use in the United States: 2013,” American Community Survey Reports, ACS-28, November 14, 2013 (https://www.census.gov/history/pdf/acs-internet2013.pdf).

Most U.S. households have access to two wireline BIAS suppliers (the incumbent telephone company and a cable company) and several wireless suppliers.35 However, the services offered by different types of suppliers differ in important respects. In particular, fiber-optic and cable modem service generally provide Internet access service that is considerably faster than DSL and wireless service. The fiber-optic Internet access service that telephone companies (and other entities) supply is not available in many geographic regions. Consequently, many U.S. consumers only have access to a single supplier of the fastest Internet access service. Information traverses the Internet backbone and the local networks of BIAS suppliers in the form of individual data packets. These packets presently are all treated symmetrically in the sense that no packets of a particular type or from a particular content provider are afforded priority over other packets. In essence, it is “first-come, first-served” on the Internet when it comes to the delivery of data packets. This is the case even though the impact of the speed and consistency with which packets are delivered on the quality of a consumer’s experience varies with the type of information that is being delivered. For example, the perceived quality of email communications does not vary substantially as latency or jitter increase within reasonable limits. (Latency is a measure of the delay in receiving packets. Jitter is a measure of the consistency or “steadiness” with which packets are delivered.) In contrast, the perceived quality of voice communications and video content can be quite sensitive to latency and jitter. Each data packet includes a header that identifies the type and source of the packet. Therefore, if BIAS suppliers wished to do so, they could afford priority to data packets for services that are sensitive to latency and jitter. Such prioritization is not presently exercised, but it is a central element of the “net neutrality” debate. The Meaning of Net Neutrality There is no single universally accepted definition of net neutrality. However, the general consensus is that net neutrality means that BIAS suppliers are not permitted to accelerate, slow, or block Internet traffic on the basis of the source, ownership, or destination of the traffic. Thus, at its core, net neutrality is a nondiscrimination requirement that encompasses both the type of information being transported and the particular supplier or consumer of the information. To illustrate, net neutrality precludes a BIAS supplier from affording priority to data packets associated with video content over data packets associated with email content. Net neutrality also precludes prioritization of the video services offered by one content provider (Netflix, for example) over the services offered by another content provider (Hulu, for instance). Some proponents of net neutrality advocate additional restrictions, as Robert Hahn and Scott Wallsten observe:

Net neutrality … usually means that broadband service providers charge consumers only once for Internet access, do not favor one content provider over another, and do not charge content providers for sending information over broadband lines to end users.36

Thus, net neutrality regulation sometimes is viewed as encompassing both price regulation and (nondiscrimination) quality regulation. The details of net neutrality regulation vary across countries. The net neutrality regulation that the FCC imposed in the United States in 2015 includes the following five key elements:37
1. No blocking. A BIAS supplier "shall not block lawful content, applications, services, or nonharmful devices, subject to reasonable network management."
2. No throttling. A BIAS supplier "shall not impair or degrade lawful Internet traffic on the basis of Internet content, application, or service, or use of a non-harmful device, subject to reasonable network management."
3. No priority lanes. A BIAS supplier "shall not engage in paid prioritization."
4. No unreasonable interference. A BIAS supplier "shall not unreasonably interfere with or unreasonably disadvantage (i) end users' ability to select, access, and use broadband Internet access service or the lawful Internet content, applications, services, or devices of their choice, or (ii) [content] providers' ability to make lawful content, applications, services, or devices available to end users. Reasonable network management shall not be considered a violation of this rule."38
5. Transparency. A BIAS supplier "shall publicly disclose accurate information regarding the network management practices, performance, and commercial terms of its broadband Internet access services sufficient for consumers to make informed choices regarding use of such services and for content, application, service, and device providers to develop, market, and maintain Internet offerings."
The prohibitions against blocking and throttling reflect the aforementioned bans on quality discrimination. The prohibition against priority lanes clarifies that BIAS suppliers cannot favor the delivery of data packets even if a content provider is willing to pay for such favorable treatment. The FCC's restriction on unreasonable interference limits the ability of a BIAS supplier to reduce the quality of service delivered to consumers on the basis of, say, the amount of data they download. In the past, some BIAS suppliers limited the flow of data to consumers who were using so much of the network capacity that insufficient capacity remained to serve the needs of other customers. Such limitations imposed by BIAS suppliers are not unlawful if they constitute "reasonable network management." This term remains to be defined in complete detail. The FCC's transparency requirement might be viewed as "full disclosure" and "truth in advertising" mandates. The intent is to ensure that content providers and consumers alike fully understand the nature and quality of the services the BIAS supplier will provide.
Rationale for Net Neutrality
Advocates of net neutrality regulation argue that it is necessary to address the terminating monopoly problem that often prevails in the supply of Internet access services. Such a problem exists when a single firm controls the flow of data on the Internet to and from individual consumers. When a single BIAS supplier controls the flow of information to many consumers, it may be in a position to demand substantial

payments from content providers that rely on the supplier for delivery of their content to consumers. For example, a BIAS supplier like Comcast that has many broadband subscribers might conceivably demand large payments from a major content provider like Netflix in return for an assurance that Comcast’s subscribers will have unfettered access to Netflix’s content. Netflix may prefer to pay for this assurance rather than risk the substantial subscriber loss it could incur if Comcast’s subscribers were unable to receive reliable, high-quality transmission of Netflix programming. However, such payments reduce Netflix’s earnings, and thereby reduce the expected return to operating as a content provider.39 Proponents of net neutrality regulation argue that it is important to prevent such opportunistic holdup by BIAS suppliers, because the holdup can discourage innovation by content providers. If a content provider anticipates little financial return from innovative effort, it may decline to make the effort. The resulting reduction in “innovation at the edge of the Internet” can harm consumers by reducing the quality of the content that they can access on the Internet. It is noteworthy that the FCC’s net neutrality regulations are focused on BIAS suppliers and not on suppliers of Internet backbone services. As noted above, backbone services are supplied by several companies. Consequently, if one company attempts to impose unreasonable terms and conditions on a content provider, the content provider can readily secure data transport services from a different backbone company. Switching companies often is more difficult for consumers, so they may not do so even if their current BIAS supplier is degrading the quality of certain content. As explained above, many customers in the United States can only secure the higher-speed Internet access services from a single BIAS supplier. Furthermore, even when consumers have a meaningful choice among BIAS suppliers, changing suppliers can be costly for consumers. Relevant switching costs can include both financial penalties for early termination of subscription contracts and the time and expense associated with acquiring and installing the new equipment that often is required to receive service from a new supplier. Therefore, competition among BIAS suppliers may be insufficient to prevent them from degrading the quality of the offerings of certain content providers. Consequently, BIAS suppliers may be able to engage in opportunistic holdup of content providers, thereby inhibiting innovation at the edge. Some BIAS suppliers could conceivably be tempted to degrade the quality of selected content even if doing so did not facilitate the extraction of higher payments from content providers. The temptation could arise if the BIAS supplier also delivers programming (or other content) that is a substitute for the programming supplied by content providers.40 By degrading the perceived quality of the content provider’s programming, the BIAS supplier can increase the relative perceived quality of its own programming and thereby increase the demand for (and thus the profit secured by selling) this programming. The FCC’s prohibition on “priority lanes” may seem strange, given that many companies charge higher prices for higher-quality service. For example, the U.S. Postal Service charges more for Priority Mail than for first-class mail, just as Amazon charges more for two-day delivery than for nonexpedited delivery. 
Many amusement parks also allow patrons to pay extra to access shorter lines for the major attractions in the park. Proponents of net neutrality regulation offer two justifications for bans on priority lanes. First, such lanes can create incentives for a BIAS supplier to reduce the capacity of its network. Reduced network capacity can reduce the perceived quality of content that is not afforded priority on the network, thereby rendering content providers more willing to pay for priority treatment. If BIAS suppliers are not permitted to charge for priority treatment, then this potential incentive to reduce network capacity disappears.41 Second, content providers are not the only source of revenue for BIAS suppliers. BIAS suppliers operate in two-sided markets. (Recall the discussion of two-sided markets in chapter 12.) BIAS suppliers serve both

final consumers and content providers. Restrictions on the prices that a BIAS supplier can charge content providers do not restrict the prices the supplier can charge its Internet access subscribers. Therefore, even when they have little or no ability to impose charges on content providers, BIAS suppliers may be highly motivated to invest in their networks and deliver high-quality Internet access services. Such activities can be profitable if they substantially increase the amount that consumers are willing to pay for Internet access services. Of course, the fact that a BIAS supplier can charge its subscribers does not imply that it should necessarily be precluded from charging content providers. Recall from the discussion in chapter 12 that under an optimal pricing policy in two-sided markets, both sides of the market often face charges for access to the supplier's services. A relatively small charge typically is imposed on the side of the market that is valued particularly highly by the other side of the market. The prevailing arrangement under which BIAS suppliers charge consumers for Internet access but typically do not charge content providers might be viewed as an approximation of the optimal pricing policy if the activities of content suppliers are valued particularly highly. However, the details of the optimal policy remain to be specified precisely. Regulation of the Internet is a relatively new phenomenon. Prevailing regulations are likely to change over time as regulatory experience increases and as new challenges emerge. Should more intense competition among BIAS suppliers develop, the need for regulation of the Internet may wane. More intense competition could arise if consumers come to view wireline and wireless Internet access as ready substitutes and if the costs that consumers must incur to switch BIAS suppliers decline.
Summary
This chapter has emphasized the importance of continually monitoring industry conditions and adapting regulatory policy as conditions change. When technological change and changes in demand and cost functions transform an industry from a natural monopoly to one in which multiple firms can compete to serve customers without raising industry costs, regulatory policy should change accordingly. Setting prices to reflect costs initially and then awarding incumbent suppliers some pricing flexibility while reducing industry entry barriers often can better serve consumers than can steadfast adherence to historic industry regulations. Of course, it is often difficult to determine precisely when competition has developed to the point where it can adequately protect consumers. Furthermore, some regulators may be reluctant to encourage major industry changes, in part because change typically fosters uncertainty and uncertainty can sometimes bring unpleasant surprises. In addition, industry deregulation can entail the loss of jobs for regulators, their staff, and their colleagues. Despite these personal costs of deregulation, it can bring enormous social benefits when an industry develops to the point where competition can adequately protect consumers. This is a general theme that will resurface in subsequent chapters of this book.
Questions and Problems
1. A firm's cost of producing Q units of output is C(Q) = 100 + Q + Q². This cost function generates a U-shaped average cost curve with minimum efficient scale at Q = 10. Determine whether this industry is a natural monopoly when the market demand function is:
a. Q(P) = 100 − 3P.

b. Q(P) = 10 − 0.1P.
2. When MCI originally entered the interstate telecommunications market, AT&T argued that MCI was cream-skimming. What is cream-skimming? What data would you need to assess the validity of AT&T's claim?
3. By the early 1970s, the FCC had permitted entry into the market for private-line telecommunications services, but it did not permit entry into the message toll service market. Do you believe this was appropriate regulatory policy at the time? Why or why not?
4. A firm's cost of producing x units of product X and y units of product Y is C(x, y) = 100 + 20x + 10y + xy. The firm always produces less than 10 units of product X and less than 10 units of product Y. Does this cost function exhibit economies of scope?
5. Do you think there are economies of scope in the provision of local telephone service and cable television service? What difference does the existence of economies of scope make for the optimal regulatory policy?
6. The Telecommunications Act of 1996 prohibited a regional Bell operating company from offering long-distance telephone service to its local telephone customers until the FCC deemed the company's local telephone market to be sufficiently competitive. Do you think these entry restrictions constituted appropriate regulatory policy?
7. Does the local cable television company have a monopoly? Should its rates be regulated?
8. Should the market for local telephone service be fully deregulated? When answering this question, be sure to identify the relevant actual and potential suppliers of local telephone service. Also think about the different types of customers who purchase local telephone service and any relevant network externalities associated with local telephone service.
9. The retail prices of wireless telephone services are not regulated in the United States. Do you think this is appropriate regulatory policy?
10. Identify one possible advantage and one possible disadvantage of net neutrality regulation. On balance, do you believe net neutrality regulation constitutes appropriate regulatory policy?

Notes
1. Optimal regulatory policy to prevent cream-skimming by less efficient firms is analyzed in William Brock and David Evans, "Cream-Skimming," in David Evans, ed., Breaking Up Bell (New York: North Holland, 1983). 2. This conclusion is proved in William Baumol, Elizabeth Bailey, and Robert Willig, "Weak Invisible Hand Theorems on the Sustainability of Prices in a Multiproduct Monopoly," American Economic Review 67 (June 1977): 350–365. 3. The transformation of a natural monopoly in the ITM is examined in Leonard Waverman, "The Regulation of Intercity Telecommunications," in Almarin Phillips, ed., Promoting Competition in Regulated Markets (Washington, D.C.: Brookings Institution Press, 1975). For an early comprehensive study of this industry, see Gerald Brock, The Telecommunications Industry (Cambridge, MA: Harvard University Press, 1981). A more recent study is Jonathan E. Nuechterlein and Philip J. Weiser, Digital Crossroads: American Telecommunications Policy in the Internet Age (Cambridge, MA: MIT Press, 2005). 4. FCC Annual Report (Washington, D.C.: United States Government Printing Office), June 30, 1957. 5. Economic Report of the President (Washington, DC: United States Government Printing Office), February 1986, Table B26. In 1949, real per capita disposable personal income was $4,915 in 1982 dollars. By 1984, it had grown to $10,427. 6. Lester Taylor, Telecommunications Demand: A Survey and Critique (Cambridge, MA: Ballinger Publishing, 1980). 7. FCC Annual Report (Washington, DC: United States Government Printing Office), June 30, 1956, p. 34. 8. FCC Reports, Vol. 38, January 22, 1965–July 9, 1965 (Washington, DC: United States Government Printing Office, 1965): 385–386. 9. Waverman, "Regulation of Intercity Telecommunications." 10. David Evans and James Heckman, "Multiproduct Cost Function Estimates and Natural Monopoly Tests for the Bell System," in David Evans, ed., Breaking Up Bell (New York: North Holland, 1983), pp. 253–291. 11. M. Ishaq Nadiri and Mark Schankerman, "The Structure of Production, Technological Change, and the Rate of Growth

of Total Factor Productivity in the U.S. Bell System,” in Thomas Cowing and Rodney Stevenson, eds., Productivity Measurement in Regulated Industries (New York: Academic Press, 1981), pp. 219–247. 12. Allocation of Microwave Frequencies Above 890 Mc., 27 FCC 359 (1959). 13. Specialized Common Carrier Services, 29 FCC2d 870, reconsideration denied, 31 FCC2d 1106 (1971). 14. Execunet I, 561 F.2d. 15. The fact that MCI eventually entered all MTS markets might be viewed as evidence against AT&T’s claim. 16. The divestiture took effect on January 1, 1984. 17. Richard H. K. Vietor, Contrived Competition: Regulation and Deregulation in America (Cambridge, MA: Harvard University Press, Belknap Press, 1994). 18. The courts eventually required all carriers to file tariffs for the services they offered. 19. This analysis is based in part on Roger G. Noll, “The Future of Telecommunications Regulation,” in Eli Noam, ed., Telecommunications Regulation: Today and Tomorrow (New York, NY: Harcourt Brace Jovanovich, 1982), pp. 41–66. 20. Simran K. Kahai, David L. Kaserman, and John W. Mayo, “Is the ‘Dominant Firm’ Dominant? An Empirical Analysis of AT&T’s Market Power,” Journal of Law and Economics 39 (October 1996): 499–517. 21. An RBOC can, however, offer interstate telecommunication services to customers outside its region. 22. Trends in Telephone Service, table 9.5, Industry Analysis and Technology Division, Wireline Competition Bureau, Federal Communications Commission, Washington, DC, September 2010. 23. Trends in Telephone Service, September 2010, table 13.4. 24. Trends in Telephone Service, September 2010, table 1.2. 25. See Federal Communications Commission, In the Matter of Access Charge Reform Price Cap Performance Review for Local Exchange Carriers, Sixth Report and Order in CC Docket Nos. 96-262 and 94-1, Report and Order in CC Docket No. 99-249, and Eleventh Report and Order in CC Docket No. 96-45, Washington, DC, adopted May 31, 2000. 26. Pub. L. No. 104-104, 110 Stat. 56 (1996) (codified as amended in scattered sections of 47 U.S.C.). 27. See, for example, Alfred E. Kahn, Timothy J. Tardiff, and Dennis L. Weisman, “The Telecommunications Act at Three Years: An Economic Evaluation of Its Implementation by the Federal Communications Commission,” Information Economics and Policy 11 (December 1999): 319–365. 28. Trends in Telephone Service, September 2010, table 8.9. 29. See, for example, Lisa Wood and David Sappington, “On the Design of Performance Measurement Plans in the Telecommunications Industry,” Telecommunications Policy 28 (December 2004): 801–820. 30. National Cable and Telecommunications Association, “America’s Cable Industry: Working for our Future,” 2015, p. 29 (available at https://www.ncta.com/sites/prod/files/Impact-of-Cable-2014-NCTA.pdf). 31. Drew DeSilver, “CDC: Two of Every Five U.S. Household Have Only Wireless Phones,” Pew Research Center RSS, Washington, DC, July 8, 2014 (available at http://www.pewresearch.org/fact-tank/2014/07/08/two-of-every-five-u-shouseholds-have-only-wireless-phones/). 32. Cisco Visual Networking Index: Forecast and Methodology, 2014–2019 White Paper (available at http://www.cisco.com /c/en/us/solutions/collateral/service-provider/ip-ngn-ip-next-generationnetwork/white_paper_c11-481360.html). 33. Figure 14.8 and portions of the ensuing discussion are drawn from Jan Krämer, Lukas Wiewiorra, and Christof Weinhardt, “Net Neutrality: A Progress Report,” Telecommunications Policy 37 (October 2013): 794–813. 34. 
For a discussion of how information traverses the Internet, see Rus Shuler, "How Does the Internet Work?" whitepaper, Pomeroy IT Solutions, Hebron, KY, 2002 (available at https://web.stanford.edu/class/msande91si/www-spr04/readings/week1/InternetWhitepaper.htm). 35. The situation differs across countries. In the European Union, for example, incumbent suppliers typically are required to make all their key network elements available for use by competitors. Consequently, consumers often have a choice among several suppliers of DSL services. See J. Scott Marcus, "Network Neutrality Revisited: Challenges and Responses in the EU and the US," Report for the European Parliament's Committee on the Internal Market and Consumer Protection, IP/A/IMCO/2014-02, Brussels, December 2014.

36. Robert Hahn and Scott Wallsten, "The Economics of Net Neutrality," Economists' Voice 3 (June 2006): 1–7. 37. Federal Communications Commission, In the Matter of Protecting and Promoting the Open Internet, Report and Order on Remand, Declaratory Ruling, and Order, GN Docket No. 14-28, FCC 15-24, Washington, DC, adopted February 26, 2015 (available at http://transition.fcc.gov/Daily_Releases/Daily_Business/2015/db0312/FCC-15-24A1.pdf). 38. The FCC defines a network management practice to be "a practice that has a primarily technical network management justification, but does not include other business practices. A network management practice is reasonable if it is primarily used for and tailored to achieving a legitimate network management purpose, taking into account the particular network architecture and technology of the broadband Internet access service." 39. For a discussion of how BIAS suppliers may have slowed the delivery of Netflix traffic in an attempt to secure higher payments from Netflix, see Drew Fitzgerald and Shalini Ramachandran, "Netflix-Traffic Feud Leads to Video Showdown," Wall Street Journal, February 18, 2014. 40. For instance, Comcast, a major supplier of Internet access services, owns NBCUniversal, which sells a variety of video programming. 41. Of course, the reduced quality associated with reduced network capacity can lower the amount that consumers are willing to pay for access to the BIAS supplier's network. Consequently, a BIAS supplier that is permitted to charge for data prioritization may not always wish to reduce its network capacity. For additional thoughts on this issue and additional discussion of other advantages and disadvantages of net neutrality regulations, see Shane Greenstein, Martin Peitz, and Tommaso Valletti, "Net Neutrality: A Fast Lane to Understanding the Trade-offs," Journal of Economic Perspectives 30 (Spring 2016): 127–150.

15 Regulation of Potentially Competitive Markets: Theory and Estimation Methods

We have examined why regulation can enhance welfare in natural monopoly settings. Yet regulation is often observed in other settings. The purpose of this chapter is to assess the role of regulation in potentially competitive markets. Specifically, this chapter provides an introductory theoretical analysis of the implications of price and entry/exit regulation for firm behavior and social welfare. This theory is relevant to many forms of economic regulation, but is of particular relevance to the regulation of the transportation industry, which is considered in chapter 16. Theory of Price and Entry/Exit Regulation The ensuing discussion focuses on how price regulation and entry/exit regulation affect the decisions of industry suppliers and thereby affect welfare. Two cases are considered. In the first case, price is set above the prevailing production cost, and entry is prohibited. In the second case, price is set below cost, and exit is prohibited. The first case characterizes historic regulation in the trucking and airline industries. The second case arose in the airline and railroad sectors. We first review the basic economic principles that underlie any thorough analysis of the effects of price and entry/exit regulation. Then we consider the different empirical methodologies that can be employed to actually measure the effects of such regulation. Direct Effects of Price and Entry/Exit Regulation: The Competitive Model The welfare effects of regulation are derived by comparing the industry equilibrium under regulation with the equilibrium that would have occurred in the absence of regulation. This comparison typically relies on conjectures about the equilibrium outcomes in the absence of regulation. For simplicity, we assume initially that a competitive equilibrium would arise in the absence of regulation. This assumption is not appropriate in all settings, and so it will be relaxed. In particular, when the minimum efficient scale of a firm is large relative to market demand, only a few firms are likely to operate in equilibrium in the absence of regulation. First-best effects Recall that social welfare is maximized in a competitive equilibrium. Therefore, if price regulation causes price to deviate from marginal cost in an economy that is currently at a competitive equilibrium, then regulation must reduce welfare. Specifically, if price is set above marginal cost, then an inefficiently small amount of the regulated service will be produced and consumed. In contrast, if the price is set below marginal cost, then either too much of the service is consumed (if firms are required to meet demand) or too little is consumed and shortages will prevail (if firms are not required to serve all prevailing demand).

Generally, the greater the divergence is between price and marginal cost, the larger the welfare loss to society will be. To consider the effects of price and entry regulation, suppose the market demand curve and the average cost curve of industry producers are as depicted in figure 15.1. The competitive equilibrium price, denoted by P*, is equal to the minimum average cost. In the setting of figure 15.1, twenty firms will operate at the competitive equilibrium, each producing at the minimum efficient scale, so total industry output is Q*. Recall that the minimum efficient scale is the smallest level of output at which average cost is minimized. Social welfare is maximized at the competitive equilibrium, because price equals marginal cost (so allocative efficiency is achieved); and the total cost of producing Q* is minimized, because the efficient market structure prevails (so productive efficiency is achieved).

Figure 15.1 Effects of Price and Entry Regulation: Competitive Model

Now consider regulation that requires industry suppliers to set a price that exceeds P*. The reduction in consumer surplus resulting from the higher price is the area under the demand curve between the regulated price and P* (a trapezoid in figure 15.1). A portion of this reduction in consumer surplus accrues to firms in the form of increased profit. Suppose the regulatory agency prohibits any entry into the industry. In that case, each of the twenty firms will produce one-twentieth of the (smaller) quantity demanded at the regulated price, so each firm's average cost will rise above the minimum average cost, and total industry profit will be the difference between the regulated price and this higher average cost multiplied by industry output (the area of a rectangle in figure 15.1). Subtracting total industry profit from the loss in consumer surplus provides the welfare loss from regulation, which is the shaded area in figure 15.1.
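The following sketch works through this accounting with hypothetical numbers chosen to mimic figure 15.1 (twenty firms, a minimum efficient scale of 10, and a linear demand curve); the demand and cost functions are illustrative assumptions, not values taken from the chapter.

```python
# Hypothetical numbers illustrating the welfare loss depicted in figure 15.1.
# Demand: Q(P) = 410 - 10P. Each of 20 firms has cost C(q) = 100 + q + q^2,
# so minimum efficient scale is q = 10 and minimum average cost is 21.

N_FIRMS = 20

def demand(p):
    return 410 - 10 * p

def avg_cost(q):
    return (100 + q + q**2) / q

p_star = 21.0                      # competitive price = minimum average cost
q_star = demand(p_star)            # 200 units in total; 10 per firm

p_reg = 25.0                       # regulated price above the competitive level
q_reg = demand(p_reg)              # 160 units; only 8 per firm with entry barred

# Consumer surplus lost: area between the two prices under the linear demand curve.
cs_loss = 0.5 * (q_star + q_reg) * (p_reg - p_star)               # 720

# Industry profit earned at the regulated price.
profit = (p_reg - avg_cost(q_reg / N_FIRMS)) * q_reg              # 560

welfare_loss = cs_loss - profit                                   # 160
# The two components discussed in the next paragraph:
allocative_loss = 0.5 * (q_star - q_reg) * (p_reg - p_star)       # forgone output: 80
cost_inefficiency = (avg_cost(q_reg / N_FIRMS) - p_star) * q_reg  # higher unit cost: 80

print(welfare_loss, allocative_loss + cost_inefficiency)          # both equal 160
```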

The welfare reduction stems from two sources. First, welfare declines because output declines from Q* to the smaller quantity demanded at the regulated price. The resulting welfare loss is the area of triangle abc. Second, welfare declines because an inefficient market structure prevails. Each firm produces one-twentieth of the regulated output, a quantity that is less than the minimum efficient scale, so regulation increases each firm's average cost. The area of the rectangle between the new and the original average cost levels in figure 15.1 measures the increase in the cost of producing the regulated output relative to the preregulation equilibrium. Given the prevailing price regulation, entry regulation increases social welfare. Because the regulated price exceeds average cost, firms would have an incentive to enter the industry. If one firm were to enter the industry, each firm would produce a smaller quantity of output, so average cost would rise further, to AC′.1 Consequently, the total industry cost of supplying the regulated output would increase by the corresponding increase in average cost multiplied by that output. To avoid this additional welfare loss, the regulator would be wise to prohibit entry, given the price it has set. Indeed, given the price it has set, the regulator could increase welfare by reducing the number of firms that serve consumers. Doing so could secure a more efficient market structure. If, for example, the number of active firms were reduced from twenty to nineteen, each firm's output would increase. If each firm's output remains below the minimum efficient scale, then each firm's average cost would be lower with nineteen active firms than with twenty. Therefore, the total cost of supplying the regulated output would be reduced by eliminating one of the firms. The potential benefit from restricting entry here takes as given the prevailing price regulation. Entry regulation and price regulation together reduce welfare below the level that would be achieved in the absence of regulation in the present setting.
Second-best effects
If all markets in an economy are competitive, then regulation that causes price to deviate from marginal cost in a market would reduce social welfare. In contrast, if price initially differs from marginal cost in one market, then regulation that sets a price different from marginal cost in another market can increase welfare. This conclusion is an implication of the theorem of second best:2 when one product is regulated, extending regulation to cover substitute products can enhance welfare in some settings. To illustrate this point, consider the case of two products, A and B. Two types of firms supply product A. Type 1 firms specialize in supplying product A and produce it at a constant marginal cost, c1. Type 2 firms concentrate on supplying the substitute product, B, in a different market but can also supply product A at unit cost c2 > c1. Figure 15.2 depicts the marginal cost curves for the two different types of firms and the market demand curve for good A. In the absence of regulation, a competitive equilibrium is achieved in which only type 1 firms supply product A at price c1. Now suppose a regulator requires type 1 firms to set a price no lower than a floor that exceeds c2. In this case, (unregulated) type 2 firms will supply Q1 units of product A at price c2 in equilibrium.

Figure 15.2 Second-Best Effects of Price Regulation on Productive Efficiency

Now suppose that additional regulation requires all firms to set a price for product A that is no lower than the same regulated floor. Further suppose that consumers slightly prefer the product supplied by type 1 firms, so all consumers will purchase product A from type 1 firms if both types set the same price. Under the expanded regulation, all firms will set the floor price, but only type 1 firms will serve consumers in equilibrium. The expanded regulation affects welfare in two ways. First, price increases from c2 to the regulated floor, which reduces consumer surplus by the area under the demand curve between the two prices. Second, because more efficient firms are serving consumers, the industry cost of production falls, so industry profit increases. If the area of rectangle c2edc1 exceeds the area of triangle abe, then the expansion of regulation increases social welfare. Welfare increases because the expanded regulation enables the more efficient type 1 suppliers to drive the less efficient type 2 suppliers from the market, and the cost savings exceed the reduction in consumer surplus that arises from the higher equilibrium price. We will return to this conclusion when we examine the effects of simultaneous price regulation in the railroad and trucking industries.
Direct Effects of Price and Entry/Exit Regulation: The Imperfectly Competitive Model
Now consider a setting where the minimum efficient scale is large relative to market demand, so only a few firms serve consumers. In this setting, each firm will supply a significant share of the market and so presumably anticipates that its output decision will affect the market price. We employ the Cournot model of oligopoly (which was presented in chapter 4) to examine the impact of regulation in this setting. In the Cournot model, each firm chooses its quantity to maximize its profit, taking as given the quantity decisions of the other industry suppliers. Figure 15.3 illustrates a Cournot equilibrium with three firms, where each firm produces q′ units of output. The resulting market price is P′, which is the price at which consumer demand D(P′) is 3q′. Recall that firms typically earn positive profit at a Cournot equilibrium, because firms restrict supply to keep the market price above the average cost of production.

Thus, P′ exceeds each firm’s average cost of producing q′ units of output in figure 15.3.

Figure 15.3 Effects of Price and Entry Regulation: Imperfectly Competitive Model

First consider the effect of price regulation, holding constant the number of industry suppliers. Because the equilibrium price (P′) exceeds the competitive price (P*), regulation that establishes a market price above the Cournot equilibrium price will reduce welfare. In contrast, regulation that reduces the market price toward P* will increase welfare.

Before we consider the impact of entry regulation on this imperfectly competitive market, recall that in a competitive market, firms are small, so their output decisions do not affect the market price. Consequently, the entry of a single new supplier does not affect the profit of incumbent suppliers, nor does it affect consumer surplus (because it does not affect the market price). Therefore, the change in welfare caused by the entry of a firm into a competitive market is precisely the amount by which the profit of the new industry supplier changes. Thus, entry into a competitive industry is profitable (and therefore occurs) if and only if it increases welfare. The interests of society and the interests of individual firms coincide perfectly in a competitive industry, so entry occurs to the point where social welfare is maximized.

A corresponding harmony of interests does not prevail in the presence of imperfect competition. When the output of an industry supplier is large relative to market demand, entry typically will affect the market price, and so it will affect consumer surplus and the profit of incumbent suppliers. Consequently, even though a new supplier will enter the industry whenever it can secure positive (extranormal) profit by doing so, the supplier’s entry may reduce welfare. Welfare can decline, for example, if the profit of the new supplier and the increase in consumer surplus that arises as entry reduces the market price are outweighed by the reduction in the profit of the incumbent suppliers. (Entry reduces the profit of incumbent suppliers by reducing the market price.) It is also possible that entry into an imperfectly competitive market will not occur (because it is not profitable for the potential entrant) even when it would increase welfare. For example, if entry would cause

a substantial reduction in the industry price, entry is unlikely to be profitable, even though it would generate a substantial increase in consumer surplus. In this case, fewer than the welfare-maximizing number of industry suppliers will be active in equilibrium in the absence of any price or entry regulation.

To illustrate these conclusions more concretely, consider the Cournot model in which industry suppliers sell a homogeneous product. Suppose the market demand function is D(P) = 100 − P (where P denotes price), each supplier’s total cost of producing q units of output is C(q) = 10q, and a firm that wishes to enter the industry incurs an additional cost of 150. This entry cost might represent the cost of constructing a production facility or advertising one’s product, for example. Given a fixed number of active firms, we can calculate the Cournot equilibrium. These calculations have been performed for settings where the number of active firms varies between one and seven. Table 15.1 records the equilibrium levels of price, firm profit, consumer surplus, and social welfare.

Table 15.1 Profit and Welfare under Cournot Competition

Number of Firms       1      2      3      4      5      6      7
Price                55     40     33     28     25     23     21
Firm Profit       1,875    750    356    174     75     15    −23
Consumer Surplus  1,013  1,800  2,278  2,592  2,813  2,976  3,101
Social Welfare    2,888  3,300  3,347  3,288  3,188  3,067  2,937
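Readers who want to verify the entries in table 15.1 can do so with a short computation. The following Python sketch (ours, not part of the original text) solves the symmetric Cournot equilibrium for each number of firms using the demand function D(P) = 100 − P, unit cost 10, and entry cost 150 stated above.

```python
# Reproduces the numbers in table 15.1 for symmetric Cournot competition with
# inverse demand P = 100 - Q, constant unit cost 10, and an entry cost of 150.

def cournot_outcomes(n, a=100.0, c=10.0, entry_cost=150.0):
    """Symmetric Cournot equilibrium with n firms and inverse demand P = a - Q."""
    q = (a - c) / (n + 1)                 # output per firm
    Q = n * q                             # industry output
    P = a - Q                             # market price
    profit = (P - c) * q - entry_cost     # profit per firm, net of entry cost
    cs = Q ** 2 / 2                       # consumer surplus under linear demand
    welfare = cs + n * profit             # total surplus
    return P, profit, cs, welfare

if __name__ == "__main__":
    print(f"{'n':>2} {'price':>7} {'profit':>9} {'cons. surplus':>14} {'welfare':>9}")
    for n in range(1, 8):
        P, profit, cs, w = cournot_outcomes(n)
        print(f"{n:>2} {P:7.1f} {profit:9.1f} {cs:14.1f} {w:9.1f}")
```

Running the sketch reproduces the table (up to rounding) and confirms that welfare peaks at three firms even though free entry supports six.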

Observe that six firms will operate in equilibrium in the absence of any entry or exit restrictions. If a seventh firm were to enter the industry, the new supplier’s profit (like the profit of the six incumbent suppliers) would be negative. Such entry is unprofitable, and so will not arise in equilibrium. Also observe that when six firms are active in the market, each earns positive profit, so no firm has an incentive to exit the industry. Consequently, this is a free-entry equilibrium.

Now observe that social welfare is maximized when only three, not six, firms are active. The industry price is higher (so consumer surplus is lower) when three firms operate than when six firms operate. However, industry costs are substantially lower when only three firms operate. Specifically, entry costs of 450 are avoided when three fewer firms enter the market. The sum of consumer surplus and industry profit is higher when three firms operate than when six firms operate. In this setting, then, more than the welfare-maximizing number of firms will operate in the absence of any price or entry regulation.

It can be shown that for Cournot competition with homogeneous products, the number of firms that will be active in a free-entry equilibrium always exceeds the welfare-maximizing number of industry suppliers. In contrast, if the firms supply differentiated products and consumers value product diversity sufficiently highly, then free entry may produce fewer than the welfare-maximizing number of industry suppliers (and thus too little of the product diversity that consumers value highly).3 These observations imply that the private interests of industry suppliers typically differ from the interests of society in the presence of imperfect competition. Because free entry can lead to either an excessive or an insufficient number of suppliers, it is not apparent whether entry/exit regulation would increase or reduce welfare. Since no general conclusions can be drawn, each individual case must be analyzed separately.

To illustrate such an analysis, return to the setting depicted in figure 15.3. In the absence of regulation, the equilibrium price is P′, and each firm produces q′

and so operates below the minimum efficient scale. Consequently, price is too high, and too few firms operate in the industry. Now suppose regulation mandates that all firms set a price below P′ and prohibits both entry and exit. Each firm now supplies more output than q′. Because this larger output is closer to the minimum efficient scale, the average cost of production declines, and welfare increases.

That price and entry/exit regulation can, in principle, increase welfare does not imply that such regulation is appropriate. Considerable information about industry demand and cost conditions is required to determine how to design regulations that will enhance welfare. In practice, such information often is not readily available, in part because cost and demand functions are difficult to quantify and can change considerably over time. That only a small number of firms are active in an industry does not imply that industry regulation will increase welfare, in part because the threat of industry entry may compel the active firms to serve consumers well. Regulation in such settings is likely to reduce welfare. Attempting to fine-tune imperfectly competitive markets through price and entry regulation is a perilous task that historically has not met with much success. Reliance on unfettered competition often is preferable in markets that are not natural monopolies.

Indirect Effects of Price and Entry Regulation

We now illustrate some potential drawbacks to price and entry/exit regulation by analyzing settings where a regulator sets a price above cost and limits the entry of new industry suppliers, as regulators have done in the trucking and airline industries, for example.4

Excessive nonprice competition

By specifying the prices at which firms must sell their products, regulation eliminates price as an instrument through which firms compete. To increase the demand for their product, firms then naturally engage in nonprice competition. Such competition can take many forms, including increasing product quality, changing product features, increasing warranty coverage, and advertising to enhance consumer perception of the product. The intensity of nonprice competition varies from industry to industry, depending on such factors as the potential for product differentiation. Some products, like automobiles, have many features that can be differentiated fairly readily. Other products, like natural gas, are inherently similar and thus difficult to differentiate. Even in the latter case, though, firms can compete by providing better customer service to accompany the product.

A second factor that influences the degree of nonprice competition is the ability of firms to collude. A regulated industry can be fertile ground for collusion, because the same firms often interact repeatedly over time without the threat of entry that might disrupt collusive arrangements. If firms can cooperate and prevent excessive nonprice competition, then they may be able to secure substantial profit. In the absence of effective cooperation, the firms may compete away extranormal profits through nonprice competition.

To illustrate the impact of nonprice competition on welfare, consider a simple setting where a product can be produced at either high quality or low quality. All consumers prefer the high-quality product (good h) to the low-quality product (good l). The two products are imperfect substitutes, so the demand for product h depends on both the price of product h (denoted Ph) and the price of product l (denoted Pl). Similarly, the demand for product l depends on both Pl and Ph. Because product h is of higher quality than product l, consumers will purchase product l only if its price is sufficiently far below the price of product h. To reflect the fact that products of higher quality typically are more costly to produce, assume that the unit cost of

supplying product h (denoted ch) exceeds the unit cost of supplying product l (denoted cl). The industry is assumed to achieve a competitive equilibrium in the absence of regulation, so the price of each product is equal to its marginal (and average) cost of production. Formally, Ph = ch and Pl = cl. The associated equilibrium quantities of products h and l are the amounts demanded at these prices. As illustrated in figure 15.4a, the equilibrium quantity of product h is the amount demanded when its price is ch and the demand curve is Dh(Ph; Pl = cl). This notation reflects the fact that the demand for product h varies with the price of product l. The equilibrium quantity of product l, the amount demanded when its price is cl, is determined in corresponding fashion (see figure 15.4b). Notice that the demand for product l is zero when the price of both products is ch. This lack of demand arises because, with identical product prices, every consumer prefers the high-quality product, so no consumer will purchase the low-quality product.
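To anticipate the welfare-loss measure discussed below (the shaded triangle in figure 15.4b), the following Python sketch computes the consumer surplus forgone when price regulation eliminates the low-quality option. The linear demand curve for the low-quality product and the cost figures are assumptions made for this illustration; they do not come from the text.

```python
# Illustrative computation of the welfare loss in figure 15.4b: the consumer
# surplus lost when regulation at price c_h eliminates the low-quality product.
# The demand for the low-quality product, given that the high-quality product
# sells at c_h, is assumed linear: D_l(P_l) = b * (c_h - P_l), so it hits zero
# at P_l = c_h, consistent with the discussion above.

c_l, c_h = 6.0, 10.0       # assumed unit costs of the low- and high-quality goods
b = 50.0                   # assumed slope of the low-quality demand curve

q_l = b * (c_h - c_l)                      # low-quality sales at P_l = c_l
welfare_loss = 0.5 * (c_h - c_l) * q_l     # area of the shaded triangle

print(f"low-quality output before regulation:      {q_l:.0f}")
print(f"consumer surplus lost when it disappears:  {welfare_loss:.0f}")
```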

Figure 15.4 Effects of Price Regulation on Nonprice Competition

Now consider a regulatory policy that requires each firm to charge price ch for its product, regardless of the product’s quality (perhaps because product quality is difficult for regulators to measure accurately). Under this policy, suppliers will compete for customers by increasing the quality of their products. In particular, no supplier will market a low-quality product, because no consumer will purchase the low-quality product when a high-quality product is available at the same price. When the price of product l is set at ch, the demand curve for product h shifts out to Dh(Ph; Pl = ch), because the price of a substitute product has increased. At the equilibrium that prevails under regulation, only product h is produced and consumed; product l is not produced.

The welfare loss from regulation is measured by the area of the shaded triangle under the demand curve Dl(Pl; Ph = ch) in figure 15.4b. This area reflects the amount of consumer surplus forgone when the option to buy the low-quality, low-priced product is no longer available. Notice that the increased area under the demand curve for good h that arises when the curve shifts out to Dh(Ph; Pl = ch) does not represent a welfare gain. Instead, this area measures the increased willingness to pay for product h that arises when product l is no longer available. The increased area under the demand curve for product h implies that the welfare loss from eliminating product h would be more pronounced when product l is not available than when product l is available. Notice also that the increase in industry production costs is not a measure of the loss that regulation imposes on consumers. It is true that consumers who previously purchased the low-quality product are now paying a higher price, ch rather than cl. However, these consumers are receiving a higher

quality product than they did in the absence of regulation. The increased value that these consumers derive from the increased product quality they receive partially offsets the increase in production costs. Finally, observe that if the regulated price is set above ch, say at the level shown in figure 15.4a, then the welfare loss is the sum of the two shaded triangles in the figure’s two panels. In addition to the loss that consumers suffer because product l is no longer available at price cl, welfare declines further when the price of product h exceeds its marginal cost of production.

The key conclusion here is that although regulation can limit some avenues through which firms compete, it often is unable to restrict all such avenues. To attract consumers, firms will focus their activities on avenues that are unimpeded by regulation. The resulting competition typically will reduce the extranormal profit generated by price and entry regulation. Although some competition could increase welfare, the competition that arises in equilibrium can be excessive and so can reduce welfare. Nonprice competition induced by regulatory price controls played a central role in the airline industry, as we discuss in chapter 16.

Productive inefficiency

Price and entry regulation also can induce productive inefficiency. When regulation enables firms to secure extranormal profit, workers (especially unionized workers) often will attempt to extract a portion of the prevailing surplus. They may do so, for example, by demanding higher wages. A simple transfer of rent from shareholders to workers does not reduce social welfare. However, higher wage rates induce suppliers to replace labor with other inputs, like plant and equipment. Suppliers will continue to operate with the capital-labor ratio that minimizes production costs. However, the ratio will not be optimal from a social perspective if the cost of labor to the firm (that is, the wage rate) exceeds the opportunity cost of labor to society.

A second source of productive inefficiency from price and entry regulation is the continued operation of inefficient firms that would have exited the industry in the absence of regulation. In an unregulated environment, new firms replace inefficient incumbent suppliers. Entry regulation neutralizes the mechanism by which efficient firms thrive and inefficient firms are driven from the market. These observations suggest that industry deregulation may be characterized by simultaneous exit (by less efficient firms) and entry (by more efficient firms).

To illustrate how entry restrictions can impede industry performance, consider the effects of restrictions on bank branching. Until the 1970s, almost all states in the United States limited the number of branches that a bank could have in the state. As states began to eliminate these restrictions, banks’ operating costs and the losses on the loans they made declined sharply.5 This evidence is consistent with the hypothesis that restrictions on bank branches enabled less efficient banks to survive in the marketplace. By interfering with the natural process by which more efficient suppliers replace less efficient suppliers, regulation raised costs and impeded performance in the banking sector.

Indirect Effects of Price and Exit Regulation

Now consider a setting where the price of a regulated service is set below cost, and the regulated supplier is required to supply all demand at the established price, even though the firm incurs a financial loss in doing so.
In practice, regulators sometimes implement cross-subsidization. That is, they set the price for one service below cost and the price of another service above cost to ensure that, in aggregate, the regulated supplier earns a normal profit. They may do so, for example, by setting a uniform price for a service in all geographic regions, even though the cost of delivering the service differs substantially across geographic regions. To illustrate, the unit cost of supplying local telephone service typically is much higher in rural regions than in urban regions due to differences in population density. However, regulators often set similar prices across geographic regions to promote universal subscription to the telephone network.

Cross-subsidization

To illustrate the impact of cross-subsidization on social welfare, consider a regulated industry that offers products 1 and 2. Suppose the products are independent, so the demand for each product is not affected by the price of the other product. The unit cost of supplying product 1 is c1, and the unit cost of supplying product 2 is c2, which exceeds c1. Suppose the regulator sets the price of product 2 below c2 (see figure 15.5b). This below-cost price generates a welfare loss equal to the area of triangle abh. The below-cost price also forces the firm to incur a financial loss, measured by the corresponding rectangle in the figure. If the regulator is to ensure that the firm earns a normal profit so that it can attract capital and avoid bankruptcy, she must increase the price of product 1 above the socially efficient level c1 (see figure 15.5a).

Figure 15.5 Cross-Subsidization

Relative to marginal cost pricing of both products, this policy creates a welfare loss equal to the sum of the areas of triangles abh and def. Thus, in attempting to increase consumption of product 2 by reducing its price below cost, cross-subsidization creates welfare losses in both the market for product 2 and the market for product 1.

Reduced capital formation

If a firm is forced to serve unprofitable markets for a considerable period of time, it may have difficulty attracting investment capital. If meager earnings or even bankruptcy are deemed likely, investors will only supply capital to the firm if they are promised relatively high returns. Such promises increase the firm’s cost of capital, which typically induces the firm to reduce investment. The reduced investment, in turn, can lead to reduced capacity, productivity, and product quality. Limited earnings and an associated reduction in industry investment played a central role in the railroad industry, as we will discuss further in chapter 16.
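Returning to the cross-subsidization example in figure 15.5, the welfare accounting above is easy to put into numbers. The Python sketch below uses assumed linear demands and costs (none of these values appear in the text) to compute the deficit created by pricing product 2 below cost, the markup on product 1 needed to offset it, and the deadweight losses in both markets.

```python
# Numerical sketch of cross-subsidization (figure 15.5), with assumed values:
# D1(P) = 100 - P and cost c1 = 20 for product 1; D2(P) = 60 - P and cost
# c2 = 40 for product 2; the regulator prices product 2 below cost at 30.
import math

def d1(p): return max(100.0 - p, 0.0)   # assumed demand for product 1
def d2(p): return max(60.0 - p, 0.0)    # assumed demand for product 2

c1, c2 = 20.0, 40.0
p2 = 30.0                                # regulated price of product 2, below c2

deficit = (c2 - p2) * d2(p2)             # loss the firm incurs on product 2
dwl_2 = 0.5 * (c2 - p2) * (d2(p2) - d2(c2))   # triangle abh in figure 15.5b

# Smallest price of product 1 satisfying (p1 - c1) * d1(p1) = deficit,
# i.e. the markup on product 1 that just covers the product-2 deficit.
a, b, c = -1.0, 100.0 + c1, -(100.0 * c1 + deficit)
p1 = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

dwl_1 = 0.5 * (p1 - c1) * (d1(c1) - d1(p1))   # triangle def in figure 15.5a

print(f"break-even price of product 1: {p1:.2f}")
print(f"deadweight loss in market 2:   {dwl_2:.2f}")
print(f"deadweight loss in market 1:   {dwl_1:.2f}")
print(f"total welfare loss:            {dwl_1 + dwl_2:.2f}")
```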

Regulation and Innovation

The discussion to this point has focused on the static effects of price and entry/exit regulation. Such regulation also can affect dynamic efficiency by affecting incentives to undertake research and development (R&D). The importance of technological innovation in the modern economy cannot be overstated. In a famous study, Nobel laureate Robert Solow concluded that technological innovation accounted for 90 percent of the doubling of per capita nonfarm output in the United States between 1909 and 1949.6 Given the importance of technological innovation in the economy, it is important to understand how regulation can affect the pace of technological progress.

Two points warrant emphasis at the outset. First, dynamic efficiency does not imply that firms invest at the fastest possible rate. More innovation is not always better, because innovation is costly. Ideally, innovation should be pursued to the point where the marginal expected increase in welfare from innovation is equal to the marginal cost of innovation. Second, even though perfect competition ensures static efficiency, it does not necessarily ensure dynamic efficiency. The limited profit that firms earn in a competitive equilibrium may stifle industry investment and R&D. If regulation elevates price above cost and allows firms to earn extranormal profit, regulation might increase the rate of innovation above the level that would arise in a competitive setting. Retained earnings can be an important source of funds for R&D expenditure, and so regulation that increases the level of industry profit can promote investment and innovation. Price regulation also can affect innovation through its impact on nonprice competition. Recall that when regulation limits price competition, firms often will attempt to increase product quality to secure customer patronage. R&D can be instrumental in discovering ways to enhance product quality.

Effect of regulatory lags on innovation

If a regulator always sets price to eliminate extranormal profit, then the regulated firm will have little or no incentive to innovate. In contrast, delays in resetting price to match cost can provide some incentive for regulated firms to pursue cost-reducing innovation. If a firm knows that it can retain the extra profit generated by a cost reduction for a considerable period of time, then the firm will have some incentive to pursue cost-reducing innovations.

Lags in the process of setting price to match realized cost also can affect the rate at which a regulated firm adopts cost-reducing innovations.7 To see why, suppose that the regulator always sets price equal to average cost and the firm’s unit cost of production is initially c. Now suppose an innovation arises that would allow the firm to reduce its average cost to c″ (see figure 15.6). The regulated firm can do one of three things. First, it can choose not to adopt the innovation. Second, it can fully adopt the innovation immediately and thereby reduce cost to c″. By doing so, the firm would secure profit equal to the area of rectangle cabc″ for a period, where a period is the length of time between regulatory reviews. Once a review takes place, the regulator will reduce price to c″, thereby eliminating the firm’s profit from the innovation. As long as the cost of adopting the innovation (which may entail modifying its operating procedures) is less than cabc″, the existence of a regulatory lag provides the regulated firm with the necessary incentive to adopt the innovation.
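The adoption condition just described (adopt the innovation if the profit retained during the lag exceeds the adoption cost) is easy to quantify. The following Python sketch uses assumed numbers, not figures from the text, to show how lengthening the regulatory lag can turn a cost-reducing innovation from unprofitable to profitable for the regulated firm.

```python
# Illustrative sketch (assumed numbers) of how a regulatory lag creates an
# incentive to adopt a cost-reducing innovation. Price stays at the old unit
# cost c until the next review, so the firm keeps (c - c_new) * q per period.

def adoption_gain(c, c_new, q, lag, discount_rate):
    """Discounted profit the firm keeps before price is reset to the new cost."""
    per_period = (c - c_new) * q
    return sum(per_period / (1 + discount_rate) ** t for t in range(1, lag + 1))

c, c_new, q = 10.0, 8.0, 1_000        # old unit cost, new unit cost, output
adoption_cost = 4_000.0               # one-time cost of adopting the innovation
r = 0.05                              # per-period discount rate

for lag in (1, 2, 3, 5):
    gain = adoption_gain(c, c_new, q, lag, r)
    decision = "adopt" if gain > adoption_cost else "do not adopt"
    print(f"lag = {lag} periods: discounted gain = {gain:8.0f} -> {decision}")
```

With these assumed values, a one- or two-period lag is too short to justify the adoption cost, but a lag of three or more periods makes adoption profitable.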

Figure 15.6 Effects of Regulatory Lag on Adoption of Innovations

Third, the firm may adopt the innovation gradually. Specifically, suppose that the firm can (1) initially adopt the innovation partially and thereby reduce its cost to c′, and (2) after the next regulatory review (which reduces price to c′), adopt the innovation fully and thereby reduce its cost to c″. This gradual adoption strategy will enable the firm to secure profit cadc′ in the first period and c′efc″ in the second period. If it is equally costly to undertake immediate and gradual adoption, the firm will increase its (undiscounted) profit by defb if it adopts the innovation gradually rather than immediately. Thus, regulatory lag can influence not only the incentive to pursue cost-saving innovations but also the speed at which innovations are adopted.

Methods for Estimating the Effects of Regulation

Having examined the predictions of economic theory about the effects of price and entry regulation, we now consider how the quantitative effects of such regulation can be estimated. Such estimation can help assess the validity of the foregoing theoretical analysis of the effects of regulation by determining whether actual industry experience is consistent with the predictions of the theory. For example, do price and entry regulation induce excessive nonprice competition, and do price and exit regulations that maintain price above cost impede capital formation? Such estimations also can help assess the merits of alternative policies, such as deregulation.8

Overview of Estimation Methods

Suppose that an industry is or has been subject to price and entry/exit regulation, and we seek to estimate the impact that regulation has had on important economic variables, such as price, cost, product quality, capital investment, wages, and technological innovation. To assess the impact of regulation, we must compare the values that these variables would have taken in the absence of regulation with the values they actually did take under regulation. Because the industry is or has been regulated, in principle it is not difficult to collect data on these variables under regulation.9 The more challenging task is to derive a nonregulatory benchmark, that is, to determine the values that these variables would have taken in the absence of regulation. Three basic methods have been used to estimate a nonregulatory benchmark. We discuss each of these methods in turn.

Intertemporal Approach

The intertemporal (or time-series) approach compares the industry under study during years for which it was regulated with years for which it was not regulated. The nonregulatory benchmark is then the industry under study at a different time. This method requires that the sample period for which one has data includes years for which the industry was regulated and years for which it was not regulated. A simple comparison of values for the relevant economic variables in years with regulation with values of the same variables in years without regulation can be misleading. Because many factors other than prevailing regulations can change over time, observed differences over time may reflect factors other than the prevailing regulatory regime. For example, suppose one observes that productivity increases substantially after an industry is deregulated. The increased productivity could arise because regulation stifled innovation. Alternatively, though, the increased productivity might reflect reductions in input prices or the exogenous development of worldwide technologies (such as the Internet) that affect industry costs. Thus, to accurately assess the impact of regulation on variables of interest using the intertemporal approach, one must control carefully for changes in all other factors that may have affected the variables.

When employing the intertemporal approach, the stock price of a firm can provide valuable information about the effect of regulation on the firm’s profit. To assess the impact of regulation on a firm’s profit, event studies examine how the firm’s share price changes when deregulation is announced.10 Because a firm’s stock price is a measure of the firm’s anticipated future earnings, a decline in a firm’s stock price when deregulation is announced suggests that deregulation is likely to reduce future earnings, and so regulation may have permitted inflated earnings, perhaps by limiting entry. In contrast, an increase in the firm’s stock price when industry deregulation is announced may suggest that regulation constrained the firm’s earnings, by restricting prices or stifling innovation, for example. Event studies require considerable care, because stock prices reflect all relevant information that is available to buyers and sellers of stocks. Consequently, it is important to determine the extent to which an observed change in a firm’s stock price reflects the announcement of deregulation rather than some other change in the firm’s environment.
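To make the event-study idea concrete, here is a minimal Python sketch with made-up return series (none of these numbers come from the text). It estimates a simple market model over a pre-announcement window and then measures the cumulative abnormal return around a hypothetical deregulation announcement.

```python
# Bare-bones event-study sketch with illustrative (made-up) daily returns.
# Idea: estimate R_firm = alpha + beta * R_market over a "clean" window, then
# measure abnormal returns (actual minus predicted) around the announcement.

def ols(x, y):
    """Simple OLS intercept and slope for y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical returns: an estimation window and a short event window.
market_est = [0.001, -0.002, 0.003, 0.000, 0.002, -0.001, 0.004, -0.003]
firm_est   = [0.002, -0.001, 0.004, 0.001, 0.003, -0.002, 0.005, -0.004]
market_evt = [0.001, 0.002, -0.001]          # days around the announcement
firm_evt   = [-0.010, -0.015, -0.004]        # firm's returns on those days

alpha, beta = ols(market_est, firm_est)
abnormal = [rf - (alpha + beta * rm) for rf, rm in zip(firm_evt, market_evt)]
car = sum(abnormal)

print(f"alpha = {alpha:.4f}, beta = {beta:.2f}")
print("abnormal returns:", [round(ar, 4) for ar in abnormal])
print(f"cumulative abnormal return: {car:.4f}")
```

A negative cumulative abnormal return would, as discussed above, be consistent with regulation having protected the firm's earnings.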
Application: New York Stock Exchange

From its inception in 1792 until major deregulatory legislation in 1975, the New York Stock Exchange (NYSE) set minimum commission rates on transactions (for example, buying or selling stock) conducted by its members.11 Because its members always chose to set their rates at the prescribed minimum, the NYSE effectively set commission rates. The NYSE also required commission rates to be independent of the size of the order. Its members were not allowed to offer quantity discounts, despite the presence of substantial scale

economies in performing securities transactions. Although the NYSE set standards for the behavior of its members, these standards were enforced by the industry’s regulator, the Securities and Exchange Commission (SEC).

Regulation produced considerable discrepancies between commission rates and cost. In December 1968, the commission rate set by the NYSE was $0.39 per share. Table 15.2 compares this established commission rate with the estimated transaction cost per share traded. The table reports that consumers with large orders paid fees well above cost, whereas consumers with small orders paid fees below cost.

Table 15.2 Commission Rate, Cost, and Profit on $40 Stock by Order Size, 1968

Shares per Order   Commission per Share ($)   Estimated Cost per Share ($)   Profit per Share ($)
100                0.39                       0.55                           −0.16
200                0.39                       0.45                           −0.06
300                0.39                       0.41                           −0.02
400                0.39                       0.39                            0.00
500                0.39                       0.37                            0.02
1,000              0.39                       0.32                            0.07
5,000              0.39                       0.24                            0.15
10,000             0.39                       0.23                            0.16
100,000            0.39                       0.21                            0.18

The deregulation of commission rates on the NYSE began in the early 1970s. In 1971 the SEC ordered the NYSE to allow its members and their clients to freely negotiate commission rates on large orders, specifically, on the portion of an order in excess of $500,000. This deregulation largely applied to institutional investors, such as managers of pension funds. The SEC continued to deregulate throughout the early 1970s by reducing the minimum order size for which negotiation was allowed. The legislative branch of the government entered the deregulatory process by passing the Securities Act Amendments of 1975. This legislation mandated that the SEC prohibit the NYSE from fixing commission rates. Figure 15.7 presents the average commission rates for individual and institutional investors during the first five years of deregulation. Rates fell dramatically. Rates dropped by about 25 percent on average when deregulation was first introduced. Rates for large orders declined, while rates for small orders (at least for noninstitutional transactions) increased. Rates declined by more than 50 percent for orders in excess of 10,000 shares.12

Figure 15.7 Average Commission Rates on the New York Stock Exchange, 1975–1980 Source: Gregg A. Jarrell, Journal of Law and Economics 27 (October 1984): 273–312. Reprinted by permission of the University of Chicago Press. Data from the Securities and Exchange Commission.

Intermarket Approach

The intertemporal approach to measuring the impact of regulation often is not very useful if an industry is currently regulated and has been regulated for a long time. In such settings, relevant data that precede the onset of regulation may not be available. Even when the data are available, though, many aspects of an industry can change substantially over long periods of time. Consequently, it can be difficult to distinguish between the effects of regulation and the effects of other factors in such settings.

The substantial deregulation that occurred during the 1980s provided data that allowed economists to employ the intertemporal approach to estimate the impact of price and entry regulation. Before that time, though, industries generally had been regulated for decades, so economists looked to other markets for a nonregulatory benchmark. Specifically, they employed the intermarket (or cross-sectional) approach to compare outcomes in two markets that offered similar products and had similar demand and cost functions but that differed in one essential respect: one market was regulated and the other was not. A comparison of outcomes in two such markets sometimes can provide reasonable estimates of the effects of regulation.

The intermarket approach often compares outcomes where firms employ similar technologies to supply similar products in distinct geographic regions (such as different states). To attribute observed differences in outcomes to regulation, one must control for other differences between the geographic regions. For instance, input prices or demand elasticities may differ across regions. Implementing effective controls is complicated by the fact that regulation is endogenous. Because a state chooses its regulatory policy, a state that adopts substantial regulation may differ from one that chooses to adopt little or no regulation. For example, states like California and New York often lead the way in implementing new regulatory policy, and these states are atypical on several dimensions. Suppose that the same traits that explain the presence of different regulations across states also directly influence price and other economic variables that regulation may affect. Further suppose that price is, say, higher in states with

more regulation. This correlation may not reflect a causal relationship between regulation and price. Rather, it might arise because, for instance, states with higher per capita income prefer to have more regulation, and in addition have higher prices, because demand is higher due to higher income. In this example, higher income is the underlying cause of both higher prices and more regulation. Consequently, failing to control adequately for income differences across states may lead to the inappropriate conclusion that regulation causes higher prices.

Application: Advertising of eyeglasses

Students of regulation often are struck by how pervasive and idiosyncratic it can be. It is not surprising that electric utilities and local telephone companies are regulated. But why should state regulatory agencies control the advertising of eyeglasses and eye examinations? Yet in the 1960s, approximately three-quarters of states did just that. Some states only outlawed advertising that conveyed the price of eye exams. Other states prohibited advertising that conveyed any information concerning eyeglasses and eye examinations.

A ban on advertising can either increase or reduce price. Through its advertising, a firm may be able to increase consumer awareness of its product and thereby increase the demand for its product, which supports a higher sales price. This effect suggests that a ban on advertising would tend to reduce price. However, advertising can also reduce consumer search costs and encourage consumers to engage in comparison shopping. More intense price competition may ensue, which produces lower prices in equilibrium. This effect suggests that an advertising ban would raise prices. In light of these two countervailing forces, it is unclear whether regulation that restricts advertising would lead to higher or lower prices.

To estimate the effect of advertising regulation on price, Lee Benham compared the prices of eyeglasses in states with no advertising restrictions with the corresponding prices in states that imposed advertising restrictions.13 His data consist of a 1963 national survey of individuals who had purchased eyeglasses, so his data set is a cross-section of individuals in different states at a particular point in time. He found that the average price paid for eyeglasses in states without advertising restrictions was $26.34, whereas the corresponding price in states with advertising restrictions was $33.04. This evidence suggests that advertising restrictions reduced the intensity of price competition by raising consumer search costs and thereby increased the equilibrium prices of eyeglasses.

In assessing any empirical study, it is important to ask whether other factors might underlie the observed outcomes. In the present case, could the observed price differential reflect factors other than state regulations? Not all eyeglasses are the same. Suppose that consumers with higher income tend to buy eyeglasses of higher quality. If states with advertising regulations also tend to have higher per capita income, then it is possible that the observed price differential is due not to advertising restrictions but rather to differences in the quality of the eyeglasses that are purchased. To attempt to control for this and other factors, Benham estimated the price paid for eyeglasses as a function of family income, gender, age, family size, and, of course, whether the state restricted advertising. He found that, when controlling for other factors, state regulations resulted in a $7.48 increase in the price paid for eyeglasses.

Application: 44 Liquormart decision

Some studies employ a blend of the intertemporal and intermarket approaches to examine the impact of regulation on industry performance. In its 44 Liquormart decision in May 1996,14 the U.S. Supreme Court struck down a Rhode Island state law banning the advertising of liquor prices. Jeffrey Milyo and Joel Waldfogel studied the effect of bans on advertising by comparing liquor prices before and after this judicial

decision.15 However, rather than try to control for all factors that could also cause retail liquor prices to change over time (factors such as wholesale liquor prices and the wages of retail clerks), the authors compared the change in retail liquor prices in Rhode Island between June 1995 and June 1997 with the corresponding change in prices for the same products in the neighboring state of Massachusetts. Price advertising was legal in Massachusetts during this time. Therefore, if liquor stores in Rhode Island and Massachusetts were subject to the same factors that influence costs other than advertising regulation, then any difference between the observed price changes in Rhode Island and those in Massachusetts can reasonably be attributed to differences in regulation. The authors found that the prices for advertised liquor products in Rhode Island fell by 20 percent more than the corresponding change in prices for the same products in Massachusetts. Thus, this study provides additional evidence that bans on advertising tend to increase prices.

Counterfactual Approach

The counterfactual approach to estimating the impact of regulation attempts to estimate the industry outcomes that would have arisen had the industry not been regulated. This approach has been used, for example, to estimate the magnitude of the allocative inefficiencies caused by controls on the price of crude oil. A typical application of the counterfactual approach proceeds as follows. First, the market demand curve and the firm’s marginal cost curve are estimated. Then the amount of output that is produced under regulation is compared with the quantity that would be produced if price and marginal cost were equated, using the estimated demand and marginal cost functions. Finally, the levels of consumer surplus associated with these two output levels can be compared to provide an estimate of the impact of regulation on consumer welfare if, absent regulation, the competitive outcome had prevailed.

The counterfactual approach is not well suited to estimating the impact of regulation on industry outcomes, in part because the approach requires accurate estimates of demand and cost functions, which can be difficult to derive in practice. Furthermore, the approach requires numerous assumptions about the industry outcomes that would arise in the absence of regulation. It is often assumed that regulation does not affect production costs. If, in fact, regulation actually reduces productivity, then the assumption that regulation does not affect the firm’s cost function will tend to underestimate the benefits from deregulation. In practice, it is generally very difficult to determine the cost function that would have prevailed in the absence of regulation. For example, even industry experts did not predict that airlines would adopt the cost-saving hub-and-spoke system following industry deregulation (see chapter 16). Furthermore, if imperfect competition actually prevails after regulation is removed, then the assumption that a competitive equilibrium prevails will overestimate the gains from deregulation.

The counterfactual approach is sometimes employed in conjunction with either the intertemporal or intermarket approach. Recall that the intertemporal approach compares outcomes in periods when regulation is imposed with outcomes in periods when it is not imposed. Alternatively, one might try to predict the outcomes that would have arisen during the period of regulation if regulation had not been imposed.
To do so, one can use data from the unregulated period to estimate how exogenous variables like input prices, prices of substitutes, and the business cycle affect such key variables as price, the number of firms, and other endogenous variables. With this estimated relationship, one can then employ the values for the exogenous variables from the regulated period to predict the outcomes that would arise in the absence of regulation. One can then compare these predicted values with the actual observed values during the period

of regulation to derive a measure of the effects of regulation. This approach can be viewed as a combination of the counterfactual and intertemporal approaches. If instead data are available from regulated and unregulated markets for only one year, one can perform an analogous experiment by using the intermarket approach in conjunction with the counterfactual approach.

Application: State usury laws

Regulations that specify a maximum rate of interest that an institution can charge for lending money are known as usury laws. In the 1970s, most states had some form of usury law. For example, only eight states did not impose a ceiling on the interest rate for conventionally financed residential mortgages. Fifteen states limited this rate to 10 percent or lower, so banks could not charge homeowners interest of more than 10 percent on the money they borrowed to purchase a house. Even though many of these laws had been in place for decades, they had little or no impact because, for much of that time, market-clearing interest rates fell below the legal maximum. However, the rampant inflation of the 1970s increased interest rates substantially, and so the usury ceilings became a binding constraint on lending institutions.

To understand the implications of usury ceilings, it is important to note that borrowers and lenders care primarily about the real rate of interest, not the nominal rate of interest. The real rate of interest is the difference between the interest rate actually charged (which is the nominal rate of interest) and the prevailing rate of inflation. For example, if the nominal interest rate is 9 percent and the inflation rate is 5 percent, then the real interest rate is 4 percent. To show why the real interest rate is of primary concern, suppose an individual borrows $100,000 from a bank on January 1, 2017, knowing that she must repay this amount plus interest on December 31, 2017. If the nominal rate of interest on the loan is 5 percent, then the borrower must pay back $105,000 at the end of the year. If the inflation rate is also 5 percent in 2017, then the $105,000 the bank receives at the end of the year buys the same amount of goods that $100,000 purchased at the beginning of the year (because prices have increased by 5 percent due to inflation). In this event, the bank essentially gains nothing despite having forgone the use of $100,000 for a year. If instead the inflation rate had been 0 percent in 2017, then the $105,000 received on December 31, 2017, would have been the equivalent of $5,000 more in goods than the bank could have bought at the beginning of the year. In this case, the real interest rate is 5 percent, whereas in the former case it was 0 percent. This is why the real interest rate is what matters for lending and borrowing decisions.

To consider the effects of a usury ceiling, figure 15.8 plots the demand curve for real loans (that is, after controlling for the inflation rate), denoted D(r), and the supply curve, S(r), where r is the nominal interest rate (the rate actually observed in the market). Let i denote the inflation rate associated with these demand and supply curves. In an unconstrained market, equating the supply and demand curves yields a market-clearing interest rate of r′. Notice that the associated real interest rate is then r′ − i. With a usury ceiling of ru, regulation is not binding, because the market-clearing rate r′ is less than ru. Now suppose the rate of inflation increases to i + d.
Holding the nominal rate fixed, the increase in inflation lowers the real interest rate. As a result, consumers demand more loans at a given nominal interest rate, so the market demand curve shifts out to D′(r). Of course, a higher rate of inflation means that lending institutions are less willing to supply loans at a given nominal rate, so the market supply curve shifts in to S′(r). In the absence of a usury ceiling, the new market-clearing rate would be r′ + d, which is just the original nominal rate r′ plus the change in the rate of inflation. Notice that the real amount of loans remains at L′. As long as the nominal rate of interest can adjust freely as the rate of inflation changes, only the nominal numbers change. Real economic activity remains the same.
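As a quick check on the loan example above, the short Python sketch below evaluates the $100,000 loan at a 5 percent nominal rate under 5 percent and 0 percent inflation. It uses the exact relation (1 + nominal)/(1 + inflation) − 1 for the real rate; the simple difference used in the text is the standard approximation, and the two coincide in this example.

```python
# The loan example from the text: $100,000 borrowed for one year at a
# 5 percent nominal rate, evaluated under two inflation scenarios.

principal = 100_000
nominal_rate = 0.05

for inflation in (0.05, 0.00):
    repayment = principal * (1 + nominal_rate)              # dollars repaid
    real_value = repayment / (1 + inflation)                 # in start-of-year dollars
    real_rate = (1 + nominal_rate) / (1 + inflation) - 1     # exact real rate
    print(f"inflation {inflation:4.0%}: repay ${repayment:,.0f}, "
          f"worth ${real_value:,.0f} in real terms, real rate {real_rate:.2%}")
```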

Figure 15.8 Effects of a Usury Ceiling

Now consider the implications of having a usury ceiling, labeled ru in figure 15.8. When inflation rises to i + d, the usury ceiling prevents the nominal interest rate from fully adjusting to r′ + d. It can only rise to ru. Because the real interest rate has fallen from r′ − i to ru − i − d (recall that the real interest rate remains the same if the nominal rate is allowed to rise from r′ to r′ + d), the demand for loans exceeds the supply of loans. When the inflation rate is i + d and the nominal interest rate is ru, consumers demand L0 loans, but only L″ are supplied. The excess demand for loans is L0 − L″.

A situation like this one arose in the market for residential loans in the mid-1970s. In states with usury ceilings, the maximum allowed nominal interest rate often was less than the rate that would clear the market by equating supply and demand. Homeowners demanded more loans than lending institutions were willing to supply. To secure one of the loans for which supply was scarce, borrowers were willing to accept loan terms that were less attractive to them and more attractive to lenders. For instance, a borrower might agree to repay the loan more rapidly. Alternatively or in addition, the borrower might agree to make a larger down payment on the home he is purchasing. A larger down payment makes the loan less risky for the bank, because if the borrower fails to repay the loan, the bank can assume ownership of the home and thereby effectively keep the borrower’s down payment.

Formally, let rm denote the market-clearing nominal interest rate for residential mortgages. If rm exceeds the ceiling ru, then the usury ceiling binds. The greater the disparity between the market-clearing rate rm and the usury ceiling ru, the greater the excess demand for loans will be, and so the more attractive actual loan terms are likely to be for lenders. Specifically, on average, loans are likely to have a shorter maturity and a higher ratio of property value to loan (because borrowers have made larger down payments). The difficulty in examining this relationship empirically is that the market-clearing rate is not observed when the usury

ceiling binds. Some researchers were able to overcome this difficulty by employing a blend of the counterfactual and intertemporal approaches.

To estimate the effects of usury ceilings on loan terms, Steven Crafton examined quarterly data for residential mortgages between 1971 and 1975.16 During this time, usury ceilings were binding in some but not all quarters. The author employed data from quarters in which the market-clearing interest rate for residential mortgages was observable (because this rate was less than the stipulated usury ceiling) to estimate the relationship between exogenous variables and the market-clearing rate. The estimated relationship took the form

rm = b0 + b1 rAAA + b2 radv + b3 (rm)−1,  (15.1)

where the variables on the right-hand side of equation 15.1 are the exogenous variables. The variable rAAA is the interest rate on AAA-rated bonds, radv is the interest rate paid by lending institutions to borrow funds from the Federal Home Loan Bank Board, and (rm)−1 is the mortgage rate from the previous quarter. This relationship can be used to estimate the (unobserved) value of rm when the usury ceiling binds. For a quarter in which the ceiling binds, one can insert the observed values of rAAA, radv, and (rm)−1 into equation 15.1. The resulting number is the estimated value of the unobserved market-clearing rate. The estimated discrepancy between the market-clearing interest rate and the usury ceiling is then the difference between this estimate and ru. The remaining step in this analysis is to examine how the observed terms of loans made during periods when the usury ceiling was binding varied with this estimated discrepancy. As the theory predicts, the study found that as the estimated discrepancy increased (so the excess demand for loans increased), the terms of the loan became more favorable to lenders. Specifically, the ratio of property value to loan increased (because lenders demanded higher down payments), and the maturity of the loan declined (because lenders required more rapid repayment of the loan).

Measuring the Effects of Price and Entry Restrictions: Taxicab Regulation

Regulatory History

The taxicab industry was largely unregulated prior to the 1920s. However, as the automobile became an integral part of transportation, the demand for and the supply of taxicab services grew, as did taxicab regulations. Fare regulation became increasingly common in the 1920s, as did other forms of regulation, such as the requirement that taxicabs be insured. However, entry into the taxicab business was largely unregulated.

Massive entry into the taxicab industry occurred after the onset of the Great Depression. This entry reflected in part the sharp rise in the unemployment rate. With few other job opportunities available, many individuals took to driving taxicabs. With the increased number of competitors, fare competition intensified, and taxicab drivers and owners experienced declining profit. The drivers and owners sought relief in the form of regulation, and they often were able to convince local governments to restrict entry into the taxicab industry:

The regulation movement spread throughout the country. In Massachusetts, Frank Sawyer [owner of Checker Taxi Company] urged the state to regulate taxis, and in 1930 the legislature limited the number of cabs in Boston to 1,525 (the same as in 1980). New York City first limited the number of cabs in 1932 under the sponsorship of Mayor Jimmy Walker, but when Walker was forced to resign when it was discovered that he had been bribed by one of the taxi companies, the

attempt at regulation failed. Five years later, however, the Haas Act in New York City froze the number of taxi medallions at 13,500.17

This outcome is consistent with the predictions of the economic theory of regulation (see chapter 10). Each taxicab company stood to gain considerably from entry regulation, whereas the regulation would impose relatively little harm on each consumer. Furthermore, there are far fewer taxicab companies than consumers, so the companies found it less costly than consumers to organize effectively. Consequently, the taxicab companies were more effective in securing regulation than consumers were in preventing it.

The regulations instituted in the 1930s have had an enduring impact on the industry. As we explain below, regulation prevented entry in most taxicab markets (with the notable exception of Washington, DC) for many years. After 1979, a few cities reduced restrictions on fares and the number of taxicabs. However, the taxicab industry generally did not experience the widespread deregulation that has occurred in many other industries since the 1970s.

Entry Restrictions

Taxicab regulation generally encompasses control over price, the number of competitors, and certain practices. Cities typically set fares or ceilings for fares. Entry restrictions take different forms in different cities. Many cities require a taxicab to own a medallion before it can offer its services to the public. Medallions are issued by the city and are limited in supply. The number of medallions places an upper bound on the number of taxicabs that operate. (The number of taxicabs can be less than the number of medallions if a taxi company chooses not to use a medallion it owns.) Medallions can be sold and their ownership transferred in many cities. Cities that have pursued this type of entry regulation include Baltimore, Boston, Chicago, Detroit, New York, and San Francisco. An alternative method of limiting the number of competitors is to limit the number of taxi companies, and possibly the number of taxicabs. Such limits have been imposed in Cleveland, Dallas, Houston, Los Angeles, Philadelphia, Pittsburgh, and Seattle.18

Many cities have severely constrained the number of taxicabs. New York City issued 13,566 medallions in 1937. After nearly 2,000 of these medallions were returned to the city during the World War II era, New York sold some additional medallions toward the end of the twentieth century. As of 2014, taxicab owners in New York City held 13,437 medallions, roughly the same number as in 1937.19 As the city’s population expanded over the years, residents have had to suffer with a declining number of taxicabs per capita. Similar experiences have occurred in other major cities. In Boston, for example, the number of taxicabs has been fixed at 1,525 since 1930. In Detroit, the number has been 1,310 for more than forty years.20

Value of a Medallion

One method to assess the value of price and entry restrictions is to determine how much a firm is willing to pay to operate in the industry. This information can be difficult to obtain in many industries, but not in the taxicab industry. Because a taxicab must have a medallion to operate in many cities, and because the number of medallions is fixed, prospective taxicab operators must purchase a medallion in a secondary market. If the prevailing price of a medallion is positive, then the number of competitors must be less than the number that would exist under free entry (where, effectively, the price of a medallion is zero). The market value of a medallion is more than just a rough indicator of the effectiveness of entry restrictions.
A medallion’s price tells us exactly what the most informed agents believe to be the discounted stream of extranormal profits from economic regulation. To see why this is the case, consider an individual

with a taxicab who is faced with two alternatives. He can freely enter and operate in Washington, DC, or he can buy a New York City medallion and operate there. The equilibrium price of a medallion is set by the market, so a taxicab owner who buys a medallion must be indifferent between the two alternatives. If this were not the case, then the market for medallions would not be in equilibrium. Specifically, if a taxicab owner could expect to earn more by buying a medallion and operating in New York City, then this expectation would increase the demand for New York City medallions and drive the price up until the price made new taxicab owners indifferent between the two alternatives. Because the Washington, DC, market is subject to free entry, it is reasonable to assume that taxicab owners there earn a normal profit. Consequently, the price of a medallion in New York City must reflect the additional profit that can be earned by operating in a regulated market rather than an unregulated one. Specifically, it is equal to the discounted sum of future excess profit that a taxicab owner can reasonably expect to earn by operating in New York City. For example, suppose operating a taxicab in a regulated market yields extranormal profits of $10,000 per year for the infinite future. If the interest rate is 4 percent, then the market value of a medallion is $10,000/0.04, or $250,000. Competition for medallions should then drive the price of a medallion up to $250,000.

Figure 15.9 presents the market price of a New York City taxicab medallion between 1960 and 2014. Notice that the price of a medallion was nearly $250,000 in 2003. The price climbed to almost $1 million in 2013 before declining to nearly $800,000 in 2014. These figures reflect the extranormal profit that fare and entry regulation provide. With these levels of profit at stake, it is not surprising that entry restrictions persist. The typical medallion owner in New York City would have stood to lose as much as $800,000 in 2014 if free entry into the city’s taxicab industry were permitted. In contrast, the value of deregulation to the typical consumer of taxicab services would be far less than this amount. Furthermore, a few large taxicab companies hold many of the city’s medallions, which tends to make this interest group relatively effective at providing political support in exchange for continued entry restrictions. Nevertheless, the New York City Council passed a law in July 2003 that authorized the sale of up to 900 additional medallions, in part to raise revenue for the city. At the same time, though, regulated taxi fares were increased by 25 percent, with the express purpose of preserving the value of a taxi medallion. Thus, the political compromise was to strengthen price regulation while relaxing entry regulation.
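The perpetuity arithmetic behind the $250,000 figure is shown below in a few lines of Python. The inversion in the second function, which backs out the annual extranormal profit implied by an observed medallion price, is our own illustration rather than a calculation from the text.

```python
# Present value of a perpetuity: a medallion yielding $10,000 of extranormal
# profit per year forever, discounted at 4 percent, is worth $10,000 / 0.04.

def perpetuity_value(annual_profit, interest_rate):
    """Market value of a claim to annual_profit per year forever."""
    return annual_profit / interest_rate

def implied_annual_profit(medallion_price, interest_rate):
    """Annual extranormal profit implied by an observed medallion price."""
    return medallion_price * interest_rate

print(perpetuity_value(10_000, 0.04))          # 250000.0, as in the text
print(implied_annual_profit(800_000, 0.04))    # 32000.0 per year (illustrative)
```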

Figure 15.9 Price of a New York City Taxicab Medallion, 1960–2014 Source: The New York City Taxicab Fact Book 2006 (New York: Schaller Consulting, 2006); the New York City Taxi & Limousine Commission 2010, and The New York City Taxi & Limousine Commission Fact Book 2014 (New York: New York City Taxi & Limousine Commission, 2014).

The Rise of Ride Sharing
Attempts to preserve the value of taxi medallions have been complicated in recent years by the rise of ride sharing services, including the services offered by Uber and Lyft. These companies allow individuals to join a ride sharing service by downloading the service's app on their smartphones. Service members can then use their smartphone to secure a ride from a service driver. When a member requests a ride, she receives an estimate of the time at which she will be picked up and the cost of the specific trip she requests. The actual cost of the trip is billed automatically to the member's credit card. Uber began its commercial operations in San Francisco in 2010. The company expanded its U.S. operations to New York City in early 2011, and began to serve Boston, Chicago, Seattle, and Washington, DC, later that year. Uber also initiated service in Paris in 2011. By 2014, Uber operated in more than 100 cities worldwide. Uber was serving more than 300 cities in fifty-eight countries by 2015.21 Although Uber and other ride sharing services have expanded rapidly since 2010, their expansion has met considerable resistance. To illustrate, soon after Uber began to serve San Francisco, the city's Metropolitan Transit Authority and the California Public Utilities Commission ordered Uber to cease its operations on the grounds that the company was providing taxi services without proper licensing. Uber contested the orders, arguing that its ride sharing service was not a standard taxi service, but rather a service that matched its members' demand for and supply of local transportation services. Uber eventually agreed to some regulation of its operations by the commission, but Uber is not subject to the same fare regulations that are imposed on

taxi companies in San Francisco. Government agencies in other countries—including Spain, Thailand, and the Netherlands—have also challenged Uber’s right to operate.22 Such government action is commonly advocated by taxicab companies, who view ride sharing services as a threat to their livelihood.23 Indeed, recent declines in the value of taxi medallions may stem in part from the growing popularity of ride sharing services. Between 2013 and 2015, the price of a taxi medallion in New York City declined from almost $1 million to nearly $800,000. The corresponding decline in Chicago was from $360,000 to $240,000.24 The pricing of ride sharing services differs from the pricing of taxi services. Most taxicabs charge a fixed rate that does not change during the day or even throughout the typical year. In contrast, the prices charged by ride sharing services can vary from one hour to the next, depending on the prevailing demand and supply conditions. For instance, prices tend to rise during inclement weather, when individuals prefer not to walk to their destination or wait for public transportation at a bus stop that is exposed to the elements. Prices also can rise at the conclusion of a sporting event or concert when many individuals simultaneously seek rides home from the venue. Prices also often increase in major cities after the clock strikes midnight on New Year’s Eve, when many individuals who have been out toasting the arrival of the new year are in no condition to drive home. During the early hours of January 1, 2016, Uber’s prices were reported to have surged to nearly ten times their normal levels.25 Such surge pricing is controversial. Ride sharing companies have been accused of price gouging and unfair pricing practices, and these accusations have led to calls for stringent regulation of the prices charged for ride sharing services. However, these calls have not been heeded in most jurisdictions.26 This may be the case in part because, even though consumers always prefer lower prices, prices that reflect prevailing market conditions often can increase consumer welfare by increasing the supply of rides at times when they are most highly valued. Figure 15.10 illustrates the welfare gains that can arise when the price of local transportation (“rides”) changes to reflect prevailing demand conditions. In the figure, D0 denotes the demand for rides under normal conditions, and D1 denotes the corresponding demand when rides are valued particularly highly, because of inclement weather, for instance. The S curve in figure 15.10 represents the supply of rides. The positive slope of the curve reflects the fact that drivers will supply more rides when the price of rides increases.

Figure 15.10 Welfare Gains from Surge Pricing

The value P0 in figure 15.10 denotes the price at which the demand and supply of rides are equated under normal conditions. Suppose initially that regulation precludes any increase in price above P0. Then when demand increases to D1, consumers will demand Q1 rides (at price P0). However, only Q0 rides will be supplied, so an excess demand of Q1 − Q0 will prevail. Now suppose the price of rides is permitted to increase to the level that equates demand and supply. Then when demand increases to D1, the price of rides will increase to P*, and Q* rides will be supplied at this price. The shaded region in figure 15.10 (the area of region abd) represents the increase in welfare that arises when the price of a ride is permitted to increase from P0 to P* following an increase in demand from D0 to D1. This region reflects the difference between the value that consumers derive from the additional rides induced by the higher price (as reflected in demand curve D1) and the cost of providing these rides (as reflected in supply curve S).27
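To make the geometry of figure 15.10 concrete, the sketch below works through a hypothetical linear example; the demand and supply parameters are invented for illustration and are not estimates for any actual ride sharing market. It computes the excess demand that arises when the price is held at P0 during a demand spike and the area of triangle abd that is recovered when the price is allowed to rise to P*.

```python
# Hypothetical linear curves (all numbers are illustrative assumptions):
#   surge-period demand  D1:  Q = 1000 - 40*P
#   driver supply        S:   Q = 100 + 20*P
# and a regulated ceiling P0 = 10, the normal-period market-clearing price.

def d1(p):           # quantity of rides demanded during the demand spike
    return 1000 - 40 * p

def supply(p):       # quantity of rides drivers offer at price p
    return 100 + 20 * p

p0 = 10.0                          # capped (normal-period) price
q0 = supply(p0)                    # rides supplied at the cap: 300
shortage = d1(p0) - q0             # excess demand at the cap: 600 - 300 = 300

# Market-clearing (surge) price solves 1000 - 40*p = 100 + 20*p, so p* = 15.
p_star = 900 / 60
q_star = supply(p_star)            # 400 rides

# Willingness to pay for the marginal ride when only q0 rides are available,
# read off the inverse of D1:  p = (1000 - q) / 40.
wtp_at_q0 = (1000 - q0) / 40       # 17.5

# Area of triangle abd in figure 15.10: the gain from letting price rise to p*.
welfare_gain = 0.5 * (q_star - q0) * (wtp_at_q0 - p0)
print(shortage, p_star, q_star, welfare_gain)   # 300.0 15.0 400.0 375.0
```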

The welfare gains from surge pricing are further illustrated by comparing the outcomes in two actual settings: one where surge pricing prevailed and one where it was not operational.28 Uber's surge pricing was operational on the evening of March 21, 2015, when Ariana Grande performed a sold-out concert at Madison Square Garden in New York City. At the conclusion of the concert shortly after 10:30 P.M., requests for rides from Uber drivers increased substantially above normal levels in the neighborhood of Madison Square Garden, and the prevailing price of rides increased. A substantial increase in driver supply was subsequently observed, as figure 15.11 illustrates.

Figure 15.11 Driver Supply Increases to Match Spike in Demand Source: Jonathan Hall, Cory Kendrick, and Chris Nosko, “The Effects of Uber’s Surge Pricing: A Case Study,” University of Chicago’s Booth School of Business discussion paper, Chicago, 2016 (http://faculty.chicagobooth.edu/chris.nosko/research/effects_of_uber’s_surge_pricing.pdf).

Figure 15.12 presents the requests for rides (“Requests”), the average number of minutes between the time a ride was requested and the time a driver arrived (“ETA”), and the ride completion rate near Madison Square Garden on the night of the concert. The ride completion rate is the ratio of the number of ride requests that were fulfilled to the total number of ride requests. Figure 15.12 indicates that, despite the substantial increase in the demand for rides at the conclusion of the concert, the time required to secure a ride remained relatively short, and virtually all requests for rides were fulfilled.

Figure 15.12 Signs of Surge Pricing in Action, March 21, 2015 Source: Jonathan Hall, Cory Kendrick, and Chris Nosko, “The Effects of Uber’s Surge Pricing: A Case Study,” University of Chicago’s Booth School of Business discussion paper, Chicago, 2016 (http://faculty.chicagobooth.edu/chris.nosko/research/effects_of_uber’s_surge_pricing.pdf).

Outcomes differed sharply in New York City during the early morning hours of New Year’s Day in 2015,

when a technical glitch rendered Uber’s surge pricing inoperable for twenty-six minutes, beginning at 1:15 A.M. As figure 15.13 illustrates, the average time before a requested ride arrived increased substantially, and the completion rate declined dramatically when surge pricing was not implemented during this period of unusually high demand for rides. Many inebriated riders may not have remembered the difficulties they encountered in securing rides home early that New Year’s Day, but they were inconvenienced nevertheless.

Figure 15.13 Signs of a Surge Pricing Disruption on New Year’s Eve, January 1, 2015 Source: Jonathan Hall, Cory Kendrick, and Chris Nosko, “The Effects of Uber’s Surge Pricing: A Case Study,” University of Chicago’s Booth School of Business discussion paper, Chicago, 2016 (http://faculty.chicagobooth.edu/chris.nosko/research/effects_of_uber’s_surge_pricing.pdf).

Because ride sharing is a relatively recent phenomenon, our knowledge of the impact of ride sharing on the welfare of producers and consumers of taxi services presently is limited. However, researchers will soon employ a variety of methodologies, including those discussed in this chapter, to shed more light on this important issue.29
Summary
This chapter has provided two main analyses. First, it offered an introductory analysis of the effects of price and entry/exit regulation on industry performance. We found that the static welfare effects of price regulation depend on whether entry is also regulated, on the presence of unregulated substitutes for the regulated product, and on the nature of the prevailing industry competition. We also identified some indirect effects of price and entry/exit regulation. Setting price above cost can create excessive nonprice competition and productive inefficiencies. Setting price below cost can spread welfare losses to other markets and reduce capital investment. Although dynamic welfare effects are more difficult to classify, we noted that they could be substantial. Price regulation can reduce the incentive to innovate by limiting the return to innovation, but lags in matching price to realized cost can restore some of this incentive. Entry regulation can limit innovation by excluding potentially innovative entrepreneurs. Second, this chapter discussed alternative methods for estimating the quantitative effects of regulation. We found that one can measure the impact of regulation by comparing regulated and unregulated markets at a point in time (intermarket approach), by comparing a market before and after regulation (intertemporal approach), and by comparing actual outcomes in a regulated market with projections of outcomes that would have arisen if the market were deregulated (counterfactual approach). The chapters that follow apply the theory and the methods for measuring the effects of regulation discussed in this chapter. The applications are most direct in our discussion of regulation in the railroad,

trucking, and airline industries in chapter 16.
Questions and Problems
1. For the setting described in table 15.1, find the per-firm subsidy or tax that would induce the socially optimal number of firms to operate in the industry.
2. Suppose that the market demand function is Q(P) = 1,000 − 20P (where P denotes price). Also suppose that twenty firms operate in the industry, each firm produces the same amount of (homogeneous) output, and each firm's cost of supplying q units of output is C(q) = 10q. Regulation requires each firm to set a price of 20 for its product. Suppose an innovation becomes available to all firms that reduces each firm's unit cost from 10 to 5.
a. Derive the value of this innovation under the identified form of regulation.
b. Derive the value of this innovation if the market were not regulated and competition drove the market price to the level of the firms' unit cost of production.
3. Suppose that the government sets the price for a differentiated product above the cost of producing the product. What do you think will happen to the quality of the products that are sold in the market?
4. In 1979, New Jersey had a usury law that precluded any interest rate on conventionally financed residential mortgages above 9.5 percent. This usury ceiling was more restrictive than the one in the state of New York. People who work in New York City can choose to live in suburbs located in either New Jersey or New York. Suppose that homeowners must secure a mortgage from a bank in the state in which they live. What effect do you think the more restrictive usury ceiling in New Jersey had on housing prices in the New Jersey suburbs relative to housing prices in the New York suburbs?
5. Until the 1977 Bates v. State Bar of Arizona Supreme Court decision, attorneys were not permitted to advertise their services. The 1977 decision allowed attorneys to advertise the availability of their services and the fees they charged to perform routine legal services. What do you think were the likely economic effects of the 1977 Supreme Court decision?
6. The market value of a New York City taxicab medallion was approximately $800,000 in 2016. If the annual interest rate were 5 percent and if entry restrictions were not expected to change in the future, what is the amount of extranormal profit that, as of 2016, a taxicab owner in New York City was expected to earn annually?
7. The market value of a New York City taxicab medallion was approximately $1.3 million before ride sharing services (e.g., Uber and Lyft) began operating in the city. The market value of a medallion declined to approximately $800,000 afterward. What is the implied impact of ride sharing services on the annual expected extranormal earnings of a taxicab driver in New York City? (Assume that the relevant annual interest rate is 5 percent.)
8. How would you attempt to measure the impact of ride sharing services on the welfare of producers and consumers of taxi services in New York City? What measures of welfare would you employ? What data and methodology would you use?

Notes
1. The firm would enter the industry if permitted to do so because its profit would be positive.

2. See Richard Lipsey and Kelvin Lancaster, “The General Theory of Second Best,” Review of Economic Studies 24 (December 1956): 11–32. 3. See N. Gregory Mankiw and Michael D. Whinston, “Free Entry and Social Inefficiency,” RAND Journal of Economics 17 (Spring 1986): 48–58; and Martin K. Perry, “Scale Economies, Imperfect Competition, and Public Policy,” Journal of Industrial Economics 32 (1984): 313–333. 4. Even when entry is not expressly forbidden, a new supplier may be required to meet very stringent conditions before it is authorized to serve consumers. Such requirements can effectively amount to a prohibition on entry. 5. See Jith Jayaratne and Philip E. Strahan, “Entry Restrictions, Industry Evolution, and Dynamic Efficiency: Evidence from Commercial Banking,” Journal of Law and Economics 46 (April 1998): 239–274.

6. Robert Solow, “Technical Change and the Aggregate Production Function,” Review of Economic Studies 39 (August 1957): 312–320. 7. This analysis is from George Sweeney, “Adoption of Cost-Saving Innovations by a Regulated Firm,” American Economic Review 71 (June 1981): 437–447. 8. Also see Paul L. Joskow and Nancy L. Rose, “The Effects of Economic Regulation,” in Richard Schmalensee and Robert D. Willig, eds., Handbook of Industrial Organization, vol. 2 (Amsterdam: North-Holland, 1989), pp. 1449–1506; and Robert W. Hahn and John A. Hird, “The Costs and Benefits of Regulation: Review and Synthesis,” Yale Journal on Regulation 8 (Winter 1991): 233–278. 9. In practice, some variables (such as product quality) can be difficult to measure. 10. See G. William Schwert, “Using Financial Data to Measure Effects of Regulation,” Journal of Law and Economics 24 (April 1981): 121–158. The usefulness of this methodology is questioned in John J. Binder, “Measuring the Effects of Regulation with Stock Price Data,” RAND Journal of Economics 16 (Summer 1985): 167–183. 11. This section is based on Susan M. Phillips and J. Richard Zecher, The SEC and the Public Interest (Cambridge, MA: MIT Press, 1981); and Gregg A. Jarrell, “Change at the Exchange: The Causes and Effects of Regulation,” Journal of Law and Economics 27 (October 1984): 273–312. 12. For another example of the intertemporal approach to measuring the effects of regulation, see the discussion of the 44 Liquormart decision below. 13. Lee Benham, “The Effect of Advertising on the Price of Eyeglasses,” Journal of Law and Economics 15 (October 1972): 337–352. 14. 44 Liquormart Inc. v. Rhode Island, 517 US 484 (1996). 15. Jeffrey Milyo and Joel Waldfogel, “The Effect of Price Advertising on Prices: Evidence in the Wake of 44 Liquormart,” American Economic Review 89 (December 1999): 1081–1096. 16. Steven M. Crafton, “An Empirical Test of the Effect of Usury Laws,” Journal of Law and Economics 23 (April 1980): 135–145. See also James R. Ostas, “Effects of Usury Ceilings in the Mortgage Market,” Journal of Finance 31 (June 1976): 821–834. 17. Gorman Gilbert and Robert E. Samuels, The Taxicab (Chapel Hill: University of North Carolina Press, 1982), pp. 70– 71. 18. Mark W. Frankena and Paul A. Pautler, “An Economic Analysis of Taxicab Regulation,” Bureau of Economics Staff Report, Federal Trade Commission, Washington, DC, May 1984. 19. NYC Taxi and Limousine Commission, 2014 Taxicab Fact Book, p. 12 (available at http://www.nyc.gov/html/tlc /downloads/ pdf/2014_taxicab_fact_book.pdf). 20. The city of Chicago has taken a different path. In 1934, the city authorized the operation of 4,108 taxicabs. After reducing this number to 3,000 in 1937, Chicago has allowed more taxicabs to operate in recent years. As of 2014, there were 6,735 active taxicab medallions in Chicago. See Gwyneed Stuart, “Can Chicago’s Taxi Industry Survive the Rideshare Revolution?” Chicago Reader, October 1, 2014 (available at http://www.chicagoreader.com/ chicago/rideshare-chicagouber-lyft-uberx-taxi-industry-cab-drivers-extinct/Content?oid=15165161). 21. For additional details on Uber’s development, see Julian Chokkattu and Jordan Crook, “A Brief History of Uber,” TechCrunch, August 14, 2014 (http://techcrunch.com/gallery/a-brief-history-of-uber); and Adam Lashinsky, “Uber: An Oral History,” Fortune, June 3, 2015 (available at http://fortune.com/2015/ 06/03/uber-an-oral-history/). 22. Charles J. 
Johnson, “Timeline: History of Uber,” Chicago Tribune, March 11, 2015 (available at www.chicagotribune.com/bluesky/technology/chi-timeline-ubers-controversial-rise-20150205-htmlstory.html). 23. Taxi drivers have protested the operation of ride sharing services in many countries. For example, in January 2016, more than 2,000 taxi drivers blocked busy roads in Paris and burned tires. See Angelique Chrisafis, “France Hit by Day of Protest as Security Forces Fire Teargas at Taxi Strike,” The Guardian, January 26, 2016 (available at http://www.theguardian.com/world/2016/jan/26/french-taxi-drivers-block-paris-roads-in-uber-protest). In June 2016, Uber was fined 800,000 euros for operating UberPop, which was deemed to be an illegal transport service. UberPop effectively allowed anyone with a car to serve as a driver for the ride sharing service. Uber was also forced to discontinue this service in Madrid in 2014, but the company later introduced a corresponding UberX service that employs professional drivers. See Paul Sawers, “France Fines

Uber $900,000 over ‘Illegal’ UberPop Service, 2 Execs Spared Jail,” Venture Beat, June 9, 2016 (http://venturebeat.com /2016/06/09/france-fines-uber-900000-over-illegal-uberpop-service-2-execs-spared-jail/). 24. Robin Sidel, “Uber’s Rise Presses Taxi Lenders,” Wall Street Journal, October 21, 2015 (available at www.wsj.com /articles/ ubers-rise-presses-taxi-lenders-1445471757). 25. Daniel White, “Uber Users Are Complaining about Pricey New Year’s Eve Rides,” Time, January 1, 2016 (http://time .com/4165410/uber-new-years-eve-price-surge-rides). 26. In January 2016, New York City explicitly declined to limit surge pricing. See Josh Dawsey and Andrew Tangel, “Uber Won’t Face Limits on Surge Pricing Under NYC Council Legislation,” Wall Street Journal, January 15, 2016 (available at http://www.wsj.com/articles/uber-wont-face-limits-on-surge-pricing-under-nyc-council-legislation-1452880443). A bill to regulate surge pricing and to increase required background checks of ride sharing drivers failed in committee in the California state legislature in April 2016. 27. Actually, the area of region abd is a lower bound on the relevant reduction in welfare. Further reductions in welfare arise if some consumers who secure rides when the price is restricted to be P0 are not those who value the rides most highly. This issue is addressed more fully in chapter 17, which identifies the welfare losses caused by ceilings on the price of oil. 28. The ensuing discussion summarizes the findings reported in Jonathan Hall, Cory Kendrick, and Chris Nosko, “The Effects of Uber’s Surge Pricing: A Case Study,” University of Chicago’s Booth School of Business discussion paper, Chicago, 2016 (http://faculty.chicagobooth.edu/chris.nosko/research/effects_of_uber’s_surge_pricing.pdf). 29. One early study finds evidence of more efficient capacity utilization by drivers of ride sharing services than by drivers of taxicabs, where capacity utilization is the ratio of miles driven with a paying passenger in the car to the total number of miles driven. See Judd Cramer and Alan B. Krueger, “Disruptive Change in the Taxi Business: The Case of Uber,” NBER Working Paper 22083, National Bureau of Economic Research, Cambridge, MA, March 2016.

16 Economic Regulation of Transportation: Surface Freight and Airlines

From the mid-1970s to the early 1980s, the United States witnessed an unprecedented program of deregulation. A variety of restrictions were removed from industries that had long been under government control. This period of deregulation is important both because it generated considerable welfare gains and because it provided economists with natural experiments for testing theories about the effects of regulation. By investigating how prices, product quality, product variety, productive efficiency, and other important economic variables respond to deregulation, we can learn about the effects of regulatory policies. In other words, we can gain information as to what would have taken place in the absence of government regulation. The objective of this chapter is to employ the principles developed in chapter 15 to explore the impact of regulatory policies in several important markets of the transportation sector. We focus on the railroad, trucking, and airline sectors.1 Transportation Industry The service provided by a transport firm can be viewed as the physical movement of a good from one point to another point in geographic space. The transportation sector entails the activities of many different types of transport firms. Taxicabs transport travelers and small packages within metropolitan areas. Airlines move travelers and small packages (as well as some larger ones) across metropolitan areas. Railroads carry large loads of coal and grain, often across long distances. Pipelines also provide transportation services, although pipelines transport only a small number of raw materials like natural gas and oil. Even intercity telecommunication companies provide transportation services of a sort in that they move information between local exchanges. All these firms provide some form of transportation service, but it should be obvious that they do not all serve the same market. When it comes to the transportation of manufactured goods from the plant to the wholesaler, trucks and railroads can effectively compete, whereas taxicabs typically cannot do so. Airlines, railroads, and buses compete to transport travelers long distance, but pipelines are hardly capable of providing such a service.2 The transportation sector, broadly defined, comprises many varied markets. We can view a market in the transportation industry as comprising those consumers who demand a particular type of product to be transported from one geographic location to a second geographic location and those firms that can effectively compete to provide the service. To illustrate, a market might entail the transportation of travelers from Boston to Dallas or the transportation of steel from Pittsburgh to Chicago. How it is done is not important as long as consumers perceive firms as providing services that are reasonable substitutes. Firms can offer services that are good substitutes for one another yet use very dissimilar technologies to provide the services. For instance, in transporting travelers between geographic

locations (Philadelphia and Washington, DC, for example), airlines and railroads can sometimes be effective substitutes. For the purpose of exploring the effects of economic regulation, our interest is not so much in any particular market but rather in classes of markets. A class of markets refers to markets that have some key properties in common, specifically, those properties that are essential in assessing the impact of regulation. One such property is the distance over which a good is transported. It is important to differentiate between local transportation (for example, within a metropolitan area) and long-distance transportation (across metropolitan areas). A second property is the type of good being transported. At a minimum, we need to differentiate between the transportation of passengers and freight. Freight can be partitioned into bulk goods and nonbulk goods, where bulk goods include many raw materials (like coal, grain, and oil), whereas nonbulk goods include many manufactured goods (such as cell phones or cooking utensils). The transportation sectors we examine in this chapter include long-distance freight and long-distance passenger. The main suppliers in the long-distance freight transportation sector include railroads, trucks, water barges, pipelines, and airlines (see table 16.1). Our primary concern will be with goods that are generally too large for airlines to be an effective competitor and goods that cannot readily be moved by pipelines. The primary suppliers of these products are railroads, trucks, and water barges. Because the impact of regulation has been felt most strongly in the railroad and trucking industries, we concentrate on surface freight transportation and ignore water barges.

Table 16.1 Modal Shares of Intercity Freight Ton-Miles, Selected Years, 1929–2010 (%)

Year    Rail    Truck    Water    Pipeline    Air
1929    74.9     3.3     17.4       4.4       0.0
1939    62.4     9.7     17.7      10.2       0.0
1944    68.6     5.4     13.8      12.2       0.0
1950    56.2    16.3     15.4      12.1       0.0
1960    44.1    21.7     16.9      17.4       0.0
1970    39.8    21.3     16.4      22.3       0.2
1980    37.5    22.3     16.4      23.6       0.2
1988    37.0    25.2     15.5      21.9       0.3
1996    25.5    39.6     14.2      20.5       0.2
2000    38.8    28.4     17.0      15.2       0.3
2005    37.1    28.2     12.9      20.5       0.3
2010    67.9    44.2      8.8      16.8       0.2

Source: Clifford Winston, Thomas M. Corsi, Curtis M. Grimm, and Carol A. Evans, The Economic Effects of Surface Freight Deregulation (Washington, DC: Brookings Institution Press, 1990); U.S. Department of Transportation, Bureau of Transportation Statistics, Table 1-50: U.S. Ton-Miles of Freight, 2015, available at https://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/publications/national_transportation_statistics/index.html. Note: Includes both for-hire and private carriers.

With respect to long-distance passenger travel, the most important mode of transportation is airlines. Railroads and passenger buses can be adequate substitutes in some markets, but in many markets they are not effective competitors to airlines because of the distance being traveled and because many consumers value their time highly. In those markets, there is no adequate substitute for airline travel. Hence, it is not too severe a restriction to focus solely on the airlines as the suppliers of long-distance passenger travel. These two transportation subindustries—surface freight and airlines—both have a long history of economic regulation. In addition, both were deregulated in the past few decades and, as a result, offer

informative case studies of the impact of economic regulation. Surface Freight Transportation This section considers the regulation of the railroad and trucking industries in the United States. When assessing the effects of the regulation of surface freight transportation, it is important to consider the effects of the regulation of one mode of transportation on the other mode. One would expect the regulation of rail rates to affect the supply and demand not only for rail services but also for trucking services, since the two compete in some markets. In light of this fact, we consider both modes simultaneously.3 Regulatory History To understand the historical roots of regulation in the transportation industry, it is useful to start with the railroad industry in the second half of the nineteenth century. At that time railroads were the predominant form of long-distance transportation for both passengers and freight. During the 1870s and 1880s, the railroad industry experienced highly volatile rates for transport services, including several episodes of aggressive price wars. To avoid such wars, the railroads attempted to coordinate their pricing decisions in order to stabilize prices at profitable levels. This effort led to the formation of the Joint Executive Committee (JEC) in 1879. However, the JEC proved to be only mildly effective in keeping price above cost, as episodes of price wars arose in 1881, 1884, and 1885.4 Interstate commerce act of 1887 By the mid- to late 1880s the railroads came to realize that an authority more effective than the JEC would be required to stabilize rates at profitable levels. In 1887, the federal government replaced the JEC with the Interstate Commerce Act. This act established the Interstate Commerce Commission (ICC) for the purpose of regulating the railroads. It was the job of the ICC to see that rail rates were “reasonable and just,” that higher rates were not charged for short hauls than for long hauls, and that the railroads did not discriminate among persons or shippers. Subsequent legislation endowed the ICC with the powers it required to regulate the railroad industry effectively. The Hepburn Act of 1906 gave the ICC the power to set maximum rates, and the Transportation Act of 1920 allowed the ICC to set minimum rates and to control entry into and exit from rail routes. Before the 1920s, the ICC was able to help ensure that the railroads earned at least a fair rate of return. However, in that decade, both trucks in the surface freight market and buses in the passenger market arose as vigorous competitors to the railroads. In part as a response to intense lobbying pressure from the railroads, Congress passed the Motor Carrier Act of 1935, which brought motor carriers under ICC regulatory control. In addition, the Transportation Act of 1940 placed certain water barge transportation in the domain of the ICC. As in the case of railroads, the ICC controlled both rate setting and entry into and exit out of markets served by regulated motor carriers and water barges. Path to deregulation It was not long after this spurt of additional regulation that the railroads began lobbying for less ICC control —at least less control of railroads. When the Motor Carrier Act was passed, poor road conditions and the limited size of trucks prevented them from capturing much rail traffic. This situation changed substantially in the 1950s with the development of the interstate highway system and the presence of an unregulated

trucking sector comprising owner-operators who carried commodities produced by other entities and manufacturers and wholesalers that provided their own freight transportation.5 Railroads found that ICC regulations made it difficult to respond to this increased competition from alternative modes of transportation, and so they lobbied for increased flexibility in setting rail rates. The lobbying led to the passage of the Transportation Act of 1958, which caused the ICC to approve some of the lower rates requested by the railroads and allow railroads to discontinue passenger service in some markets that were considered unprofitable. However, the ICC turned down many other requests for rate changes, which led the railroads to seek greater rate flexibility. By the 1970s the railroads were demanding the flexibility to increase rates to reflect rising fuel costs caused by sharp increases in the price of petroleum products. Motivated in part by lobbying pressure from the railroads and by the bankruptcy of the Penn Central Railroad, the Railroad Revitalization and Regulatory Reform Act of 1976 (4R Act) was passed. The 4R Act set up a “zone of reasonableness” in which railroads could adjust rates on many routes. However, the ICC maintained more direct control over rates on routes where railroads had “market dominance,” and so shippers were deemed to lack adequate competitive alternatives. The 4R Act also gave railroads increased freedom to abandon certain unprofitable routes. Although the 4R Act was an initial step toward deregulation, it did not eliminate pressures for further reductions in government control. In the 1970s, the ICC began to deregulate the trucking industry, in part by loosening restrictions on entry into the industry. By the late 1970s, the ICC had further reduced restrictions on entry into trucking routes and reduced the power of rate bureaus to establish rates, thus allowing more rate freedom for trucking firms. Deregulation The major pieces of legislation mandating deregulation came in 1980 with the Staggers Rail Act and the Motor Carrier Act of 1980. The Staggers Act overturned much of the Interstate Commerce Act of 1887 by giving railroads considerable freedom in setting rates (except in the case of market dominance) and in allowing freedom of entry and exit. The Motor Carrier Act of 1980 captured in legislation much of the deregulation that the ICC had pursued in the late 1970s. After 1980 the surface freight transportation market was largely deregulated. Firms were free to compete and to enter and exit markets. Why Was Regulation Imposed? It appears that the ICC was created as a response to the inability of the railroad industry to maintain stable prices at profitable levels. There are at least two possible explanations for this inability. First, the railroads may have been attempting to keep rail rates artificially high so as to reap extranormal profits. If this were the case, then the observed price wars likely reflected the natural incentive of individual members of a cartel to undercut an agreed-on price to increase their own profit at the expense of the cartel’s overall profit. If this were the case and the ICC acted as a cartel rate-setter, then the ICC did not serve the public interest. An alternative explanation is that the railroad industry is a natural monopoly. With large fixed costs and relatively low variable costs, multiple industry participants might engage in “destructive competition” that drives prices toward marginal cost, below average cost. 
In this case, the price supports imposed by the ICC may have enhanced welfare by ensuring the financial viability of the railroads. An examination of the production technology suggests that average cost in the railroad industry might have been declining over extensive ranges of output. Railroads incur several components of cost that do not rise proportionately with traffic volume, including right-of-way, the cost of track, and certain equipment (like locomotive power and train stations).

Whatever the true rationale for railroad regulation, empirical evidence reveals that financial markets expected the profits of railroads to increase with regulation.6 Robin Prager examined movements in the stock prices of railroad firms in response to events surrounding the passage of the Interstate Commerce Act. Using monthly stock-price data from January 1883 to December 1887, the study reveals that the members of the JEC earned excess returns of 13.4 percent in the month in which the Senate passed the Cullom bill, which was the Senate’s version of the Interstate Commerce Act. Non-JEC members earned even higher returns. The natural monopoly argument for regulation does not seem relevant for the trucking industry. Economies of scale tend to be exhausted at relatively low rates of output in the trucking industry. Therefore, it seems likely that trucking regulation arose because an unregulated trucking sector would have made it more difficult for the ICC to effectively regulate the railroads. Given that the railroads and truckers compete in many markets, it would have been difficult for the ICC to ensure a desired outcome for the railroad industry if it had no control over the actions of trucking firms. The regulation of surface freight transportation appears to be consistent with the economic theory of regulation (see chapter 10). The railroads constituted a small and well-organized group that recognized the potential gains from price stability and how price and entry regulation could achieve that stability. Furthermore, while the benefits of regulation were concentrated among a few firms, the costs of regulation were spread across the many users of rail services. Regulation that favors railroads therefore should not be surprising. It is also possible that the ICC might have been “captured” by the railroads in that the regulation of the trucking industry was apparently a response to the competitive pressures felt by railroads. However, these developments eventually changed. As explained further below, regulation ultimately served to reduce railroad profit and increase the profit of trucking companies. Regulatory Practices Although regulation encompasses a wide array of restrictions, the most important restrictions are generally those placed on price and on entry into and exit from markets. In some respects, railroads and trucking firms were subject to similar ICC control. In both industries, rate changes had to be requested and approved by the ICC. In addition, a certificate of convenience from the ICC was required to operate in a particular market. Exit from a market also required ICC approval. Despite the similarities in their governing legislation, ICC control of price, entry, and exit often was quite different in the railroad and trucking sectors in practice. Price regulation The ICC was quite active in establishing maximum and minimum rates in the railroad industry. Its pricing practice was dictated by two basic principles: value-of-service pricing and equalizing discrimination. Valueof-service pricing entails charging higher rates for higher-valued commodities, regardless of whether the costs of transporting goods of different value differ. Because of value-of-service pricing, the railroads were forced to set higher rates for manufactured goods than for raw materials and agricultural products for many decades. 
The principle of equalizing discrimination states that rates should not vary across shippers or with the size of the shipment, even though costs may differ across shippers (for example, some ocean ports are more costly to serve than others) and with shipment size (smaller shipments often have a higher per-ton cost than larger shipments). Rate setting in the trucking industry differed from rate setting in the railroad industry in several respects. First, the ICC allowed the trucking industry to establish ratemaking bureaus that were exempt from antitrust

prosecution by the Reed-Bulwinkle Act of 1948. Typically, the ICC automatically approved rates set by these bureaus unless a shipper or a trucking firm objected to the rates. As in the railroad industry, though, trucking rates could not vary across routes of similar distances, even if the relevant route densities, and thus transport costs, differed.7 However, in contrast to railroad rates, trucking rates could vary with the size of the shipment. In particular, truckers typically were permitted to charge lower rates (per unit of weight) for shipments of larger size. This pricing policy is consistent with setting prices to maximize industry (cartel) profit, because shipment costs per unit of weight generally decline as shipment size increases. Profit-maximizing prices decline as costs decline and as the demand for the service becomes more sensitive to price. Empirical studies found that rail rates were largely invariant to shipment size, whereas truck rates (per unit of weight) declined as shipment size increased.8 Furthermore, truck rates tended to increase as the demand for the service became less sensitive to price. These empirical findings suggest that whereas rail rates were set so as to equalize discrimination, truck rates may have been set to maximize industry profits. In light of these findings, it may not be surprising that the railroad industry tended to support deregulation, whereas the trucking industry did not.
Entry and exit regulation
Railroads and trucking companies both required ICC approval to enter or exit markets. However, due to the ratemaking practices described above, the binding regulatory constraint differed across industries. The effective constraint on railroads pertained to exiting markets. The policy of equalizing discrimination forced the railroads to incur a financial loss in some markets. Yet the ICC would not allow the railroads to abandon these unprofitable markets. In contrast, the prices set in the trucking industry typically admitted considerable profit. In the absence of entry restrictions, the entry of additional truckers normally would dissipate the profit. However, the ICC limited entry. To begin to serve a particular route, a trucker was required to demonstrate that demand could not be effectively met by existing suppliers. This requirement placed the burden of proof on the firm seeking entry and led to the ICC denying most petitions for entry. Often the only way to enter a new route was to purchase the operating license of an established trucking firm. Of course, this did not expand the number of competitors but only changed the identities of the firms. It is important to note that the ICC limited entry into specific routes and into the industry as a whole. Therefore, even established trucking companies found it difficult to enter routes that were currently being served by other trucking companies.
Effects of Regulation
Economic regulation can reduce welfare by introducing allocative inefficiency and productive inefficiency. Allocative inefficiency arises when the welfare-maximizing amounts of goods and services are not produced. For example, if the price of a good exceeds its marginal cost, then less than the welfare-maximizing amount of the good generally will be supplied. Recall from the theory of second best, though, that if two goods (say, X and Y) are substitutes and the price of X exceeds its marginal cost, then it may be optimal to have the price of Y above its marginal cost as well.
Therefore, when assessing welfare losses from price regulation, one must consider both the relationship between rail rates and the cost of rail service and the relationship between rail rates and truck rates. If rail rates substantially exceed truck rates, then more than the welfare-maximizing level of truck services may be supplied even if truck rates exceed cost. Productive inefficiency arises when inputs are not allocated among their potential uses so as to maximize

welfare. Productive inefficiency can be caused by inappropriate input prices, by a lack of competitive pressure, or by distorted investment decisions. The ensuing analysis will consider both allocative and productive inefficiencies created by regulation.
Price and quality of service
To begin, consider the impact of regulation on surface freight rates by comparing rates during the period of regulation with rates after deregulation. Figure 16.1 presents the time path of average real revenue per ton-mile of freight for rail services. Figure 16.2 provides the same information for motor carrier services. (Except where noted, all dollar figures in this chapter are in 1985 dollars.) It is apparent from figure 16.1 that rail rates declined after the Staggers Act. Real rail rates declined by more than 12 percent between 1981 and 1985. Recall that the deregulation of motor carrier rates began prior to the Motor Carrier Act of 1980, as the ICC deregulated rates beginning in the late 1970s. Figure 16.2 shows that motor carrier rates declined in the late 1970s. However, rates rose after the Motor Carrier Act of 1980, increasing by more than 5 percent between 1981 and 1985.
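The "average real revenue per ton-mile" series plotted in figures 16.1 and 16.2 is simply nominal freight revenue divided by ton-miles and then deflated to 1985 dollars. The short sketch below illustrates that construction; the revenue, ton-mile, and price-index numbers are invented for illustration and are not the data underlying the figures.

```python
# A minimal sketch of how a real revenue-per-ton-mile series is built
# (all numbers below are hypothetical, with the price index normalized
# so that 1985 = 1.0).
nominal_revenue = {1978: 2.10e10, 1981: 3.20e10, 1985: 3.40e10}   # dollars
ton_miles       = {1978: 8.0e11, 1981: 9.0e11, 1985: 8.8e11}      # ton-miles
price_index     = {1978: 0.65, 1981: 0.90, 1985: 1.00}            # 1985 = 1.0

for year in sorted(nominal_revenue):
    nominal_rate = nominal_revenue[year] / ton_miles[year]   # $ per ton-mile
    real_rate = nominal_rate / price_index[year]             # 1985 $ per ton-mile
    print(year, round(real_rate, 4))
# 1978 0.0404
# 1981 0.0395
# 1985 0.0386
```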

Figure 16.1 Average Real Revenue per Ton-Mile for Railroads Source: The data underlying this figure are from Table 1 (p. 410) in Kenneth D. Boyer, “The Costs of Price Regulation: Lessons from Railroad Deregulation,” RAND Journal of Economics 18 (Autumn 1987): 408–416. Copyright 1987. Reprinted with permission of RAND.

Figure 16.2 Average Real Revenue per Ton-Mile for Trucking

Source: The data underlying this figure are from Table 1 (p. 410) in Kenneth D. Boyer, “The Costs of Price Regulation: Lessons from Railroad Deregulation,” RAND Journal of Economics 18 (Autumn 1987): 408–416. Copyright 1987. Reprinted with permission of RAND.

When attempting to determine the impact of regulation on prices, it is important to recognize that many factors other than deregulation can influence the observed average revenue per ton-mile. One possible factor is the recession that took place in 1981–1983, which reduced the demand for transportation services. The reduced demand likely reduced the average revenue per ton-mile, because firms competed more vigorously for the reduced number of customer shipments. It seems, though, that the recession was not wholly responsible for the observed changes in surface freight rates, because rail rates continued to fall through the post-recession period of 1984 and 1985. Trucking rates rose in 1983, but they declined in 1984, only to rise again in 1985. A second factor confounding the effect of deregulation on surface freight rates is one that is endogenous to deregulation: the composition of traffic. Even if actual rates remained the same, average revenue per tonmile could change if the types of commodities being transported were to change over the sample period. For example, if railroads switched to transporting more bulk goods, which have lower prices than other goods, then average revenue would decline even if rates remained constant over time. An empirical analysis partially controlled for changes in the composition of traffic in order to assess the effect of deregulation on rail rates.9 The proxy for the change in traffic composition used is the average weight of freight trains. This weight would be expected to rise if railroads transported more bulk commodities. Using the rail rates in figure 16.1, it was found that 90 percent of the change in rates during 1971–1985 was due to the change in the average weight of freight trains. This finding suggests that deregulation may have had a large impact on the mix of commodities being transported by railroads. After controlling for changes in the average weight of freight trains, it was found that rail rates actually increased by 2 percent in the initial period of deregulation, 1980–1985.10 Examining the impact of deregulation on the average price of rail services can obscure important variation in rates for different services, because deregulation may affect the rates for different commodities in different ways. Notice that the price in any market depends on two factors: cost and markup (of price over cost). Deregulation can affect cost by, for example, reducing the constraints placed on railroads and allowing them to adopt more efficient practices. However, the exact change in cost depends on the characteristics of the commodity (for example, whether it is manufactured or bulk) and the route in question (for instance, whether it is a short or a long route). With regard to markups, regulation almost certainly produced markups that varied across markets because of the practice of equalizing discrimination. Deregulation seems likely to produce relatively large price increases in markets where regulation imposed relatively low markups. Furthermore, the markup that a railroad sets after deregulation will depend on the intensity of competition in that market from other railroads and from competing modes, such as trucking and water barges. Because the degree of competition varies across markets, one would expect post-deregulation markups to vary as well. 
Hence, even if two markets had the same cost before and after deregulation and the same markup under regulation, they may have different prices after deregulation because of differences in the degree of competition. Recognizing that the effect of deregulation can vary across commodity markets, a study estimated the impact of deregulation on rail rates for thirty-four different commodity classifications.11 These included farm products, coal, apparel, and machinery, among others. It estimated both the immediate impact of deregulation and its impact over time. The study found that regulation kept rail rates artificially low in

twenty-two of the thirty-four commodity markets (and this effect was statistically significant for nine of those twenty-two markets). For example, rail rates for forest products and coal increased by 13 percent and 5 percent, respectively, immediately after deregulation. For the other twelve commodity markets, regulation had kept rates artificially high (and this effect was statistically significant in four of the twelve). For example, rates for transporting farm products by railroad fell 8.5 percent upon deregulation. Although the initial impact of deregulation on rail rates varied across commodities, the long-term effects were more uniform. By 1988, there were no markets in which deregulation led to a statistically significant increase in rates. Rail rates declined by a statistically significant amount in twenty of the thirty-four markets. Deregulation did not have a statistically significant effect on rates in the other fourteen markets. Weighting these price changes by the size of the market (as measured by ton-miles), the cumulative effect of deregulation on average rail rates was a decline of 30 percent by 1988, which is to be contrasted with an initial increase of 10 percent. It seems likely that the decline in rail rates reflects cost savings emanating from deregulation. The finding of substantial increases in productivity in the railroad industry after deregulation further supports this hypothesis. See figure 16.3.

Figure 16.3 Performance in the U.S. Railroad Industry, before and after the Staggers Act Source: Robert E. Gallamore and John R. Meyer, American Railroads: Decline and Renaissance in the Twentieth Century (Cambridge, MA: Harvard University Press, 2014), p. 424. Note: Productivity, revenue ton-miles per constant dollar operating expense; Rates, inflation-adjusted revenue per ton-mile; Volume, ton-miles. The decline in productivity in recent years is largely because of higher fuel prices.

With respect to motor carrier rates, detailed analysis of specific rates suggests that regulation kept rates well above competitive levels. The ICC exempted the transport of fruits, vegetables, and poultry from price regulation in the mid-1950s. In response to this exemption, motor carrier rates fell 19 percent for fruits and

vegetables and 33 percent for poultry.12 More recent evidence comes from surveys of shippers (those who hire motor carriers to transport their products) taken after passage of the Motor Carrier Act of 1980. A survey of thirty-five shippers revealed that truckload rates declined by 25 percent between 1975 and 1982, and less-than-truckload rates fell by 11 percent over this period. A survey of 2,200 shippers of manufactured goods taken shortly after deregulation found that 65 percent of respondents reported that truck rates were lower. In contrast, only 23 percent of respondents reported rail rates to be lower.13 Another study examined rates charged by sixty-one motor carriers of general freight between 1975 and 1985.14 The study found that deregulation had lowered rates by 15–20 percent by 1983 and by 25–35 percent by 1985. The increasing impact of deregulation on motor carrier rates over time may be due to increased productivity growth. This growth will be reviewed later in this chapter. Studies at the state level provide additional evidence. Several states deregulated intrastate trucking around the time of the Motor Carrier Act of 1980. For example, Florida deregulated trucking on July 1, 1980. State regulation was quite comparable to that on the federal level in terms of both pricing and entry policy. One study examined the effect of deregulation in Florida and in Arizona, where deregulation occurred in July 1982.15 Controlling for several factors that influence rates, including the commodity class, shipment size, and type of motor carrier, motor carrier rates were examined for Arizona from January 1980 to October 1984 and for Florida from January 1979 to October 1984. The study found that deregulation caused average intrastate motor carrier rates to fall for half of the routes in Arizona and all routes in Florida. A second study focused on motor carrier rates for Florida for the more limited period of June 1980–September 1982.16 It found that average rates fell by almost 15 percent. Interpreting the evidence for the effect of regulation on surface freight rates requires care, because railroads and motor carriers transport many different commodities and traverse many different routes. However, the evidence generally supports the hypothesis that regulated rail rates were held below welfare-maximizing levels for some products and above these levels for other commodities, whereas regulated motor carrier rates often exceeded welfare-maximizing levels. To fully understand the effects of regulation, it is useful to consider traffic composition patterns. The evidence suggests that the transportation of manufactured goods shifted from railroads to trucks after deregulation. In contrast, the railroads increased their share of the transport of bulk goods, including certain fruits and vegetables. See tables 16.2 and 16.3.

Table 16.2 Market Shares of Sampled Manufactured Goods (%)

Year    Rail    For-Hire Motor Carrier    Private Motor Carrier
1963    32.8            46.6                      16.5
1967    34.0            45.3                      18.0
1972    30.8            45.1                      20.6
1977    22.9            43.0                      28.6
1983    16.0            47.4                      26.0

Source: Census of Transportation, 1963–1983. Table is from Kenneth D. Boyer, “The Costs of Price Regulation: Lessons from Railroad Deregulation,” RAND Journal of Economics 18 (Autumn 1987): 408–416. Reprinted by permission of RAND.

Table 16.3 Index of Rail Car Loadings of Various Types of Traffic (1978 = 100)

These changes in traffic composition help us interpret the data in figures 16.1 and 16.2. Average rail revenue per ton-mile declined even though some rates increased, because railroads moved to transporting bulk commodities, which generally have lower rates. Motor-carrier rates fell even though the average revenue per ton-mile increased between 1981 and 1985, because motor carriers moved away from transporting goods with lower rates. Estimates of the welfare impacts of regulation of surface freight transportation prices vary. Studies from the 1980s suggest that regulation reduced welfare by between $1 billion and $1.5 billion annually.17 A more recent study provides a much higher estimate.18 The study employs shipper, carrier, and labor behavior during the deregulated year of 1985 to estimate the industry outcomes that would have arisen in 1977 had the industry been deregulated that year. The simulated data for an “unregulated” 1977 were then compared with the actual (regulated) data for 1977. The study estimates that the deregulation of motor carrier rates increased the welfare of shippers by almost $4 billion annually (in 1977 dollars). Although the deregulation of rail rates was found to reduce grain rates, thereby raising shippers’ welfare by about $280 million per year, it raised other rates on average, thereby reducing shipper welfare by $1.35 billion. Overall, deregulation is estimated to have increased shippers’ welfare by $2.89 billion annually.19
Static productive inefficiency
Restrictions on entry and exit can lead to productive inefficiency. Entry restrictions can prevent more efficient firms from replacing less efficient firms. Exit restrictions can force firms to serve markets in which production costs exceed corresponding benefits. As explained above, the ICC required railroads to serve unprofitable routes. After the Staggers Act, though, railroads abandoned many routes. A notable example is Conrail, which immediately abandoned 2,600 route miles after the Staggers Act was passed in 1980. The abandoned routes represented 15 percent of Conrail’s total track miles, but they generated only 1 percent of Conrail’s revenue.20 Estimates suggest that providing rail services at 1969 levels at minimum cost would have required only 20–25 percent of existing capacity. The associated cost saving would have been between $750 million and $1.5 billion annually.21 In contrast, the ICC tended to limit entry into the motor carrier industry. Because motor carrier rates exceeded cost, trucking firms earned extranormal profits prior to the passage of the Motor Carrier Act of 1980. However, eased entry restrictions after 1980 served to dissipate industry profit. The number of ICC-certified motor carriers doubled (from 16,874 to 33,823) between 1978 and 1985 and exceeded 40,000 by 1990. Of the nearly 17,000 companies that entered the industry between 1978 and 1985, more than 6,000

failed.22 As figure 16.4 demonstrates, bankruptcies rose dramatically in the early 1980s because of deregulation and the recession.

Figure 16.4 Bankruptcies among Trucking Firms Source: Data are from Dun & Bradstreet’s Quarterly Business Failure Report (New York, N.Y., Dun & Bradstreet Corporation) various years.

ICC regulation of the motor carrier industry also created productive inefficiency by raising wages. The Teamsters Union was able to extract some of trucking firms’ extranormal profits in the form of higher wages. It has been estimated that in the mid-1970s, union workers in the trucking industry received wages that were 50 percent higher than the wages paid to nonunion workers performing comparable work with comparable skills. In contrast, the union wage premium was only 27 percent in the early 1980s (following industry deregulation), which was close to the national average of 28 percent.23 When deregulation opened markets to competition, unions in the trucking industry realized that they would have to settle for lower wages or watch their members being laid off as trucking firms declared bankruptcy. Thus, labor was priced above competitive levels under regulation, which likely promoted a reduced supply of transportation services and an inefficient mix of inputs that employed too little labor. Additional productive inefficiency arose in the motor carrier industry when the ICC required trucking companies to charge the same rates for backhauls (which consist of transport from a destination of one haul to the origin of the truck’s next haul) that they charge for initial transport. Welfare and profit could both have improved if companies were permitted to charge lower prices for backhauls, because the opportunity cost of providing the service is lower (since the trucks must travel to the origin of their next haul anyway). Because the companies could not reduce price toward the level of opportunity cost, trucks often carried no

freight as they traveled from the destination of one haul to the origin of the next haul. The annual welfare loss due to empty backhauls was estimated to exceed $300 million.24 Following deregulation, empty backhauls declined from 28 percent to 20 percent.25

The regulation of rail rates also reduced competition in various commodity markets. For instance, the higher cost of transporting flour under regulation reduced the geographic size of flour markets. Because each (smaller) market had fewer competing firms, railroad price regulation tended to reduce the intensity of price competition in flour markets.26

Dynamic productive inefficiency

By requiring railroads to serve unprofitable markets, regulation restricted the ability of railroads to generate profit that could be employed to finance investment. By the late 1970s, $15 billion of investment in track maintenance had been deferred. Investment increased substantially after deregulation enabled railroads to profitably adjust rates to reflect market conditions. Railroads spent $27 billion on structures, roadways, and route maintenance between 1981 and 1985. The railroads also spent $30 billion on rail cars, locomotives, and other equipment during this period.27 Railroad spending on infrastructure and equipment continued to increase in subsequent years, rising to more than $21 billion in 2008.28

Regulation in the surface freight industry is also likely to have further reduced welfare by stifling innovation. To attempt to quantify this impact, we can estimate what productivity growth would have been in the absence of regulation. The resulting estimate of the extent to which regulation reduced productivity growth will reflect all sources of reduced productivity growth, including reduced innovation and reduced investment. An estimate has been derived from a comparison of productivity growth for U.S. and Canadian railroads between 1956 and 1974. Although both industries had access to the same innovations, Canadian railroads were subject to much less regulation than were U.S. railroads. As shown in table 16.4, productivity growth in the railroad industry averaged 3.3 percent annually in Canada but only 0.5 percent in the United States between 1956 and 1974. To focus on a comparable subsample of railroads, the study compared productivity growth for the Canadian National and Canadian Pacific railroads in Canada with the Atchison, Topeka and Santa Fe and Southern Pacific railroads in the United States. At least between 1963 and 1974, this subsample supports the hypothesis that regulation reduced productivity growth in the railroad industry. The study estimated that if U.S. railroads had experienced the same productivity growth that Canadian railroads experienced, the cost of providing rail services in the United States in 1974 would have declined by $13.83 billion (in 1985 dollars). This evidence suggests that regulation of the railroad industry likely induced substantial productive inefficiency.

Table 16.4 Overall Productivity Growth for U.S. and Canadian Railroads (average annual %)

Period       Canada    United States    Canadian National    Canadian Pacific    Atchison, Topeka and Santa Fe    Southern Pacific
1956–1963    1.7       0.6              1.8                  1.7                 1.4                              3.1
1963–1974    4.0       0.1              4.3                  3.3                 1.0                              0.4
1956–1974    3.3       0.5              3.3                  2.7                 1.1                              1.4

Source: Douglas W. Caves, Laurits R. Christensen, and Joseph A. Swanson, “The High Cost of Regulating U.S. Railroads,” Regulation (January/February 1981): 41–46. Reprinted with the permission of the American Enterprise Institute for Public Policy Research, Washington, DC.
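To get a feel for how much a persistent gap in annual productivity growth can matter once it compounds, the following sketch (in Python) applies the average growth rates reported in table 16.4 to a hypothetical unit-cost index over 1956–1974. The starting index of 1.0 and the fixed-output assumption are illustrative only; this is not the calculation underlying the $13.83 billion estimate.

# Illustrative sketch: compounding the average annual productivity growth rates
# from table 16.4 over 1956-1974. The starting cost index of 1.0 and the
# fixed-output assumption are hypothetical; only the growth rates come from the table.

us_growth = 0.005      # 0.5 percent per year, U.S. railroads
canada_growth = 0.033  # 3.3 percent per year, Canadian railroads
years = 1974 - 1956    # 18 years

# Productivity growth lowers the cost of producing a fixed amount of output.
us_cost_index = 1.0 / (1.0 + us_growth) ** years
canada_cost_index = 1.0 / (1.0 + canada_growth) ** years

print(f"U.S. unit-cost index in 1974:      {us_cost_index:.3f}")
print(f"Canadian unit-cost index in 1974:  {canada_cost_index:.3f}")
print(f"U.S. costs exceed Canadian costs by {us_cost_index / canada_cost_index - 1:.0%}")

Under these assumptions, U.S. unit costs would end the period roughly 60 percent above Canadian unit costs, which conveys why even modest differences in annual productivity growth translate into large cost differences over two decades.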

Lessons from Regulation

Summarizing many of the studies that estimated the welfare loss from the regulation of surface freight transportation, Robert Willig and William Baumol report:

Various studies estimated, for example, that between 1950 and 1980 more than a billion dollars a year was wasted in transporting freight by truck rather than by rail. Another billion dollars a year was wasted in transporting freight on rail routes that were too long or were utilized with too little traffic density. Another $1.5 billion a year or more (in 1977 dollars) was wasted on unnecessary mileage traversed by empty cars, unnecessary demurrage-time between car unloadings and loadings, and circuitous loaded routings.29

Three lessons from the regulation of surface freight transportation seem particularly noteworthy. First, once it is put in motion, regulation can be imperialistic. Beginning with the railroad industry, ICC control soon spread to motor carriers and water barges. There was little economic basis for regulating the trucking industry, other than the expanded regulation facilitated effective regulation of the railroad industry. Second, regulation likely caused large welfare losses due to product substitution. Much of the estimated allocative inefficiency in the surface freight industry did not arise because rail and truck rates diverged from marginal cost. Instead, the inefficiency arose from inappropriate relative prices of rail and truck transport. For example, relatively high prices for rail transport of agricultural products led to transport by truck, when rail transport would have been less costly. Third, undue limits on pricing flexibility seriously jeopardized the profitability of the railroad industry. With its regulatory jurisdiction evaporating, the ICC was abolished in 1995. The remaining regulatory tasks were transferred to the newly created Surface Transportation Board in the Department of Transportation. Thomas Gale Moore marked this occasion by writing the ICC’s obituary, titled “In Memoriam”: Federal regulation of trucking which, during its illustrious career, provided a competition-free system, guaranteeing huge profits for owners and high wages for unionized workers, died January 1. Last summer, its cousin, state oversight of intrastate carriers, sustained a mortal blow as all controls on interstate firms hauling intrastate loads were abolished. Interstate Commerce Commission regulation was in its 60th year. Ravaged by the Motor Carrier Act of 1980, Washington’s supervision of motor carriers suffered a long enfeeblement, leading ultimately to its demise, despite the heroic efforts of union workers to preserve their benefactor. Unfortunately for the health of federal curbs, a combination of free market economists, shippers groups, and a surprising group of liberal politicians, including Senator Ted Kennedy and President Jimmy Carter, had undermined its support system and it died, not with a bang but a whimper. Born at the height of the New Deal in 1935 from the unlikely marriage of federal and state regulators, large trucking interests, and railroads, ICC management of motor carriers evinced a long and successful career of prescribing prices, enjoining entry, and curtailing competition. By the early 1970s, Washington bureaucrats were forcing trucks to travel empty on return trips; to carry goods on circuitous routes, adding hundreds of miles to their transport; and to distinguish between carrying ordinary horses and those destined for the slaughterhouse. During the nearly six decades of ICC rulemaking, the economy suffered hundreds of billions of dollars in waste, loss and abuse. In addition to continued controls over household good carriers, federal regulation is survived by its adopted offspring, the International Brotherhood of Teamsters. No services are planned.30

Airlines

Almost from its inception, the airline industry was characterized by substantial regulation. Then things began to change quite suddenly in the late 1970s. An intensive program of deregulation began that ultimately led to the complete absence of price and entry controls. We now consider precisely what changed and whether the change has been for the better.31

Regulatory History

The commercial airline industry began in the late 1920s with the transport of mail for the U.S. Postal Service. Passenger service followed in the early 1930s. Initially, the U.S. Postal Service had the authority to set mail rates, although this authority came under the purview of the Interstate Commerce Commission (ICC) with the Airmail Act of 1934. The ICC established a competitive bidding system to determine which airline would transport mail on specific routes. The airline that offered the lowest price per mile received a route franchise. The airlines tended to submit very low bids to ensure that they would retain the routes they served, anticipating that they would later be permitted to raise rates for their service. However, the ICC generally did not permit rate increases, so many airlines found themselves on the verge of bankruptcy from having to transport mail at unprofitable rates. This episode is significant, because supporters of government regulation of the airline industry used it as evidence that an unregulated airline industry would be plagued by “destructive competition.” These supporters viewed government regulation as necessary to ensure that the airline industry would develop into a stable and healthy segment of the transportation sector of the U.S. economy.

Civil aeronautics act of 1938

The Civil Aeronautics Act of 1938 brought the airline industry under federal regulation. The act created the Civil Aeronautics Authority, which two years later became the Civil Aeronautics Board (CAB). The CAB was authorized to set minimum and maximum rates and to determine the number of industry suppliers. The CAB was empowered to determine the route structures of firms. Consequently, the CAB could prevent an existing airline from entering a new route or abandoning a route it presently served. Initially, the CAB also was responsible for ensuring airline safety. However, this task was transferred to the Federal Aviation Administration in 1958.

The CAB employed its control over route structures to implement a route moratorium in the early 1970s. In response to excess capacity among airlines and declining industry profit, the CAB prohibited any firm from entering existing routes. Consequently, new firms could not enter the industry, and existing firms could not increase the number of routes they served. The CAB feared that route expansion would intensify competition and reduce industry profit, which, in turn, might jeopardize industry safety (as firms skimped on maintenance and fleet modernization to reduce costs).

Path to deregulation

By the mid-1970s, pressure was building to reform airline regulation. Academics had long argued that regulation stifled competition and generated considerable welfare losses.32 In 1975, Senate hearings held by Senator Edward Kennedy seriously explored the idea of regulatory reform. The Department of Transportation was also weighing reform, and the CAB itself supported reform and even full deregulation. CAB chairman John Robson provided the initial step in deregulation by relaxing entry restrictions. He lifted the route moratorium and allowed some entry on existing routes for the first time since the 1960s. After being named the new CAB chairman in June 1977, Alfred Kahn further reduced restrictions on entry and fares. Major fare reductions followed as the airlines competed for customers.

Airline deregulation act of 1978

The CAB’s fare reforms led to both lower fares and higher industry profit by 1978. These promising developments

encouraged Congress to pass the Airline Deregulation Act in 1978. The act called for a phased deregulation of the airline industry. The CAB’s authority over route structure was to terminate on December 31, 1981; its authority over fares was to terminate on January 1, 1983; and the CAB itself was to be eliminated on January 1, 1985. The actual pace of deregulation turned out to be considerably faster than outlined in the Airline Deregulation Act. Within a year of its enactment, airlines were free to serve any route. By May 1980 the CAB allowed unlimited downward flexibility in fares and considerable upward flexibility. Even prior to January 1, 1983, the date for which CAB control over entry and fares was to end, airlines were competing in an unregulated environment. Description of Regulatory Practices During the time it regulated the airline industry, the CAB sought to expand the industry, promote its financial health, and ensure passenger safety. To pursue these objectives, the CAB extensively exercised its control over price, the number of competitors, and route structures. Price regulation CAB fare setting exhibited four main properties. First, fares were set to permit airlines to earn a reasonable rate of return on their investments. Although this objective was an implicit feature of regulatory policy for many years, it was not made explicit until 1960. Before World War II, airfares were typically set at the level of first-class rail fares, as railroads were the main competitor to airlines. In 1960 the General Passenger Fare Investigation conducted by the CAB set fares so as to allow the industry to earn a 10.5 percent rate of return. The corresponding investigation in 1970 set fares to generate a 12 percent industry rate of return. Second, prices generally were not set to reflect costs. The Domestic Passenger Fare Investigation introduced some role for cost in 1970, when it established fares that reflected route length to some extent. However, fares for routes exceeding 400 miles were set above cost, whereas fares for routes of less than 400 miles were set below cost. This rate structure was implemented primarily to promote air service on shorter routes, where railroads typically provided more intense competition. Third, fare changes typically were implemented across the board rather than selectively. This practice reduced fare flexibility and contributed to distortions in the fare structure. Fourth, price competition was strongly discouraged. The CAB typically did not grant requests for fare changes that were deemed to be a competitive response to fares set by other airlines. Presumably, the CAB feared that fare competition might lead to destructive competition, which could jeopardize the industry’s financial health and reduce airline safety. Entry and exit regulation When regulation was implemented in the airline industry in 1938, sixteen airlines, referred to as trunk carriers, were “grandfathered” and became certificated carriers. Between 1938 and 1978, the CAB did not allow the entry of a single new trunk carrier. In preventing such entry, the CAB denied seventy-nine applications to provide service during 1950–1974.33 Only ten trunk carriers remained at the time of deregulation, six having disappeared through mergers. The CAB did permit some entry on short routes that had been abandoned by the trunk carriers. Existing airlines were permitted to begin operations on some routes served by other airlines. 
However, only 10 percent of these applications were approved between 1965 and 1974.34 The CAB generally chose not to allow more than two or three carriers to serve the same route.

Comparison to motor-carrier regulation

In practice, airline regulation was comparable to the regulation of motor carriers. Prices were generally set to allow reasonable profits, and some prices were set above cost, while others were set below cost. In addition, entry into the industry and into specific routes was controlled. However, the effects of regulation differed considerably in the motor carrier and airline industries. As we shall see, these different effects stemmed largely from technological differences in the services being provided and from different responses to regulation by the regulated firms.

Effects of Regulation

The regulation of the airline industry offers a case study of two classic effects of price and entry/exit regulation. First, if regulation eliminates price as a competitive instrument, then firms will compete on other dimensions. In the case of airlines, firms competed aggressively through service quality. Second, the effects of regulation are hard to predict, because it is difficult to foresee the new and innovative products and production techniques that would have arisen under competition. The development of the hub-and-spoke system after airline deregulation is an example of an important innovation that was stifled by regulation.

Price and quality of service

To assess the impact of CAB regulation on airfares, one can employ the intermarket approach to compare airfares in regulated and unregulated markets during the same time period. Although all interstate markets were subject to CAB regulation, intrastate markets were not. To compare fares on intrastate and interstate routes that are of similar length and density, one must consider intrastate routes in large states like California and Texas. Table 16.5 compares fares for intrastate routes in California and comparable interstate routes subject to CAB regulation. Fares in unregulated intrastate markets were considerably below the fares charged in CAB-regulated markets. The differential was particularly pronounced on longer routes. Table 16.6 presents fares for selected routes in Texas in 1975. Southwest Airlines, the primary intrastate carrier in Texas, consistently set lower fares than did regulated carriers. Thus, the cross-sectional data provide evidence that the CAB tended to set airfares above competitive levels, at least on certain routes.

Table 16.5 Differential between Intrastate Fares in California and Interstate Fares, 1972 (¢)

Type of Haul                         Intrastate Fare/Mile    Interstate Fare/Mile
Very short haul (65 miles)           16.923                  23.585
Short haul (109 miles)                9.363                  16.858
Short–medium haul (338–373 miles)     5.021                   9.685

Source: Simat, Helliesen and Eichner, Inc., “The Intrastate Air Regulation Experience in Texas and California,” in Paul W. MacAvoy and John W. Snow, eds., Regulation of Passenger Fares and Competition among the Airlines (Washington, DC: American Enterprise Institute for Public Policy Research, 1977), p. 42. Reprinted with permission of the American Enterprise Institute for Public Policy Research, Washington, DC.

Table 16.6 Comparison of Interstate and Intrastate Fare Levels in Selected Texas Markets, December 1, 1975 ($)

Fare Type               Dallas–Houston    Dallas–Harlingen    Dallas–San Antonio    Houston–Harlingen
CAB Interstate
  First class           48.00             —                   51.00                 —
  Coach                 35.00             57.00               37.00                 42.00
  Economy               32.00             51.00               33.00                 38.00
Southwest Intrastate
  Executive class       25.00             40.00               25.00                 25.00
  Pleasure class        15.00             25.00               15.00                 15.00

Source: Simat, Helliesen and Eichner, Inc., “The Intrastate Air Regulation Experience in Texas and California,” in Paul W. MacAvoy and John W. Snow, eds., Regulation of Passenger Fares and Competition among the Airlines (Washington, DC: American Enterprise Institute for Public Policy Research, 1977), p. 45. Reprinted with permission of the American Enterprise Institute for Public Policy Research, Washington, DC. (Missing entries are not applicable.)

The effects of regulation on airfares can also be inferred by examining how fares changed when deregulation was implemented. Figure 16.5 shows the change in average real airfares between 1978 and 1993. Fares fell substantially for routes exceeding 1,000 miles but rose substantially for routes less than 500 miles. In addition, between 1976 and 1983, airfares between large metropolitan areas declined by 8.7 percent for long-haul markets and by 14.5 percent for short-haul markets. Fares rose on routes between small cities by 13.2 percent for short-haul markets and by more than 50 percent for medium-haul markets.35 These price changes induced by deregulation suggest that the CAB set fares in high-density markets above competitive levels while setting fares in low-density markets below competitive levels.

Figure 16.5 Percentage Change in Airfares by Distance, Adjusted for Inflation Source: Steven A. Morrison and Clifford Winston, The Evolution of the Airline Industry (Washington, DC: Brookings Institution Press, 1995), p. 20.

Deregulation also brought a major increase in the number of discount fares and a corresponding increase in the number of passengers traveling on these fares. Only about 25 percent of passengers in major markets traveled on discount fares in 1976. The corresponding percentage had increased to nearly 75 by 1983. The

increase in the use of discount fares was even more pronounced on less-dense routes.36 These changes indicate that deregulation introduced greater use of price discrimination, as airlines charged relatively low prices to leisure travelers (who often can adjust their day and time of departure to secure lower fares) and relatively high prices to business travelers, who typically lack comparable scheduling flexibility.

The effective price of air travel for a consumer includes more than just the price of the airline ticket. Because consumers value their time, the effective cost of travel includes the value of a consumer’s time used during travel. Therefore, the effective cost of travel increases as travel time increases and as the actual departure time of a flight diverges from a consumer’s ideal departure time. Even if all airlines charge the same price for a flight, the effective price of the flight can differ across airlines and across consumers.

In addition to affecting the time cost of travel through the frequency of flights and the prevalence of nonstop flights, airlines can influence the pleasure that passengers derive from travel. Consumers generally prefer an airline that offers less crowded flights, better on-board services, and a better safety record, for example. Table 16.7 provides estimates of the average consumer’s willingness to pay for improvements in some of these nonprice factors. The table shows that, on average, a consumer is willing to pay $5.67 for a ten-minute reduction in travel time. In other words, if airlines A and B offered identical services except that travel on airline A was ten minutes faster than on airline B, the average consumer (flying the average route) would prefer to fly on airline A as long as it charged at most $5.66 more than airline B did. Table 16.7 also indicates that if an airline increased the likelihood of an on-time flight by ten percentage points, consumers would be willing to pay up to $12.13 more for the flight.

Table 16.7 Consumers’ Willingness to Pay for Better Airline Service ($/Round Trip)

Improvement in Service                                               Willingness to Pay
Ten-minute reduction in travel time                                   5.67
Ten-minute reduction in transfer time                                 6.67
Ten percentage points increase in flights on time                    12.13
Carrier not to have a fatal accident in the preceding six months     77.80

Source: Steven A. Morrison and Clifford Winston, “Enhancing the Performance of the Deregulated Air Transportation System,” Brookings Papers on Economic Activity: Microeconomics 1989 (1989): 61–123. Reprinted by permission.
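To make the notion of an “effective price” concrete, the following sketch (in Python) combines a posted fare with the time and reliability valuations from table 16.7. The two hypothetical itineraries, their fares, travel times, and on-time rates are assumptions chosen for illustration; only the per-unit willingness-to-pay figures come from the table.

# Illustrative sketch: effective price = fare + value of travel time - credit for reliability.
# The fares, travel times, and on-time shares below are hypothetical.

VALUE_PER_MINUTE = 5.67 / 10           # dollars per minute of travel time (from table 16.7)
VALUE_PER_ON_TIME_POINT = 12.13 / 10   # dollars per percentage point of on-time arrivals

def effective_price(fare, travel_minutes, on_time_share):
    """Posted fare plus the dollar value of travel time, less a credit for reliability."""
    return (fare
            + VALUE_PER_MINUTE * travel_minutes
            - VALUE_PER_ON_TIME_POINT * (on_time_share * 100))

airline_a = effective_price(fare=229.0, travel_minutes=180, on_time_share=0.85)
airline_b = effective_price(fare=219.0, travel_minutes=200, on_time_share=0.75)

print(f"Effective price, airline A: ${airline_a:.2f}")
print(f"Effective price, airline B: ${airline_b:.2f}")
# Although airline B posts the lower fare, its longer travel time and worse
# on-time record can make A the cheaper option once time costs are counted.

Under these assumed numbers, airline A has the lower effective price despite its higher ticket price, which is why airlines compete on schedule and reliability as well as on fares.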

Because a traveler’s choice of airline is influenced by factors other than price, airlines will compete on nonprice factors to attract more customers. In the presence of limited price competition, airlines competed for customers by offering a wide array of flight times. By offering many flights to and from specified cities, an airline increased the likelihood that customers would be able to find a flight that fit their schedule well. In addition, with many flights each day, any given flight was less likely to be crowded. Time-series data provides some evidence about the impact of regulation on load factors. The load factor for a flight is the number of passengers on the flight divided by the number of seats on the airplane. In an unregulated environment, load factors typically will increase as the length of a flight increases. Airlines would lose a substantial amount of money if they flew planes that were nearly empty over long distances. Consequently, unregulated, profit-maximizing airlines would be expected to devote considerable effort to securing relatively high load factors on long flights. In contrast, if regulation set relatively high fares for long-distance flights, then airlines would be expected to compete particularly vigorously for long-distance passengers by offering many long-distance flights and associated low load factors on these flights. Figure 16.6 presents the estimated relationship between load factors and flight distance for 1969 (when

fares were tightly regulated by the CAB) and for 1976 and 1981 (as deregulation of the airline industry was under way). As expected, load factors declined with distance under regulation and rose with distance under deregulation. By setting relatively high fares for long-distance travel, regulation induced heightened nonprice competition on long-distance flights that resulted in increased flight frequency and reduced load factors. Although consumers valued the lower load factors, regulation induced a suboptimal mix of fares and load factors. Consumers would have been willing to accept lower fares and higher load factors than those that arose under CAB regulation.37

Figure 16.6 Load Factors and Distance Source: David R. Graham, Daniel P. Kaplan, and David S. Sibley, “Efficiency and Competition in the Airline Industry,” Bell Journal of Economics 14 (Spring 1983): 118–138. Copyright 1983. Reprinted with permission of RAND.
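A minimal sketch of why load factors matter for cost: with the expense of operating a flight largely fixed, the cost per passenger is the flight-level cost spread over the seats actually filled. The cost and seat figures below are hypothetical.

# Minimal sketch (assumed numbers): lower load factors spread a flight's
# largely fixed cost over fewer passengers, raising cost per passenger.

def cost_per_passenger(cost_per_flight, seats, load_factor):
    """Cost of operating the flight divided by the passengers actually carried."""
    passengers = seats * load_factor
    return cost_per_flight / passengers

cost_per_flight = 30_000.0  # hypothetical cost of operating one flight
seats = 150

for load_factor in (0.50, 0.55, 0.65):
    print(f"load factor {load_factor:.0%}: "
          f"${cost_per_passenger(cost_per_flight, seats, load_factor):.2f} per passenger")

The sketch conveys the trade-off discussed above: the low load factors induced by regulation were valued by passengers, but they came at a per-passenger cost that fares ultimately had to cover.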

Excessive nonprice competition also occurred with regard to on-board services, including the number of flight attendants and food quality. Between 1976 and 1982, the number of flight attendants per passenger declined by 16 percent.38 During the same period, the Consumer Price Index for food increased by 62 percent. In contrast, the cost of food served by the airlines increased by only 40 percent, suggesting a reduction in the quality and/or quantity of food served on the airlines as deregulation took hold in the industry.39 Deregulation brought lower fares on many routes, but some dimensions of service quality declined. To begin to assess the overall impact of deregulation on consumers, it is useful to consider changes in the volume of air travel. The marked increase in air travel following deregulation suggests that deregulation increased consumer welfare. In 1978, domestic airlines carried 275 million passengers. Thirty years later, they carried 741 million passengers.40 Development of the hub-and-spoke system Although regulation may have induced relatively high quality on certain dimensions of airline service, it may have reduced quality on one important dimension: flight frequency, which increased after the airline industry was deregulated. Consumers now have a wider array of departure times from which to choose.

Flight frequency has increased even as load factors have increased due to the development of the hub-and-spoke system. To understand the hub-and-spoke system, consider an airline serving two large destinations that we will refer to as cities B and C. It also serves two smaller destinations, called towns A and D, with A reasonably near B and D reasonably near C. (Imagine these destinations along a line running from A to B to C to D, as in figure 16.7.) The array of route markets consists of A-B (that is, traveling from A to B or B to A), A-C, A-D, B-C, B-D, and C-D. Given these six markets, one possible route system is to have six pairs of direct flights, so that, for example, someone in town A who wished to fly to city C would take a direct flight from A to C.

Figure 16.7 A Simple Hub-and-Spoke System

An alternative route system is to turn cities B and C into hubs, so that all travelers go through these two points. For example, an individual traveling from A to C would first fly from A to B and then take a second flight from B to C. The virtue of this system is that it concentrates traffic. So, for example, a flight from B to C would not include only those whose initial point of departure and final point of arrival are B and C, respectively. The flight would also include travelers who started in A and want to go to C or D. Cost savings arise when larger aircraft are employed to provide transport between B and C, because the per passenger cost of providing air transport declines as airplane capacity increases. In contrast, with direct flights, smaller aircraft with higher per passenger costs would have to be used to transport someone directly from A to C.

This concentration of traffic permits a larger array of flights. If the density of traffic from A to C is limited, it may only be economical to offer a single direct flight from A to C each day. In contrast, when travelers who seek to fly from A to C are placed on the same airplane as travelers who wish to fly from A to B and from A to D, it may be economical to offer several daily flights from A to B. In addition, many flights between B and C will be economical, given the high density of travelers between these hubs.

The hub-and-spoke system has been widely adopted in the airline industry since its deregulation. To illustrate the substantial change in route structure that deregulation has brought, figure 16.8 depicts the route structure of Western Airlines before and after deregulation. Salt Lake City and Los Angeles are the two hubs in this route system. By restricting the entry of existing carriers into currently served markets, CAB regulation prevented the massive route restructuring required to move to a hub-and-spoke system.

Figure 16.8 Adoption of the Hub-and-Spoke System by Western Airlines Source: Steven Morrison and Clifford Winston, The Economic Effects of Airline Deregulation (Washington, DC: Brookings Institution Press, 1986), p. 8. Copyright 1985 by The New York Times Company. Reprinted by permission.
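The traffic-concentration logic described above can be illustrated numerically. In the sketch below, routing A-to-C passengers through hub B lets each leg use a larger, fuller aircraft; all costs and passenger counts are invented for illustration and are not taken from the text.

# Hypothetical sketch of hub-and-spoke traffic concentration for the A-B-C-D network.
# Per-leg cost = fixed cost of operating the leg + a per-passenger variable cost.

def leg_cost_per_passenger(fixed_cost, variable_cost_per_passenger, passengers):
    """Cost of one flight leg divided by the passengers it carries."""
    total_cost = fixed_cost + variable_cost_per_passenger * passengers
    return total_cost / passengers

direct = leg_cost_per_passenger(8_000, 30, 40)     # small aircraft, thin A-C traffic
leg_ab = leg_cost_per_passenger(12_000, 25, 150)   # large aircraft, A-B spoke shared with A-B and A-D travelers
leg_bc = leg_cost_per_passenger(12_000, 25, 200)   # large aircraft, dense B-C trunk

print(f"Direct A-C flight:      ${direct:.2f} per passenger")
print(f"Via hub B (A-B + B-C):  ${leg_ab + leg_bc:.2f} per passenger")

With these assumed numbers the two-leg hub routing is cheaper per passenger than the thin direct flight, even though it requires two takeoffs and landings, because the fixed cost of each leg is spread over far more travelers.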

Although the hub-and-spoke system typically provides a wider array of departure times, it tends to increase passenger travel time, because many trips require at least one stop between the traveler’s origin and destination. To examine the overall impact of the hub-and-spoke system on consumers, a study analyzed

812 city pairs with and without regulation. The study began with actual data on fares and flight frequency for 1977 to create a regulatory benchmark. The study then employed data from 1983 to estimate the fares and flight frequency that would have prevailed in 1977 in the absence of regulation. As table 16.8 reports, the study estimates that deregulation would have reduced real fares in medium hub to large hub routes and in large hub to large hub routes, but increased fares in other markets. This finding is consistent with regulated airfares being set above competitive levels in dense markets and below competitive levels in less dense markets. The movement to hub-and-spoke systems is estimated to have increased flight frequency on most routes. For instance, nonhub to nonhub flights and small hub to small hub flights increased by one-third. Overall, the route-weighted average flight frequency increased by 9.2 percent under the hub-and-spoke system.

Table 16.8 Weighted Average Change in Fares and Frequency from Deregulation, 1977

Category of Route        Number of City Pairs    Coach Fare (%)    Discount Fare (%)    Frequency (%)
Nonhub–nonhub             51                     21.2               22.1                 33.9
Nonhub–small hub          52                     22.5               12.3                  1.4
Nonhub–medium hub         45                      5.4               −0.4                 24.3
Nonhub–large hub          53                     16.3                9.1                 28.7
Small hub–small hub       60                     15.3               11.3                 33.9
Small hub–medium hub      69                     18.7               10.4                 20.8
Small hub–large hub       57                     25.0                8.1                 19.2
Medium hub–medium hub     69                     15.6                2.0                 −4.3
Medium hub–large hub     161                     17.4               −6.8                 14.4
Large hub–large hub      205                      8.6              −17.6                 −3.5

Source: Steven Morrison and Clifford Winston, The Economic Effects of Airline Deregulation (Washington, DC: Brookings Institution Press, 1986), p. 23. Reprinted by permission.

The development of the hub-and-spoke system points out an important lesson from economic regulation which was perhaps best stated by Alfred Kahn, former chairman of the CAB: The essence of the case for competition is the impossibility of predicting most of its consequences. The superiority of the competitive market is the positive stimuli it provides for constantly improving efficiency, innovating, and offering consumers diversity of choices.41

Before deregulation, economists generally believed that regulation produced excessive fares and service quality. The economists failed to realize, though, that by restricting entry, regulation was precluding a restructuring of the route system that would reduce operating costs and increase flight frequency. Although deregulation reduced service quality by increasing load factors and travel time and reducing on-board services, it raised quality by increasing the number of departures. The development of the hub-and-spoke system illustrates the more general lesson that it is only in retrospect that we can fully comprehend the effects of regulation.

Welfare estimates from changes in price and quality

To further assess the welfare effects of regulation, consider the changes in fares, travel times, and delays during the initial phase of deregulation from late 1976 to late 1978.42 This period captures the initial effect of price deregulation but not the effect of the restructuring of the route system. The average real standard coach fare fell by nearly 5 percent during this period, while first class fares declined from 150 percent to 120

percent of coach fare. Because lower fares induced more travel, load factors rose from 55.6 percent to 61.0 percent. After accounting for the effect of deregulation on airfares, travel time, and delays, deregulation was estimated to increase consumer welfare by an amount roughly equal to 10 percent of an average regulation-era round-trip fare, or about $25 to $35 per round trip (recall that all figures reflect 1985 dollars). Thus, any undesirable changes in travel time and delay were more than offset by the reduction in fares. It should be noted that this estimate does not account for welfare reductions due to reduced on-board services and higher load factors.

Table 16.9 provides more recent estimates of the welfare gains from deregulation. Consumers are estimated to enjoy welfare gains of $12.4 billion annually from lower fares and $10.3 billion from greater flight frequency under deregulation. These gains are offset to some extent by welfare losses due to increases in travel restrictions, travel time, load factors, and the number of connections. On balance, consumer welfare gains from airline deregulation are estimated to exceed $18 billion annually.

Table 16.9 Annual Welfare Gains from Deregulation ($ billion)

Category                                    Gain
Fares                                       12.4
Travel restrictions                         −1.1
Frequency                                   10.3
Load factor                                 −0.6
Number of connections                       −0.7
Mix of connections (on-line/interline)       0.9
Travel time                                 −2.8
Total                                       18.4

Source: Steven A. Morrison and Clifford Winston, The Evolution of the Airline Industry (Washington, DC: Brookings Institution Press, 1995), p. 82.
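A quick arithmetic check confirms that the component estimates in table 16.9 sum to the reported total; the short Python snippet below simply adds the figures from the table.

# Check that the component estimates in table 16.9 sum to the reported total
# annual welfare gain (figures in billions of dollars).

gains = {
    "Fares": 12.4,
    "Travel restrictions": -1.1,
    "Frequency": 10.3,
    "Load factor": -0.6,
    "Number of connections": -0.7,
    "Mix of connections (on-line/interline)": 0.9,
    "Travel time": -2.8,
}

total = sum(gains.values())
print(f"Total annual gain: ${total:.1f} billion")  # matches the reported 18.4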

Increased industry competition under deregulation has also reduced industry profit. Many airlines have declared bankruptcy, and as of 2014 the airline industry as a whole had lost $35 billion since deregulation was instituted.43

Dynamic productive inefficiency

To assess the effect of regulation on productivity, a study compared the average unit cost reductions achieved by twenty-one U.S. airlines (including all trunk airlines) with the corresponding reductions achieved by twenty-seven non-U.S. airlines.44 This comparison is informative, because the aircraft and fuel employed by all airlines are sold in world markets, and the operation and maintenance of all aircraft are governed by strict international standards. The study compared the annual percentage decline in unit costs over periods of regulation (1970–1975) and deregulation (1975–1983). The study found that under regulation, the annual decline in unit cost for U.S. airlines was only 3.0 percent, compared to 4.5 percent for non-U.S. airlines. In contrast, after deregulation, U.S. airlines reduced unit costs annually by 3.3 percent, whereas non-U.S. airlines reduced their unit costs by only 2.8 percent annually. These findings suggest that, as it did in the railroad industry, regulation likely reduced productivity growth in the airline industry.

Airline safety

Many observers expressed fears that deregulation would cause airline safety to deteriorate. Superficially,

this fear might appear to be irrational, because airline safety is controlled by the Federal Aviation Administration and the Airline Deregulation Act did not affect safety regulations. However, nonprice competition under regulation may have generated safety levels above those mandated by law. Consequently, if deregulation reduces nonprice competition and intensifies efforts to control costs, deregulation could conceivably reduce airline safety. Specifically, in an attempt to reduce costs, airlines might hire less experienced pilots or reduce aircraft maintenance, for example. Furthermore, increased air traffic and congestion after deregulation might increase the likelihood of accidents. However, countervailing forces suggest deregulation could lead to increased airline safety. For example, increased air travel induces airlines to purchase additional modern planes, which typically are more reliable than older planes. In addition, increased competition among airlines could spur them to deliver the high levels of safety that travelers value.

Airline safety can be measured in several ways, including the lack of fatalities, accidents, and dangerous incidents (such as near mid-air collisions). These variables can be measured per departure or per passenger mile. Regardless of the measure employed, studies consistently find that airline safety generally has increased since the deregulation of the industry.45 To illustrate, figure 16.9 depicts the number of fatal accidents for U.S. airlines over time. Such accidents have generally declined in recent decades, despite the ever-increasing number of airline passengers and flights.

Figure 16.9 Number of Fatal Accidents in the U.S. Airline Industry, 1950–2014 Source: Planecrashinfo.com.

Competition and Antitrust Policy after Deregulation

Industry regulation tends to reduce the need for antitrust oversight. Price collusion is not a concern when a regulatory agency sets industry prices. Furthermore, concerns about predation and other forms of entry deterrence tend to be muted when a regulator is overseeing entry into an industry. In contrast, an active role for antitrust policy typically emerges once an industry is deregulated. Antitrust can help ensure that industry suppliers do not conspire to raise price, undertake anticompetitive actions to deter entry, or engage in predatory behavior. The Airline Deregulation Act of 1978 explicitly recognized the important role for

antitrust policy in a deregulated airline industry by mandating that the CAB place additional emphasis on the standards for mergers outlined by Section 7 of the Clayton Act. The 1978 act required formal approval of all proposed mergers or acquisitions in the airline industry. Many mergers and acquisitions have been approved in the airline industry since 1978. Recent mergers include those between Delta and Northwest Airlines in 2008, Continental and United Airlines in 2010, AirTran Airways and Southwest Airlines in 2011, and US Air and American Airlines in 2013.46 These mergers have increased supplier concentration in the U.S. domestic airline industry. As of 2016, American, Southwest, Delta, and United Airlines accounted for more than two-thirds of all U.S. domestic revenue passenger miles.47

Measuring Concentration in the Airline Industry

Supplier concentration in the airline industry also can be measured by calculating the number of “effective competitors,” which is defined as the inverse of the sum of the squares of the firms’ market shares. (Recall from chapter 5 that this sum is referred to as the Herfindahl-Hirschman Index, or HHI.) Formally, if si is the market share of firm i and there are n firms, then the number of effective competitors equals 1/[s1² + s2² + … + sn²]. The effective number of competitors for an industry is the number of equal-sized firms that would give the same value for the HHI. For example, if there is only one firm, its market share is one, and so the number of effective competitors is one. If there are n firms, each with the same market share (1/n), then the number of effective competitors is 1/[(1/n)² + … + (1/n)²] = 1/[n(1/n²)] = n. However, if there are n firms and market share is skewed so that a few firms have a large part of the market, then the number of effective competitors is less than n.

Competition in the airline industry typically occurs at the level of a route (e.g., between Boston and Chicago). Therefore, measures of concentration at the national level can be misleading and typically are not ideal for understanding the competitiveness of airline markets. To develop a better understanding of trends in competition since deregulation, figure 16.10 plots the number of effective competitors per route, averaged over: (1) all routes; (2) routes in excess of 2,000 miles; and (3) routes less than 500 miles. The figure indicates that competition increased immediately after deregulation. The average number of effective competitors (for all routes) climbed from 1.7 in 1979 to 2.5 in 1986. The wave of mergers in the mid- to late 1980s reduced the number of effective competitors. However, on average, route markets remained more competitive under deregulation than they were under regulation. The increased competition is especially pronounced on routes in excess of 2,000 miles. As noted below, though, the hub-and-spoke system has resulted in some airports being dominated by a single carrier.

Figure 16.10 Domestic Competition at the Route Level Source: Steven A. Morrison, “Deregulation of U.S. Air Transportation,” in K. Button, D. A. Hensher, eds., Handbook of Transport Strategy, Policy and Institutions, Handbooks in Transport Economics, vol. 6 (Amsterdam: Elsevier, 2005), p. 411.
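The effective-competitors measure plotted in figure 16.10 is straightforward to compute. The Python sketch below simply inverts the HHI as defined in the text; the route-level market shares used in it are hypothetical.

# Sketch of the effective-competitors calculation: the inverse of the HHI
# computed from market shares expressed as fractions. Shares are hypothetical.

def effective_competitors(shares):
    """Inverse of the Herfindahl-Hirschman Index of the given share fractions."""
    hhi = sum(s ** 2 for s in shares)
    return 1.0 / hhi

# Four equal-sized carriers: the measure returns 4.
print(effective_competitors([0.25, 0.25, 0.25, 0.25]))            # 4.0

# Four carriers with skewed shares: fewer than 4 effective competitors.
print(round(effective_competitors([0.60, 0.20, 0.15, 0.05]), 2))  # about 2.35

As the second case shows, a route served by four carriers can still behave like a market with barely more than two equal-sized competitors when shares are heavily skewed toward one firm.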

Anticompetitive Nonprice Practices Contentious antitrust issues have surfaced in the airline industry since its deregulation. Numerous practices have come under scrutiny for possibly making entry more difficult, driving competitors from the market, and, more generally, creating an anticompetitive advantage for some or all established firms. Airport gates and slots Restrictions on the right to take off or land at a given airport at a specific time of day can constitute important barriers to entry. These takeoff and landing slots have long been in short supply at several major airports, including O’Hare in Chicago, LaGuardia and Kennedy in New York, and Ronald Reagan National Airport in Washington, DC. When takeoff and landing slots are in short supply, potential competition generally is less effective at disciplining incumbent airlines. In practice, these slots often are controlled by a small number of airlines. At LaGuardia Airport, American, Delta, and US Airways controlled 62 of 98 slots in 1998. American and United controlled 82 of 101 slots at O’Hare Airport in 1998, well above the 66 of 100 slots they controlled in 1986.48 Concentration in the control of airport gates also can inhibit competition. In practice, a few incumbent airlines often control a large number of the gates at hub airports. After it acquired Ozark Airlines in 1986, TWA controlled more than 75 percent of the gates at the St. Louis airport. Shortly thereafter, TWA raised its fares by 13–18 percent on its flights out of St. Louis.49 Barriers to entry are exacerbated by the fact that airports often enter into contracts that provide a specified airline exclusive use of a gate for many years. Frequent flyer programs American Airlines introduced an innovation in 1981 that has since become an industry fixture: frequent flyer programs. These programs boost an airline’s demand by offering awards of free flights to passengers

who have flown a specified number of miles on the airline. Such programs are especially effective with business travelers, who typically do not pay for the flight they take yet often are permitted to enjoy the free flights they earn. It is argued that these programs may provide an anticompetitive advantage to airlines with extensive route systems, because a traveler interested in accumulating frequent flyer miles will prefer to concentrate her mileage on one airline rather than split it between two smaller airlines. For example, suppose it takes 40,000 miles to receive a free domestic flight and a passenger flies 50,000 miles. Other things being equal, this passenger will prefer to fly 50,000 miles on one airline than to fly 25,000 miles on each of two different airlines. In addition, because larger airlines usually fly to more destinations, a free flight on a large airline typically will be more highly valued than a free flight on a smaller one. Marketing alliances In 1998 three pairs of airlines—American Airlines and US Airways, United Airlines and Delta Air Lines, and Continental Airlines and Northwest Airlines—proposed “marketing alliances.” The allied airlines link their route systems, so passengers do not have to pay a higher fare for traveling on, say, American Airlines and US Airways on the same trip. The allied airlines also consolidated their frequent flier programs. Alliances enable airlines to reap some benefits of a merger without incurring many of the costs of a merger. In principle, alliances can have mixed effects. They can cause fares to rise if the allied airlines coordinate their pricing decisions. However, alliances could promote lower fares by allowing for more efficient pricing of flights that involve multiple airlines. One study finds alliances to enhance welfare.50 Anticompetitive Pricing Practices Predatory pricing In principle, incumbent airlines might engage in predatory pricing to drive new entrants from the market.51 Commenting on the exit of many of the new airlines that entered the industry soon after it was deregulated, former CAB chairman Alfred Kahn said: I take perverse satisfaction in predicting the demise of price-cutting competitors like World and Capital Airways if we did nothing to limit the predictable geographically discriminatory response of the incumbent carriers to their entry.52

Smaller airlines have complained that their larger rivals have engaged in predatory pricing. As an example, Frontier Airlines accused United Airlines of using predatory pricing to drive Frontier from the Billings–Denver route.53 In 1994, Frontier began to serve the route with a fare of around $100, which was about half of United’s preentry fare (see figure 16.11). United responded with a fare comparable to Frontier’s fare. These low fares caused traffic on the route to increase by 60 percent. After about a year, Frontier withdrew from the route and lodged a complaint with the Department of Justice (DOJ). United increased its fare precipitously after Frontier’s departure.54

Figure 16.11 Evaluation of Postentry Pricing for Predation

What is one to make of this episode? Is United just responding as any good competitor would, or is it acting in a predatory manner? Recall from chapter 8 that predatory pricing is pricing with the intent to induce a competitor to exit the market. In practice, to establish predatory pricing in an antitrust case, it must be shown that the predator priced below some appropriately defined measure of unit cost.55 Let us suppose that the appropriate measure is average cost. So, did United price below its average cost? Could its pricing behavior only be rationalized if it induced the exit of Frontier?

To address these questions, let DF(PF; PU) and DU(PU; PF) represent the demand functions for Frontier and United, respectively. The variables PF and PU denote the prices charged by Frontier and United, respectively. Also let ACF(qF) and ACU(qU) denote the average cost functions of the two airlines, where qF and qU denote the quantities supplied by Frontier and United. We assume that each firm’s average cost declines as its output increases because of the fixed costs associated with supplying air travel. In light of the low marginal cost of transporting one more passenger (given that the fixed cost of flying a plane has already been incurred), it is unlikely that the $100 fares that United and Frontier charged were below marginal cost. However, it is more difficult to determine whether these fares were below average cost.

Assuming that Frontier was able to forecast demand accurately and it did not wish to operate at a loss, we would expect 100 > ACF(DF(100; 200)). This inequality says that Frontier’s $100 fare exceeds its expected average cost, assuming that United continued to charge $200. If this inequality did not hold, then it would be difficult to rationalize Frontier’s entry, as it would have been pricing below its average cost given United’s current price, so entry would be unprofitable for Frontier at this or any lower price for United.56 We also know that Frontier eventually exited after United matched its price, so it is reasonable to infer that Frontier was incurring losses, which means 100 < ACF(DF(100; 100)). Figure 16.11a depicts a setting where Frontier’s price of 100 exceeds its average cost when United sets a price of 200, but Frontier’s price is below its average cost when United sets a price of 100. Thus, 100 > ACF(DF(100; 200)), whereas 100 < ACF(DF(100; 100)).

Market demand, D(P) in figure 16.11, is the total number of units demanded when both firms set price P. Because the firm that sets the lowest price should be expected to have the highest market share, Frontier’s demand curve will be closer to the market demand curve as Frontier reduces its price further below United’s price. Figure 16.11a assumes that Frontier’s demand is half of market demand when Frontier and United set the same price. Thus, Frontier and United are assumed to offer comparable service, and each airline can satisfy all realized demand for its service. Finally, notice that Frontier’s demand curve shifts inward when United lowers its price from 200 to 100. Because these airlines offer substitute services, a reduction in United’s price causes some passengers to switch from Frontier to United, which reduces the demand for Frontier’s service.

If United were no more efficient than Frontier—that is, if ACU(q) ≥ ACF(q) for all q—and if the two firms faced similar demand curves, then the preceding discussion indicates that United’s price would also be below its average cost: 100 < ACU(DU(100; 100)).57 This situation is depicted in figure 16.11b.
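The average-cost comparisons above can be illustrated numerically. In the sketch below, the linear demand function and the fixed-plus-marginal cost structure are assumptions chosen so that Frontier’s $100 fare covers its average cost when United charges $200 but not when United matches at $100; none of the parameter values come from the actual case.

# Illustrative sketch of the two average-cost comparisons in the text, with
# assumed demand and cost parameters (not data from the Frontier-United episode).

def frontier_demand(p_frontier, p_united):
    """Hypothetical linear demand for Frontier given the two fares."""
    return max(0.0, 600 - 4 * p_frontier + 2 * p_united)

def frontier_avg_cost(q):
    """Declining average cost: a fixed cost spread over passengers plus marginal cost."""
    fixed_cost, marginal_cost = 40_000.0, 20.0
    return fixed_cost / q + marginal_cost

for p_united in (200, 100):
    q = frontier_demand(100, p_united)
    ac = frontier_avg_cost(q)
    status = "covers" if 100 > ac else "does not cover"
    print(f"P_U = {p_united}: Frontier sells {q:.0f} seats, AC = {ac:.2f}, "
          f"so a fare of 100 {status} average cost")

With these assumed parameters, Frontier’s fare exceeds its average cost while United charges 200 but falls below it once United matches at 100, mirroring the inequalities 100 > ACF(DF(100; 200)) and 100 < ACF(DF(100; 100)).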
By matching Frontier’s price, United would be creating a situation that is not stable in the long run, as both firms are incurring losses. To rationalize matching Frontier’s price, United would have to expect higher profit in the future. The increased profit could conceivably arise from at least three sources. First, Frontier might exit the route, and United could then increase its price (which is precisely what happened in practice). Second, United’s “punishment” of Frontier might contribute to United’s reputation for responding

aggressively to entry. Even if Frontier did not exit the Billings–Denver route, United’s enhanced reputation might enable it to earn higher profit in other markets by more effectively deterring entry into these markets. Third, United’s price matching might encourage Frontier to raise its price, anticipating that United would follow the price increase, thereby enabling the two firms to achieve a long-run equilibrium at a price above 100 that would allow both airlines to earn positive profit.

As noted, the preceding analysis presumes that United and Frontier had comparable demand and cost functions. If United had a sufficiently pronounced cost advantage over Frontier, then United might have been able to earn positive profit by matching Frontier’s price of 100. This situation is depicted in figure 16.11b, where United’s average cost function now lies low enough that a price of 100 covers United’s average cost. Even if United did intend to drive Frontier from the market in this case, United would be driving out a less efficient firm. It is also possible that customers might deem United’s flights to be of higher quality than Frontier’s flights. In this case, when the two firms set the same price, United will experience greater demand than Frontier, and so achieve lower average cost than Frontier. In particular, United’s average cost might have been less than 100. In this event, United’s price matching policy could have been profitable even if it did not intend to drive Frontier from the market. Of course, if passengers were willing to pay a premium to fly on United Airlines, it is not clear why United would match Frontier’s price rather than charge travelers the higher price they were willing to pay. Without additional information, it is difficult to determine whether United engaged in predatory pricing in this instance. This discussion illustrates some of the many subtle issues that can arise when attempting to assess whether an observed pricing practice constitutes predatory pricing.

Price signaling

As discussed in chapter 4, parties to a collusive pricing arrangement must agree on the prices to set. Direct communication about the prices to charge, perhaps at a clandestine meeting, may be the most effective means of communication, but it may also entail the greatest risk. Evidence of direct contact between firms can constitute extremely damaging evidence in a price-fixing case. Indirect communication through price announcements and reactions to price announcements avoids the “smoking gun” problem, but typically is less effective than direct communication.

In 1991, the DOJ launched an investigation of possible tacit communication through price signaling by the major U.S. airlines and the Airline Tariff Publishing Company (ATPCO).58 Each day, airlines submit fare changes and fares for new routes to ATPCO, as well as such ancillary information as first and last ticket dates (that is, when a travel agent can begin and end selling seats at a particular fare). If a first ticket date is some time in the future, then an airline is preannouncing a fare change. ATPCO disseminates this information to all major airlines and to computer reservation systems. Severin Borenstein consulted for the DOJ on its antitrust charge and wrote that the DOJ claimed that the airlines had carried on detailed conversations and negotiations over prices through ATPCO. It pointed to numerous instances in which one carrier on a route had announced a fare increase to take effect a number of weeks in the future.
Other carriers had then announced increases on the same route, though possibly to a different fare level. In many cases cited, the airlines had iterated back and forth until they reached a point where they were announcing the same fare increase to take effect on the same date.59

Attached to these fare changes was further information in the form of fare basis codes and footnote designators that, it was argued, allowed airlines to signal a connection between fares on different routes. For example, suppose airlines A and B compete on the Atlanta–Albuquerque and Boston–Buffalo routes and

that the former route is particularly important for airline A whereas the latter route is particularly important for airline B. Now suppose that A lowers its fare on the Boston–Buffalo route. B might then lower its fare on the Atlanta–Albuquerque route and attach information that would link the fare reduction to fares on the Boston–Buffalo route. Such behavior could signal that B’s reduced fare on the Atlanta–Albuquerque route is a response to A’s reduced fare on the Boston–Buffalo route, and if A wants B to raise its fare on the Atlanta–Albuquerque route, then A should raise its fare on the Boston–Buffalo route. The airlines and ATPCO argued that the preannouncement of fare changes benefited consumers by facilitating travel planning. The parties also contended that their activities should be judged under the rule of reason, because, even if the practices did promote collusion, they also benefited consumers. The case resulted in a consent decree. The airlines were prohibited from preannouncing price increases except when the announcements were heavily publicized (so that they could be of substantial value to consumers). Furthermore, the announcements could only convey basic information about fare changes and could not link fares with special codes. Concentration and air fares The hub-and-spoke system has resulted in some routes being dominated by a single carrier. For example, in 2016, American Airlines carried 68 percent of passengers at the Dallas/Fort Worth International Airport and 71 percent of the passengers at the Charlotte Douglas International Airport. Delta Airlines’ corresponding market share was 73 percent at the Hartsfield-Jackson Airport in Atlanta, and United Airlines’ share was 51 percent in George Bush Houston Intercontinental Airport.60 Such concentration is potentially problematic for the following reason. Suppose a passenger travels from Baltimore to Los Angeles through Pittsburgh. Even though Southwest Airlines has a substantial market share of traffic in Pittsburgh, it has no real market power over the Baltimore–Los Angeles route. This is because a traveler could easily go from Baltimore to Los Angeles by, for example, flying on American Airlines through Dallas/Fort Worth or on United Airlines through Chicago. However, a passenger whose final destination is Pittsburgh may find it difficult to find a convenient flight on an airline other than Southwest. Passengers whose origin or destination is a hub with a single dominant firm may be forced to pay relatively high fares. As Figure 16.12 reveals, these passengers have paid price premiums in the range of 10–30 percent in recent years.

Figure 16.12 Price Premiums at the Ten Largest Hub Airports, 1986–2010 Source: Severin Borenstein, “What Happened to Airline Market Power?” University of California at Berkeley discussion paper, September 3, 2012 (http://faculty.haas.berkeley.edu/borenste/AirMktPower 2013.pdf). Note: Price premiums reflect the extent to which the average fare at the largest U.S. hub airports (in Atlanta, Charlotte, Cincinnati, Denver, Dallas/Ft. Worth, Detroit, Memphis, Minneapolis, Chicago O’Hare, and Pittsburgh) exceeds the average fare at other airports.

Lessons from Regulation and Deregulation The experience in the airline industry reveals that regulatory policy and antitrust policy typically must work together if the potential gains from deregulation are to be fully realized. The end of regulation can create a void that antitrust policy may need to fill. Thus, deregulation does not necessarily eliminate the government’s role in an industry. Instead, deregulation signals a change in the government’s role from one of detailed industry oversight and control to one of maintaining industry competition. Although the trucking and airline industries were subject to similar regulatory practices, regulation had different impacts on the two industries. Trucking firms did not engage in vigorous nonprice competition when their prices were regulated. Consequently, they were able to earn substantial extranormal profit. In contrast, airlines competed away most of the potential profits that regulation created through abnormally high fares. They did so through the provision of frequent flights, low load factors, and high-quality on-board services. Why did regulation induce excessive nonprice competition in the airline industry but not in the trucking industry? One possible answer is that the demand for passenger air service is more responsive to nonprice factors than is the demand for surface freight transportation. If demand is very elastic with respect to these factors, then a firm has a strong incentive to invest in them, since a relatively small investment can produce a substantial increase in demand. The CAB regulation of passenger air service provides at least two important lessons. First, competition will follow the path of least regulatory resistance. Because regulation cannot fully control every dimension of competition, firms often will compete on the least regulated dimensions. In the case of airline regulation, the CAB prevented fare competition but did not control service quality competition. As a result, regulation produced relatively high fares and relatively high levels of service quality. Second, it is difficult to identify all relevant effects of prevailing regulations. Competition entails developing new ways to provide more highly valued products at lower cost, and it typically is difficult to anticipate the innovations that competition would bring if regulation were ended. Although economists long predicted deregulation would increase welfare in the airline industry, they did not anticipate the development of the hub-and-spoke system, a development that was stifled by the entry restrictions imposed by the CAB. In part because of the hub-and-spoke system, deregulation has reduced fares in many markets and has not reduced service quality as much as some feared. Summary The experience in the transportation sector provides at least four important lessons about regulation more generally. First, regulation is often imperialistic: Although it is difficult to regulate an industry, it is often even more difficult to regulate only part of the industry. This is what the ICC discovered when it regulated

the railroads but not the motor carriers. When rail rates were set above cost, truckers were able to undercut rail prices and thereby serve demand that would have been more efficiently served by the railroads. To limit difficulties inherent in such a situation, the Motor Carrier Act of 1935 extended ICC control from railroads to motor carriers. Second, when regulation controls one dimension of firm performance, industry suppliers may compete vigorously on other dimensions. In the airline industry, firms competed to deliver high-quality service in response to high regulated prices. Consumers value higher quality, but they may be harmed by the absence of lower-priced, lower-quality alternatives. Third, regulation often promotes inefficient practices. For instance, railroads were not permitted to abandon unprofitable lines, and inefficient trucking companies were permitted to operate. When entry and exit are limited, a key attribute of competition is forfeited. It is no longer the case that the efficient suppliers thrive and inefficient suppliers perish. Fourth, regulation can reduce welfare in ways that are difficult to anticipate. Regulation can stifle innovation that would have taken place in an unregulated market. Regulation reduced productivity growth in both the railroad and airline industries. A case in point is the hub-and-spoke system, which developed only after regulation of the airline industry ended. As a result of this unanticipated restructuring of the industry, deregulation led to both lower fares and increased flight frequency.
Questions and Problems
1. How could one measure the effect of regulation on profits in the trucking industry? Did regulation increase or reduce industry profit?
2. Why did the railroads favor regulation in the 1880s but favor deregulation in the 1950s?
3. The Staggers Act gave railroads considerable flexibility to set rates except on routes where market dominance prevailed. If a planner sought to maximize industry welfare, how should the planner define “market dominance”?
4. During the years when the CAB regulated the airline industry, the unit cost for U.S. airlines declined 3 percent annually, whereas the unit cost for (unregulated) non-U.S. airlines declined 4.5 percent annually. How might regulation have reduced industry productivity gains?
5. Do you think the deregulation of the railroad industry produced Pareto gains? How about the deregulation of the trucking industry? If Pareto gains did not arise, which parties gained and which parties lost from deregulation?
6. What role should antitrust policy play after an industry is deregulated?
7. When the airline and trucking industries were regulated, rates were set well above cost on many routes in both industries. Nonprice competition often eliminated much of the rent in the airline industry but not in the trucking industry. What were the likely causes of this difference? Do you believe the nonprice competition likely increased welfare?
8. Has airline deregulation made all consumers better off?
9. Why did deregulation lead to lower wages in the airline and trucking industries?
10. Use the economic theories of regulation from chapter 10 to explain the observed patterns of deregulation in the U.S. transportation sector.
11. Explain briefly the ways in which one might estimate the impact of regulation on air fares.
12. The trucking industry was regulated in 1935 in part because trucks were attracting a substantial amount of the traffic traditionally served by railroads. Do you think it would have been preferable to leave the trucking industry unregulated? Would it have been better to have deregulated the railroad industry at that time?

Notes

1. For the economics of transportation more generally rather than just the effects of regulation on the transportation industry, see Clifford Winston, “Conceptual Developments in the Economics of Transportation: An Interpretive Survey,” Journal of Economic Literature 23 (March 1985): 57–94, and the references cited therein. 2. A fictional exception took place in the 1987 James Bond movie, The Living Daylights. A Russian defector is transported across the Iron Curtain in a capsule propelled through a pipeline. 3. General background references used for our discussion of the regulation of surface freight transportation are Stephen Breyer, Regulation and Its Reform (Cambridge, MA: Harvard University Press, 1982); Theodore E. Keeler, Railroads, Freight, and Public Policy (Washington, DC: Brookings Institution Press, 1983); and Thomas Gale Moore, “Rail and Trucking Deregulation,” in Leonard W. Weiss and Michael W. Klass, eds., Regulatory Reform: What Actually Happened (Boston: Little, Brown, 1986), pp. 14–39. 4. For a discussion of the JEC’s role in setting rail rates, see Robert H. Porter, “A Study of Cartel Stability: The Joint Executive Committee, 1880–1886,” Bell Journal of Economics 14 (Autumn 1983): 301–314. 5. Clifford Winston, Thomas M. Corsi, Curtis M. Grimm, and Carol A. Evans, The Economic Effects of Surface Freight Deregulation (Washington, DC: Brookings Institution Press, 1990). 6. Robin A. Prager, “Using Stock Price Data to Measure the Effects of Regulation: The Interstate Commerce Act and the Railroad Industry,” RAND Journal of Economics 20 (Summer 1989): 280–290. 7. The density of a route refers to the volume of goods transported per route mile. 8. Kenneth D. Boyer, “Equalizing Discrimination and Cartel Pricing in Transport Rate Regulation,” Journal of Political Economy 89 (April 1981): 270–286. 9. Kenneth D. Boyer, “The Costs of Price Regulation: Lessons from Railroad Deregulation,” RAND Journal of Economics 18 (Autumn 1987): 408–416. 10. However, some other studies find that deregulation caused average rail rates to fall. For example, a 16.5–18.5 percent reduction is reported by Christopher C. Barnekov and Andrew N. Kleit, “The Efficiency Effects of Railroad Deregulation in the United States,” International Journal of Transport Economics 17 (1990): 21–36. One reason for such variation in findings is that studies differ in how they weight rail rates for different commodities when calculating an average rate. 11. Wesley W. Wilson, “Market-Specific Effects of Rail Deregulation,” Journal of Industrial Economics 42 (March 1994): 1–22. 12. John W. Snow, “The Problem of Motor Carrier Regulation and the Ford Administration’s Proposal for Reform,” in Paul W. MacAvoy and John W. Snow, eds., Regulation of Entry and Pricing in Truck Transportation (Washington, DC: American Enterprise Institute, 1977), pp. 3–43. 13. Both survey data are from Moore, “Rail and Trucking Deregulation.” 14. John S. Ying and Theodore E. Keeler, “Pricing in a Deregulated Environment: The Motor Carrier Experience,” RAND Journal of Economics 22 (Summer 1991): 264–273. 15. Richard Beilock and James Freeman, “Effect of Removing Entry and Rate Controls on Motor Carrier Levels and Structures,” Journal of Transport Economics and Policy 21 (May 1985): 167–188. 16. Roger D. Blair, David L. Kaserman, and James T. McClave, “Motor Carrier Deregulation: The Florida Experiment,” Review of Economics and Statistics 68 (February 1986): 159–184. 17. Richard C. 
Levin, “Surface Freight Transportation: Does Rate Regulation Matter?” Bell Journal of Economics 9 (Spring 1978): 18–45; Anne F. Friedlaender and Richard Spady, Freight Transport Regulation: Equity, Efficiency, and Competition in Rail and Trucking Industries (Cambridge, MA: MIT Press, 1980); Clifford Winston, “The Welfare Effects of ICC Rate Regulation Revisited,” Bell Journal of Economics 12 (Spring 1981): 232–244; and Ronald R. Braeutigam and Roger G. Noll, “The Regulation of Surface Freight Transportation: The Welfare Effects Revisited,” Review of Economics and Statistics 56 (February 1984): 80–87. 18. Winston et al., Economic Effects of Surface Freight Deregulation. 19. The study reports large welfare gains even after netting out the loss in carriers’ profits. Large estimates of the welfare gain from deregulation are also reported in Barnekov and Kleit, “Efficiency Effects.” 20. Thomas Gale Moore, “Rail and Truck Reform: The Record So Far,” Regulation (November/December 1983): 33–41.

21. Keeler, Railroads, Freight, and Public Policy. 22. B. Starr McMullen and Linda R. Stanley, “The Impact of Deregulation on the Production Structure of the Motor Carrier Industry,” Economic Inquiry 26 (April 1988): 299–316; and Thomas Gale Moore, “Unfinished Business in Motor Carrier Deregulation,” Regulation 7 (Summer 1991): 49–57. 23. Nancy L. Rose, “Labor Rent Sharing and Regulation: Evidence from the Trucking Industry,” Journal of Political Economy 95 (December 1987): 1146–1178. 24. John R. Felton, “The Impact of Rate Regulation upon ICC-Regulated Truck Back Hauls,” Journal of Transport Economics and Policy 15 (September 1981): 253–267. 25. Moore, “Unfinished Business.” 26. Noel D. Uri and Edward J. Rifkin, “Geographic Markets, Causality and Railroad Deregulation,” Review of Economics and Statistics 67 (August 1985): 422–428. 27. The statistics in this paragraph are drawn from Robert D. Willig and William J. Baumol, “Using Competition as a Guide,” Regulation 1 (1987): 28–35. 28. Robert E. Gallamore and John R. Meyer, American Railroads: Decline and Renaissance in the Twentieth Century (Cambridge, MA: Harvard University Press, 2014), p. 255. 29. Willig and Baumol, “Using Competition,” p. 31. 30. Thomas Gale Moore, “In Memoriam,” Regulation 17 (1994): 10. 31. General background references used for our discussion of airline regulation are Theodore E. Keeler, “The Revolution in Airline Regulation,” in Leonard W. Weiss and Michael W. Klass, eds., Case Studies in Regulation: Revolution and Reform (Boston: Little, Brown, 1981), pp. 53–85; Breyer, Regulation and Its Reform; and Elizabeth E. Bailey, David R. Graham, and Daniel P. Kaplan, Deregulating the Airlines (Cambridge, MA: MIT Press, 1985). 32. The first academic study to propose deregulation was Lucile S. Keyes, Federal Control of Entry into Transportation (Cambridge, MA: Harvard University Press, 1951). 33. Breyer, Regulation and Its Reform. 34. Ibid. 35. Thomas Gale Moore, “U.S. Airline Deregulation: Its Effects on Passengers, Capital, and Labor,” Journal of Law and Economics 29 (April 1986): 1–28. 36. Ibid. 37. For a demonstration of this suboptimal mix, see George W. Douglas and James C. Miller III, Economic Regulation of Domestic Air Transport: Theory and Policy (Washington, DC: Brookings Institution Press, 1974). 38. Ibid. 39. Moore, “U.S. Airline Deregulation.” 40. Scott McCartney, “The Golden Age of Flight,” Wall Street Journal, July 22, 2010 (available at http://www.wsj.com/articles/SB10001424052748704684604575380992283473182). 41. Alfred E. Kahn, “Deregulation and Vested Interests: The Case of Airlines,” in Roger G. Noll and Bruce M. Owen, eds., The Political Economy of Deregulation (Washington, DC: American Enterprise Institute, 1983), p. 140. 42. Donald W. Koran, “The Welfare Effects of Airline Fare Deregulation in the United States,” Journal of Transport Economics and Policy 18 (May 1983): 177–189. 43. Alex Mayyasi, “Can Airlines Make Money?” Priceonomics, 2014 (http://priceonomics.com/can-airlines-make-money). 44. Douglas W. Caves, Laurits R. Christensen, Michael W. Tretheway, and Robert J. Windle, “An Assessment of the Efficiency Effects of U.S. Airline Deregulation via an International Comparison,” in Elizabeth E. Bailey, ed., Public Regulation: New Perspectives on Institutions and Policies (Cambridge, MA: MIT Press, 1987), pp. 285–320. 45. Nancy L. Rose, “Fear of Flying? Economic Analyses of Airline Safety,” Journal of Economic Perspectives 6 (Spring 1992): 75–94. 46.
Trefis Team, “How M&A Has Driven the Consolidation of the US Airline Industry over the Last Decade?” Forbes, May 4, 2016 (available at http://www.forbes.com/sites/greatspeculations/2016/05/04/how-ma-has-driven-the-consolidation-of-the-

us-airline-industry-over-the-last-decade/#68dfdf9b211d). 47. Between November 2015 and October 2016, the shares of U.S. domestic airline passenger miles supplied by American, Southwest, Delta, and United Airlines were 19.3, 18.3, 16.9, and 14.5 percent, respectively (U.S. Bureau of Transportation Statistics, “Airline Domestic Market Share November 2015–October 2016,” http://www.transtats.bts.gov). Revenue passenger miles are the product of the number of revenue-paying passengers and the number of miles flown. 48. Statement of John H. Anderson, Jr., director, Transportation Issues, U.S. General Accounting Office, before the Subcommittee on Aviation, Committee on Transportation and Infrastructure, U.S. House of Representatives, April 23, 1998. 49. “Happiness Is a Cheap Seat,” The Economist, February 4, 1989, pp. 68, 71. 50. Jan K. Brueckner and W. Tom Whalen, “The Price Effects of International Airline Alliances,” Journal of Law and Economics 43 (October 2000): 503–546. 51. See chapter 8 for a more general treatment of predatory pricing. 52. Alfred E. Kahn, “Surprises of Airline Deregulation,” American Economic Review 78 (May 1988): 316–322. 53. Steven A. Morrison, “New Entrants, Dominated Hubs, and Predatory Behavior,” statement made before the Subcommittee on Antitrust, Business Rights, and Competition, Committee on the Judiciary, U.S. Senate, April 1, 1998. 54. This encounter is not necessarily typical. When Southwest Airlines entered United’s Los Angeles–Sacramento route, it set a fare of $56, well below United’s fare of around $100. Three years later, both carriers were still serving that route and fares remained low. In the case of Southwest’s entry into the Baltimore–Cleveland route, intense fare competition did induce exit, but this time by the incumbent firm, US Airways (ibid.). 55. Whether the most appropriate measure of cost is average cost, average variable cost, or marginal cost is a difficult issue that is addressed in chapter 8. 56. As United’s price declines, the demand for Frontier’s service declines, and so its average cost increases, assuming that average cost declines as output increases. 57. An incumbent firm like United and a new competitor like Frontier are likely to have different cost functions, in part because an incumbent’s system is typically hub-and-spoke while a new competitor’s system is often point-to-point. However, it is not clear which system should produce lower costs. While there appear to be economies from the hub-and-spoke network, Southwest, which operates more of a point-to-point system, has lower costs than the major airlines. See Severin Borenstein, “The Evolution of U.S. Airline Competition,” Journal of Economic Perspectives 6 (Spring 1992): 45–73. 58. This section is based on Severin Borenstein, “Rapid Price Communication and Coordination: The Airline Tariff Publishing Case (1994),” in John E. Kwoka Jr. and Lawrence J. White, eds., The Antitrust Revolution, 3rd ed. (New York: Oxford University Press, 1998), pp. 310–326. 59. Ibid., p. 314. 60. Bureau of Transportation Statistics, United States Department of Transportation (http://www.transtats.bts.gov/airports.asp?pn=1).

17 Economic Regulation in the Energy Sector

At least since the Industrial Revolution, the energy sector has played a vital role in the world economy. Energy is an essential input in many production processes. To illustrate, electricity is employed to power a wide variety of devices, including computers and lighting equipment. Oil in the form of gasoline is employed to power trucks and automobiles, and in the form of heating oil it is used to warm residential and business establishments. Natural gas is also employed to warm residential and commercial structures and to power industrial machinery, including electricity-generating equipment. Regulation has been imposed on the energy sector for several reasons, including the natural monopoly elements of portions of the sector, the vital role that energy plays in modern economies, and the fact that many consumers spend a substantial portion of their disposable income on energy products. Regulation has taken the form of price controls in many instances. However, controls have also been placed on the production and the importation of oil. The purpose of this chapter is to discuss the primary forms of regulation that have been imposed on the energy sector and to analyze the effects of the regulation. The discussion begins with a review of regulation in the electricity sector. The nature and extent of regulation in this sector varies with the prevailing industry structure. In some jurisdictions, a single regulated entity generates and distributes electricity. In other jurisdictions, many independent and largely unregulated firms compete to supply electricity to the regulated enterprise that delivers electricity to consumers. The advantages and disadvantages of these two distinct industry structures are discussed, and the relative performance of the two structures is assessed. Regulation in the oil sector is limited today. Historically, though, the U.S. government has regulated the price of crude oil, restricted the supply of domestic oil, and limited oil imports. The discussion in this chapter concludes with a review of the reasons for, and the effects of, these regulations. The historical review is instructive in part because it further illustrates the problems that can arise when government regulations, rather than market forces, determine industry outcomes. Regulation in the Electricity Sector Historical, Technological, and Regulatory Background The supply of electricity entails four main functions: generation, transmission, distribution, and retailing.1 Electricity is generated using several different fuel sources, including fossil fuels (primarily coal and natural gas), nuclear fuels, and falling water. The transmission function entails the transport of electricity (via high-voltage transmission lines) from the generating sites to geographically dispersed distribution centers. The distribution function entails the transport of electricity (via low-voltage transmission lines) from the

distribution centers to residential, commercial, and industrial customers. The retail function includes measuring the amount of electricity that individual customers consume and billing customers for this consumption. Historically, all these functions were undertaken in a given geographic region by a single vertically integrated company. In particular, a regulated electric utility would own generating plants, transmission lines, and the local distribution network and would produce power for delivery to final customers. A utility typically was awarded an exclusive franchise to provide service to customers in a specified geographic region. The relevant state regulatory commission would set the prices the utility could charge for electricity. The utility was obligated to provide reliable service to all customers at the specified rates. Most state commissions initially employed rate of return regulation (RORR), as described in chapter 13. Some commissions subsequently experimented with forms of performance-based regulation, as reviewed in chapter 13. Peak-load prices were employed by many commissions (recall the discussion in chapter 12) to limit the need to expand capacity to serve the demand for electricity at times of peak demand. Although the vertically integrated nature of production may have facilitated some coordination between the generation and distribution functions of electricity supply, economies of scope between these two functions were not deemed to be substantial. Indeed, some companies believed they could generate electricity at lower cost than the local regulated utility and petitioned for the right to generate electricity and sell it to the utility. The Public Utility Regulatory Policies Act of 1978 (PURPA) gave “qualifying facilities” the right to sell power to vertically integrated utilities. This led to a sizable increase in the number of nonutility power generators (that is, companies that produce power but do not distribute it to final consumers). PURPA helped introduce competition into electricity generation. However, it did not allow nonutility power generators to contract directly with retail customers. PURPA also required nonutility generators to sell their electricity to the local utility. The Energy Policy Act of 1992 rescinded the second limitation but maintained the first. The act also empowered the Federal Energy Regulatory Commission (FERC) to order a utility to transmit power for others over its regional transmission lines. In 1996, FERC Order 888 mandated that owners of regional transmission networks act as common carriers of electric power. In particular, a network owner was required to provide interconnection services between independent power producers and wholesale buyers on the same terms and conditions that it provides the services to itself. These changes laid the groundwork for vertical unbundling in the electricity sector, whereby different entities can conduct different elements of electricity supply. Consequently, a retail customer can purchase electricity directly from an independent generator and contract with the local utility to transport and deliver the electricity. This arrangement permits competition to develop in the generation of electricity while maintaining regulation over the transmission and distribution of electricity (where economies of scale are more pronounced). Introducing competition into a market raises many questions. For instance, should any restrictions be placed on entry into the market?
How will the activities among multiple industry actors be coordinated? In particular, should any rules be established that govern the interaction between the incumbent regulated utility and new industry suppliers? Restructuring in California The State of California was among the first states to grapple with these challenging questions.2 Historically, retail electricity prices in California were relatively high, so the state sought new ways to reduce these prices

for its citizens.3 Because it was a leader in restructuring its electricity sector, California encountered challenging questions about the optimal manner in which to organize and regulate its electricity sector that others had not encountered before. Different groups had different ideas about how to transition from the historical vertically integrated supply model with no competition among utilities to a model with substantial vertical separation and competition among generators. Paul Joskow describes the resulting conflict that arose: The debates over the design of these wholesale market institutions in California in 1996 and 1997 … were contentious and highly politicized, reflecting perceptions by various interest groups about how different wholesale market institutions would advance or constrain their interest and, in my view, an inadequate amount of humility regarding uncertainties about the performance attributes of different institutional arrangements. The discussion of alternative institutions was polluted by an unfortunate overtone of ideological rhetoric that attempted to characterize the debate about wholesale market institutions as one between “central planners” and “free market” advocates. The market design process in California in 1997 and 1998 also demonstrates how market design by committee, reflecting interest group compromises and mixing and matching pieces of different market models, can lead to a system with the worst attributes of all of them.4

After several years of discussion, the California state legislature approved a four-year transition plan that went into effect on March 31, 1998. Under this plan, consumers continued to have the option to buy electricity from their existing utility distribution company (UDC), which was Pacific Gas & Electric, Southern California Edison, or San Diego Gas & Electric. For up to four years, the retail price of electricity purchased from the UDCs was capped at 90 percent of the regulated retail rate in 1996. Although the new rates were lower than historical rates, they were still substantially above the prevailing wholesale price of electricity, so the new rates were deemed adequate to ensure a normal profit for the UDCs. The transition plan also allowed consumers to purchase electricity from a nonutility electric service provider (ESP). In addition, the plan imposed several restrictions on the UDCs. Some of the restrictions were designed to reduce the role of the UDCs in generating power so as to facilitate increased competition in this sector of the industry. Specifically, the UDCs were ordered to divest at least half of their fossil-fuel generating capacity. They ultimately chose to divest all of this capacity, retaining only their nuclear and hydroelectric plants.5 The UDCs also were required to meet all demand for electricity not supplied by the ESPs. The restructuring tasked the California Power Exchange (CALPX) with running public auctions for wholesale electricity. Buyers would report to CALPX the amounts they were willing to pay for electricity one hour (or one day) after the auction concluded. Sellers would similarly report the compensation they required to supply specified amounts of electricity at the appointed time. CALPX would then determine the wholesale prices that equated the demand for electricity and the supply of electricity in the day-ahead and hour-ahead markets. Independent power producers could either sell their power through CALPX or directly to consumers, but the UDCs were required to sell their power to CALPX. Furthermore, during the transition phase, the UDCs were required to acquire all of their electricity from the day-ahead and hour-ahead markets run by CALPX. Specifically, the UDCs could not enter into long-term supply contracts with electricity producers. Consequently, the UDCs were forced to bear the risk associated with wholesale electricity prices that could conceivably change substantially as demand and supply conditions changed. This restriction was implemented in part to facilitate the development of CALPX and in part to limit any market power the UDCs might be able to exercise in the developing markets for wholesale electricity. The restructuring plan also created the California Independent System Operator, whose objective was to provide nondiscriminatory transmission access to all generation facilities and to manage transmission congestion. Thus, the California Independent System Operator would enable any generator of electricity to have its electricity transmitted to any buyer of the electricity. Although the California Independent System

Operator was owned by the UDCs, it was structured as an independent entity to avoid preferential treatment of UDCs in the transmission of power. The vertical unbundling complicated the monthly bill that retail customers received for their electricity purchase, as the sample bill in table 17.1 reveals. The bill included separate entries for the wholesale price of electricity ($14.40), for transmitting the electricity from the generator to the relevant distribution center ($2.40), and for transporting the electricity from the distribution center to the customer ($21.60). One other substantial fee is the competition transition charge ($20.40). This fee was employed to compensate the UDCs for past investments (in divested generating facilities) that were not covered by other retail charges.

Table 17.1
Sample Electricity Bill in California, 1998

Bill Item                                   Use Factor           Cost ($)
California Power Exchange (energy charge)   600 kwh × 0.024      14.40
Transmission charges                        600 kwh × 0.004      2.40
Distribution charges                        600 kwh × 0.036      21.60
Public purpose programs                     600 kwh × 0.004      2.40
State regulatory fee                        600 kwh × 0.0001     0.06
Nuclear decommissioning                     600 kwh × 0.0005     0.30
Trust transfer amount                       600 kwh × 0.0161     9.66
Competition transition charge               600 kwh × 0.034      20.40
10 percent rate reduction                   —                    −7.12
Total                                                            64.10

Source: www.cpuc.ca.gov.
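The arithmetic behind table 17.1 is straightforward, and a minimal sketch of it appears below. The per-kilowatt-hour rates and the 600 kwh usage come from the table itself; the 10 percent rate reduction is entered as the fixed credit shown on the bill (it equals roughly 10 percent of the sum of the other charges).

```python
# Reproduce the arithmetic behind the sample 1998 California bill in table 17.1.
# Rates ($ per kWh) and usage are taken from the table; the 10 percent rate
# reduction is entered as the fixed credit shown on the bill.

USAGE_KWH = 600

RATES = {
    "California Power Exchange (energy charge)": 0.024,
    "Transmission charges": 0.004,
    "Distribution charges": 0.036,
    "Public purpose programs": 0.004,
    "State regulatory fee": 0.0001,
    "Nuclear decommissioning": 0.0005,
    "Trust transfer amount": 0.0161,
    "Competition transition charge": 0.034,
}

RATE_REDUCTION = -7.12  # 10 percent rate reduction, shown as a credit

def compute_bill(usage_kwh: float) -> float:
    total = 0.0
    for item, rate in RATES.items():
        charge = usage_kwh * rate
        print(f"{item:45s} {charge:7.2f}")
        total += charge
    total += RATE_REDUCTION
    print(f"{'10 percent rate reduction':45s} {RATE_REDUCTION:7.2f}")
    print(f"{'Total':45s} {total:7.2f}")
    return total

compute_bill(USAGE_KWH)  # prints the line items and a total of 64.10
```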

California Energy Crisis, 2000–2001 With the exception of some short-lived spikes in the wholesale price of electricity, the new regulatory structure worked reasonably well initially. Price spikes were not uncommon in other wholesale electricity markets, such as the ones that operate in the Pennsylvania–New Jersey–Maryland region and the New England region. However, the spikes in California ultimately proved to be of unusually long duration. The pronounced, sustained spikes in wholesale electricity prices, combined with certain flaws in regulatory policy design, caused a financial disaster that led California to abandon many of its restructuring and deregulation efforts. Serious problems began to emerge in June 2000, when the wholesale price of electricity on CALPX rose sharply, ascending to more than $150 per megawatt-hour (mwh). The price continued to be elevated for almost a year, often exceeding $350 per mwh (see figure 17.1). During this period, retail prices were capped at $65 per mwh, so the UDCs experienced pronounced financial losses—on the order of $50 million per day! FERC, which has the authority to set “just and reasonable” wholesale electricity prices, did not intervene effectively. The UDCs were becoming insolvent and stopped paying for the electricity they purchased. Producers began to curtail the electricity they supplied, fearing that they would not be paid. However, the U.S. Department of Energy mandated that the producers continue to supply electricity to ensure retail customers were not left without electricity. CALPX ceased operations and subsequently declared bankruptcy. The State of California suspended the retail competition program and assumed key market functions, including power procurement, and it regulated retail prices so as to recoup the cost of the power it purchased during the crisis. By the time the crisis subsided (because the wholesale price of

electricity declined substantially), retail consumers were paying 40 percent more for electricity than they did before restructuring. In addition, the state had incurred related expenses in excess of $40 billion.6
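The reported scale of these losses follows from the gap between the wholesale prices the UDCs paid and the capped retail price they could charge. The sketch below illustrates the arithmetic; the sustained wholesale price and the daily purchase volume are hypothetical round numbers chosen only to match the order of magnitude reported above, not figures from the text.

```python
# Illustrative arithmetic for the UDCs' losses during the crisis.
# The $65/MWh retail cap comes from the text; the sustained wholesale price and
# the daily purchase volume are hypothetical round numbers.

RETAIL_CAP = 65.0                 # $/MWh, capped retail price
WHOLESALE_PRICE = 315.0           # $/MWh, hypothetical sustained wholesale price
DAILY_PURCHASES_MWH = 200_000.0   # hypothetical daily volume bought at wholesale

daily_loss = (WHOLESALE_PRICE - RETAIL_CAP) * DAILY_PURCHASES_MWH
print(f"${daily_loss:,.0f} per day")  # $50,000,000 per day at these numbers
```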

Figure 17.1 California Wholesale Day-Ahead Price of Electricity, April 1998–April 2002 Source: Christopher Weare, The California Electricity Crisis: Causes and Policy Options (San Francisco: Public Policy Institute of California, 2003). Available at http://www.ppic.org/content/pubs/report/ R_103CWR.pdf.

Causes of the crisis The problems in California stemmed primarily from a regulatory policy that was not well structured to reflect the prevailing industry conditions. The UDCs were required to divest much of their generating capacity but were not permitted to procure electricity via long-term contracts. Furthermore, the price the UDCs could charge for electricity was capped at a level that did not vary as the wholesale price of electricity changed. Consequently, the UDCs faced a severe threat of financial shortfall if the wholesale price of electricity on the newly created CALPX were to increase substantially. The substantial increase in the wholesale price of electricity in California in 2000 reflected the combined influence of several factors. First, the cost of producing electricity increased sharply due to significant increases in the price of natural gas and in the price of nitrogen oxide emission credits (which producers were required to secure in order to emit pollutants during the process of generating electricity). Second, the demand for electricity increased considerably in the late 1990s, in part due to unusually high summer temperatures in California. Third, imports of electricity, especially hydroelectric power from the Pacific Northwest, declined. Fourth, some large generators may have deliberately reduced available capacity so as to cause the wholesale price of electricity to rise. Strategic withholding of electricity The incentive to strategically reduce available capacity (by claiming production units must undergo maintenance, for example) reflects the following considerations. As noted above, the wholesale price of electricity was set in CALPX to equate demand and supply. Therefore, as supply diminishes, the market-clearing price increases. As a result, a wholesale supplier of electricity faces a tradeoff when deciding how

much electricity to supply to the market. If the firm withholds some electricity from the market, it receives no compensation for the withheld electricity. However, the firm receives a higher price in equilibrium for the electricity that it does supply to the market. Many large wholesale suppliers of electricity operate several generating units. By declining to operate some of its units, a supplier can secure a higher price for the electricity it supplies from its other units. Just as monopolists typically secure higher profit by restricting their output below competitive levels, large wholesale suppliers of electricity (whose output has a substantial effect on the equilibrium price of electricity) often can find it profitable to intentionally withhold some electricity from the market. Studies suggest that a significant portion of the increase in wholesale price of electricity in the summer of 2000 likely resulted from strategic withholding of electricity supplies.7 Figure 17.2 helps explain why the strategic withholding of electricity may be profitable.8 The figure considers a setting where the demand for electricity (D) is insensitive to the wholesale price of electricity. This is often the case in practice, in part because the retail price of electricity typically does not respond to short-term variation in the wholesale price. The figure also depicts two industry supply curves for electricity. Curve S1 denotes the industry supply curve when the supplier in question does not withhold any electricity from the wholesale market. Curve S2 denotes the industry supply curve when the supplier withholds electricity from one of its two plants. The shapes of these curves reflect the fact that suppliers often can generate electricity at constant marginal cost (c) until output approaches capacity, at which point marginal cost increases rapidly with output.

Figure 17.2 Strategic Withholding of Electricity

Source: John Kwoka and Vladlena Sabodash, “Price Spikes in Energy Markets: ‘Business by Usual Methods’ or Strategic Withholding?” Review of Industrial Organization 38 (May 2011): 285–310.

For simplicity, assume that the supplier can either withhold none of the electricity that its two identical plants can generate or it can withhold X, the maximum amount of electricity each plant can generate. If the supplier does not withhold any electricity, the market-clearing price is P1 in figure 17.2. The market-clearing price increases to P2 (where the market supply curve, S2, intersects the market demand curve, D) if the supplier withholds electricity from one of its plants. Consequently, the supplier’s profit is [P1 − c]2X if it does not withhold electricity and [P2 − c]X if it does. Therefore, the supplier secures higher profit by withholding electricity if

[P2 − c]X > [P1 − c]2X.  (17.1)
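A minimal numerical sketch of this comparison may be helpful. The marginal cost, plant capacity, and market-clearing prices below are hypothetical numbers chosen only to illustrate how a steep industry supply curve can make withholding profitable; they are not drawn from the California data.

```python
# Illustrative check of inequality 17.1: is withholding one plant profitable?
# All numbers are hypothetical; they are chosen so that withholding capacity
# moves the market-clearing price up the steep portion of the supply curve.

def profit_no_withholding(p1: float, c: float, x: float) -> float:
    # Both identical plants run, each selling X at the lower clearing price P1.
    return (p1 - c) * 2 * x

def profit_withholding(p2: float, c: float, x: float) -> float:
    # One plant is withheld; the other sells X at the higher clearing price P2.
    return (p2 - c) * x

c = 30.0    # constant marginal cost ($/MWh)
x = 500.0   # capacity of each plant (MWh)
p1 = 40.0   # clearing price with no withholding
p2 = 120.0  # clearing price when one plant is withheld

pi_all = profit_no_withholding(p1, c, x)  # (40 - 30) * 1000 = 10,000
pi_wh = profit_withholding(p2, c, x)      # (120 - 30) * 500 = 45,000
print(pi_wh > pi_all)  # True: inequality 17.1 holds for these numbers
```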

Inequality 17.1 holds if the area of rectangle D exceeds the area of rectangle B in figure 17.2. The former area is the increase in profit the supplier secures from selling the electricity generated by its first plant when it withholds electricity from its second plant (and thereby increases the wholesale price of electricity from P1 to P2). Area B is the profit the supplier forgoes on the sale of electricity from its second plant when it withholds the electricity that plant could have generated. As inequality 17.1 and figure 17.2 suggest, strategic withholding of electricity will tend to be more profitable for a supplier if the withholding serves to increase the wholesale price of electricity substantially (so the relevant industry supply curve is steeply sloped) and if the supplier serves a large portion of the base load demand for electricity. This may be the case, for instance, when many of the supplier’s plants continue to generate electricity even as one or two of its plants are taken off line. Thus, strategic withholding tends to be most profitable for firms that deliver a large share of the industry supply of electricity. Retail price cap The $65/mwh cap on retail electricity prices in California prevented retail prices from increasing as wholesale prices of electricity increased. Higher retail prices would have signaled to consumers the social value of reducing electricity consumption. Absent these price signals, though, consumers had little financial incentive to conserve the electricity that was in short supply. Furthermore, by mid-2000, only 12 percent of demand had switched to the ESPs, so the UDCs were required to serve most of California’s citizens at below-cost prices. The prevailing high wholesale prices for electricity even provided the ESPs with a financial incentive to “return” customers to the utilities, which imposed additional losses on the utilities. An ESP that had agreed to supply electricity to a customer at, say, $40/mwh could “buy out” the contract by paying the customer $25/mwh for the contracted level of electricity. This amount is the difference between the retail price at which the customer could purchase electricity from the utility ($65/mwh) and the contractual price of $40. The ESP could then sell the electricity that it no longer had to supply to this customer on the spot market for hundreds of dollars per mwh. Implications of the California experience for future regulatory policy A key cause of the unsuccessful restructuring in California was poor regulatory policy design. By requiring the UDCs to divest much of their generating resources, precluding them from procuring electricity via long-term contracts, and capping the retail prices of electricity, California left its UDCs vulnerable to financial ruin should the wholesale price of electricity rise substantially. As noted in chapter 13, price cap regulation

plans can include Z factor adjustments that allow retail prices to rise if the prices of key inputs increase substantially for reasons that are beyond the control of the regulated enterprise. California did not incorporate such adjustments when it restructured its electricity sector, and so the restructuring unnecessarily encountered severe difficulties. With more appropriate regulatory policy design, many of these difficulties could have been avoided. Indeed, other states and countries since have managed to avoid many of these serious problems. The other factor that likely contributed to the problems in California—the market power of wholesale electricity producers—may be more difficult to change. Market power in the electricity sector stems from at least three sources. First, electricity is not readily stored in large quantities. Consequently, demand must be met with just-in-time production by the generating capacity that is available at the time. In the absence of inventories that can be called on to meet demand, large suppliers of electricity often have substantial power to raise the prevailing price of electricity by withholding supply. Second, many electricity networks face congestion problems due to limited transmission capacity, which limits the ability of remote suppliers to meet demand. Third, the short-run demand for electricity tends to be highly price inelastic. Consequently, changes in supply can cause substantial changes in equilibrium prices of electricity. When large electricity suppliers have substantial ability to affect the prevailing wholesale price of electricity, they should be expected to exercise their market power. The resulting supracompetitive wholesale prices will inevitably lead to higher retail prices of electricity, to the detriment of consumers. This potential drawback to industry restructuring must be weighed carefully against the benefits that are likely to arise from increased competition among wholesale suppliers of electricity. Effects of Restructuring in the Electricity Sector The ultimate benefits from restructured electricity sectors remain to be established. Fortunately, variation in restructuring across U.S. states will facilitate empirical study of its effects. As of 2010, industry restructuring was being actively pursued in fourteen states, had been suspended in seven states (including California), and was not being actively pursued in the remaining states (see figure 17.3).

Figure 17.3 Status of Electricity Restructuring by U.S. State, September 2010 Source: U.S. Energy Information Administration, Independent Statistics & Analysis, Status of Electricity Restructuring by State (http://www.eia.gov/electricity/policies/restructuring/restructure_elect. html).

One potential benefit of industry restructuring is that increased competition might compel electricity generators to operate more efficiently. Initial evidence suggests that some modest gains have been secured in this regard. One study finds that electricity generators in jurisdictions where the electricity sector has been restructured have reduced their labor and nonfuel costs by between 3 and 5 percent relative to their counterparts in unrestructured jurisdictions.9 Another study that focuses on nuclear generators finds a 10 percent increase in operating performance (primarily due to reductions in the duration of reactor outages) in restructured jurisdictions.10 From the perspective of consumers, the key question is whether industry restructuring has secured lower retail prices of electricity. Little evidence suggests that this is the case. In fact, a recent study finds that retail electricity prices generally increased more rapidly in states with restructured electricity sectors than in other states between 1998 and 2007. Between 2007 and 2012, though, prices in the states with restructured electricity sectors declined, while prices in other states increased, as depicted in figure 17.4. For this entire period (1998–2012), there was no statistically significant difference between the rates at which prices changed in states with and without restructured electricity sectors.11 This finding is consistent with a review of earlier studies of the effects of restructuring in the electricity sector. The review concludes that “there is little reliable and convincing evidence that consumers are better off as a result of the restructuring of the U.S. electric power industry.”12

Figure 17.4 U.S. Average Retail Electricity Prices and Natural Gas Prices, 1990–2012 Source: Figure 6 in Severin Borenstein and James Bushnell, “The U.S. Electricity Industry after 20 Years of Restructuring,” University of California–Berkeley Energy Institute at HAAS Working Paper 252R, May 2015.

The limited impact of industry restructuring on electricity prices likely reflects the countervailing effects in play. On one hand, industry restructuring may have induced some modest increases in operating efficiencies. On the other hand, the market power of generators may permit them to increase the wholesale price of electricity in restructured markets above the competitive price of electricity. Figure 17.4 illustrates one effect of industry restructuring that merits further discussion. Observe that the average retail price of electricity tends to track the price of natural gas more closely in states with restructured electricity sectors than in other states. This is likely the case because in restructured electricity sectors, the wholesale price of electricity reflects the cost of the marginal supplier, which is often a generating plant powered by natural gas.13 Thus, the retail price of electricity tended to increase relatively rapidly in states with restructured electricity sectors between 1998 and 2007, when the price of natural gas was increasing. In contrast, after 2007 when the price of natural gas was declining, the average retail price of electricity declined in states with restructured electricity sectors but not in other states.14 This observation implies that one long-term effect of industry restructuring may be to increase the sensitivity of the retail price of electricity to the price of natural gas. Distributed Generation Industry restructuring is not the only major change in the electricity sector in recent years. Distributed generation of electricity is growing in popularity in many jurisdictions. Distributed generation of electricity can be defined as the “generation of electricity from sources that are near the point of consumption, as opposed to centralized generation sources such as large utility-owned power plants.”15 As indicated in Figure 17.5, the distributed generation of electricity (DG) accounts for more than a third of all electricity production in some countries.

Figure 17.5 Distributed Generation of Electricity around the World Source: World Alliance for Decentralized Energy, 2017 (http://www.localpower.org/deb_where.html).

DG is popular in part because it has the potential to reduce electricity distribution costs by moving generation sites closer to where the electricity is consumed. DG may also enhance system reliability in some cases by ensuring that electricity is produced by independent entities at multiple sites. In addition, DG can sometimes limit the amount of capacity required at the primary production site and potentially reduce externalities (e.g., carbon emissions) associated with electricity production. The potential to reduce carbon emissions is particularly pronounced when DG entails the use of solar (photovoltaic) panels. Solar panels can generate a great deal of electricity in regions with extensive sunshine throughout the year. In states like California, Hawaii, Arizona, and Florida, many homeowners generate electricity by installing solar panels on their rooftops. In many states homeowners are billed at the prevailing price of electricity for the difference between the total amount of electricity they consume and the amount produced by their solar panels. This billing arrangement, commonly known as “net metering,” implies that consumers are being compensated for the electricity they produce at a rate equal to the prevailing retail price of electricity.16 This rate of compensation for DG generally differs from the welfare-maximizing rate. The retail price of electricity typically reflects the utility’s average cost of supplying electricity to ensure the utility’s financial solvency. Average cost exceeds marginal cost in the presence of scale economies. Therefore, DG compensation that reflects the utility’s average cost typically will exceed the utility’s marginal cost of supplying electricity. Consequently, under net metering, the utility is effectively paying consumers more for the electricity they generate than the additional cost the utility would incur if it were to generate that electricity itself. Net metering thereby tends to induce more than the welfare-maximizing level of investment in DG capacity and fails to induce the pattern of electricity production that minimizes industry production costs.17 Potential overinvestment in solar DG is compounded by the fact that solar panels are an intermittent source of electricity. Solar panels do not produce electricity during the nighttime, and their supply of electricity declines substantially when clouds block the sun’s rays. A utility must be prepared to deliver the total amount of electricity that consumers demand whenever they demand it. Consequently, the utility often cannot reduce its production capacity substantially in response to increased intermittent DG capacity, because the DG capacity cannot be relied on to generate electricity whenever it is needed.18 Some states have begun to recognize the drawbacks to net metering and have taken steps to reduce the net compensation that customers receive for generating electricity with solar panels on their rooftops. In 2016, California introduced a one-time interconnection fee (of approximately $100) that residential customers must pay to the utility when they first install solar panels.19 In 2015, Nevada announced a new policy that will substantially reduce solar DG compensation, moving it toward the utility’s cost of procuring electricity rather than the prevailing retail price of electricity.20 Hawaii closed its net metering program to new customers in 2015, replacing it with a program that offers substantially less generous compensation for solar DG.
The new compensation reflects the cost the utility avoids by not having to generate the electricity supplied by solar DG.21 The reduced compensation for solar DG is designed in part to limit the utility’s expenses and thus limit the charges that must be imposed on electricity customers who do not install solar panels. Future Regulation in the Electricity Sector

The electricity sector and its regulation seem likely to undergo further change in the coming years. To illustrate, time-varying DG compensation may become popular as smart meters that measure the time at which electricity is produced and consumed become more widely deployed. Industry restructuring may also expand as the problems in California become more distant memories. That restructuring does not appear to have increased consumer welfare substantially to date does not imply that restructuring was a mistake. As we know from experience in the airline industry, for example, increased competition can produce benefits that even industry experts could not anticipate. Additional benefits may arise over time as expanded competition encourages innovation and reduces the market power of electricity generators. Economic Regulation in the Oil Sector In contrast to retail electricity prices, the retail prices of gasoline and other refined petroleum products are not presently regulated in the United States. However, the price of crude oil has been regulated historically. Furthermore, both domestic production and imports of crude oil have been regulated. We review some of these historic regulations to demonstrate the effects of price and quantity controls in sectors with many suppliers. We begin by reviewing some basic characteristics of the oil sector. Industry Background The supply of oil entails three primary functions: production, refining, and distribution. Production involves discovering underground reservoirs of crude oil and extracting the oil from these reservoirs.22 After extraction, the crude oil is refined into usable products, such as gasoline and heating oil, and then distributed to retailers and consumers. Many firms in the oil sector are vertically integrated, performing all production, refining, and distribution functions. There are many suppliers of oil, both domestic and foreign. Thus, regulation in the oil sector is not motivated primarily by natural monopoly concerns. Instead, historical price controls reflected a concern with keeping important refined petroleum products (like gasoline and heating oil) affordable for individual citizens and businesses alike. To combat rampant inflation, President Richard Nixon instituted an economywide price freeze in August 1971. Two years after this freeze, Phase IV of the price control program decontrolled all prices with the exception of crude oil. These controls were set to expire in April 1974, but oil price regulation continued unabated until President Ronald Reagan, in one of his first acts as president, decontrolled oil prices in 1981. Effects of Price Ceilings To analyze the basic effects of ceilings on the prices of commodities like crude oil, consider a competitive market where the market demand function, D(P), and market supply function, S(P), are as shown in figure 17.6. The equilibrium price in the absence of a price ceiling is P*, where Q* units are demanded and supplied. Suppose that the government requires all firms to sell the product at prices that do not exceed a specified ceiling.

Figure 17.6 Effects of a Price Ceiling

If the price ceiling exceeds P*, then the government regulation does not affect market activities, because the equilibrium price complies with the specified price ceiling. More interesting considerations arise when the price ceiling is set at a level, such as P′ in figure 17.6, that is less than the competitive equilibrium price P*. At price P′, market demand exceeds market supply, so the price ceiling serves to reduce output from Q* to S(P′). Relative to the case of unfettered competition, consumers gain the area of the rectangle between P* and P′ over the S(P′) units that are still traded, because they pay less for each of the units they purchase. However, consumers lose surplus equal to the area of triangle bcd, because fewer units are supplied. The net gain to consumers is the difference between the area of this rectangle and the area of triangle bcd. Firms are harmed by the imposition of a price ceiling. In its absence, producer surplus is equal to the area of triangle P*cg. When the price ceiling is imposed, producers lose surplus equal to the area of triangle dcf due to the reduction in supply from Q* to S(P′). Producers also lose surplus equal to the area of the same rectangle between P* and P′ because of the imposed price reduction. Accounting for the changes in both consumer and producer surplus, the overall reduction in welfare due to the price ceiling is equal to the area of the shaded triangle bcf.
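These surplus calculations can be made concrete with a short numerical sketch in Python. The linear demand and supply curves below are hypothetical, and the triangle labels in the comments refer to figure 17.6:

    # A minimal sketch of the welfare effects of a binding price ceiling,
    # assuming hypothetical linear curves: demand Q_D(P) = 100 - P and supply Q_S(P) = P.
    def demand(p): return 100 - p
    def supply(p): return p

    p_star, q_star = 50, 50            # competitive equilibrium
    p_ceiling = 30
    q_ceiling = supply(p_ceiling)      # 30 units traded under the ceiling
    p_demand_at_q = 100 - q_ceiling    # willingness to pay for the marginal (30th) unit = 70

    transfer = (p_star - p_ceiling) * q_ceiling                            # rectangle: consumers gain, producers lose
    consumer_loss = 0.5 * (p_demand_at_q - p_star) * (q_star - q_ceiling)  # triangle bcd
    producer_loss = 0.5 * (p_star - p_ceiling) * (q_star - q_ceiling)      # triangle dcf
    deadweight_loss = consumer_loss + producer_loss                        # shaded triangle bcf

    print(transfer, consumer_loss, producer_loss, deadweight_loss)         # 600 200.0 200.0 400.0

With these assumed curves, consumers gain 600 from the lower price but lose 200 from forgone purchases, producers lose 800 in total, and the net loss to society is the 400 deadweight loss triangle.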

These calculations reflect the assumption that the units of output that are supplied when the price ceiling is imposed are allocated to the consumers who value the output most highly. These are the consumers whose demand is represented by the demand curve between output levels 0 and S(P′). Different conclusions would emerge if the excess demand were not allocated efficiently in this manner. To analyze this issue most simply, suppose each consumer seeks to purchase at most one unit of output. Further suppose that different consumers have different reservation prices. A consumer's reservation price is the maximum amount the consumer is willing to pay for a unit of the good. If the market price is less than a consumer's reservation price, the consumer will purchase the good. If the market price exceeds the consumer's reservation price, she will decline to purchase the good. In this setting, the market demand function can be viewed as stating that there are D(P) consumers whose reservation price is at least P. Consider a setting where the units are allocated randomly to the consumers who would like to purchase the good. A total of D(P′) consumers would like to purchase the available S(P′) units at price P′. Only the fraction S(P′)/D(P′) of these consumers are able to buy the good. With random allocation of the units, only the fraction S(P′)/D(P′) of the D(P′) consumers with a reservation price at least as high as P′ will be able to purchase the good. This allocation rule is depicted in figure 17.7 as the demand curve D(P) scaled down by the fraction S(P′)/D(P′).
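A small sketch shows how much consumer surplus random rationing sacrifices relative to efficient rationing. It assumes a hypothetical linear demand curve, Q_D(P) = 100 − P, a ceiling of 30 with 30 units available, and reservation prices spread evenly along that curve:

    # Expected consumer surplus under efficient versus random rationing at a price ceiling.
    import numpy as np

    p_ceiling = 30
    units_supplied = 30                   # number of units available at the ceiling (assumed)
    willing_buyers = 100 - p_ceiling      # D(P') = 70 consumers with reservation price >= 30
    reservations = np.linspace(100, p_ceiling, willing_buyers)   # reservation prices along the demand curve

    # Efficient rationing: the units go to the highest-valuation consumers
    surplus_efficient = np.sum(np.sort(reservations)[::-1][:units_supplied] - p_ceiling)

    # Random rationing: every willing buyer is served with probability S(P')/D(P')
    prob_served = units_supplied / willing_buyers
    surplus_random = prob_served * np.sum(reservations - p_ceiling)

    print(surplus_efficient, surplus_random)   # roughly 1659 versus 1050

The shortfall under random rationing corresponds to the additional welfare loss (triangle abf in figure 17.7) discussed below.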

Figure 17.7 Effects of a Price Ceiling with Random Allocation

Let us now reexamine the welfare loss due to the imposition of a price ceiling. A welfare loss equal to the area of triangle bcf arises due to the reduced supply of output. In addition, a welfare loss equal to the area of triangle abf arises, because the available supply is not allocated to the consumers who value the good most highly. To maximize welfare, the units should be allocated to consumers with reservation prices that are at least P0. The surplus these consumers would derive from the S(P′) units of output is equal to the area of the trapezoid bounded by the demand curve and the price ceiling between output levels 0 and S(P′). In contrast, when the units are allocated randomly, only the fraction S(P′)/D(P′) of these consumers receive the good. The remaining units are allocated to consumers who value them less highly; specifically, consumers with reservation prices between P′ and P0. The resulting level of consumer surplus is the area of the smaller triangle that lies below the scaled-down demand curve and above the price ceiling in figure 17.7. Consequently, random allocation results in an additional welfare loss equal to the area of triangle abf, which generates a total welfare loss equal to the area of triangle acf.23 If the allocation of the units were determined not by random allocation but by the suppliers, then consumers likely would compete for the units, and consumers who valued the product most highly likely would spend the most to secure a unit of the product. The expenditure might entail providing some form of

additional payment to a firm, or perhaps simply waiting in line to purchase the product. When such expenditures ensure that the good is allocated to the consumers that value it most highly, the allocation procedure entails no direct welfare loss. However, the resources consumed in competing for the good can constitute an additional welfare loss from price regulation.24 This discussion has two primary implications. First, the imposition of a binding price ceiling reduces social welfare by reducing the amount exchanged in the market. Second, additional welfare losses can arise if the good in short supply is not allocated to the consumers who value it most highly or if valuable resources are consumed in competing for the right to purchase the good at the stipulated price. Rationale for Restricting Domestic Oil Production Government regulation in the oil sector has not been limited to price regulation. Limits have also been placed on both the domestic supply and imports of crude oil. Limits on imports have been driven in part by fears of excessive reliance on foreign suppliers of oil. When a country relies primarily on foreign suppliers for a key resource, the country may find it difficult to resist the demands of the foreign suppliers, and even national security might be threatened. There are at least two possible rationales for restricting domestic supply of oil. First, as our discussion of the economic theory of regulation in chapter 10 suggests, such regulation may promote the interests of oil producers by increasing the price of oil above competitive levels.25 Second, such regulation can help alleviate what is known as the common pool problem, which arises when two or more individuals share property rights to a resource. These common property rights can result in the inefficient use of the resource, for the reason that we now explore in some detail. To begin, let us consider the case of a resource that is owned by a single individual.26 Consider a newly discovered oil reservoir, and suppose that it is entirely contained in the landholdings of a single individual. The landowner (or the company to which he has assigned his mineral rights) has to decide how rapidly to extract the oil. For simplicity, suppose there are only two periods, period 1 (today) and period 2 (tomorrow). Let Pt and Qt denote the price and the extraction rate, respectively, in period t. Also let MCt(Q) denote the marginal cost of extracting at rate Q in period t. Suppose the landowner is initially thinking about pumping Q1 barrels of oil today and Q2 tomorrow. If he considers pumping a little more oil today, the change in today’s profit is P1 − MC1(Q1), because the landowner sells the additional oil for P1, which costs MC1(Q1) to extract. Extracting more oil today also affects tomorrow’s profit. The technology of oil reservoirs is such that pumping at a faster rate reduces the amount of oil that can ultimately be recovered. If extraction is too rapid, then subsurface pressure declines, and pockets of oil become trapped. Less oil can then be retrieved from the well in total. Let b denote the number of units of oil that cannot be extracted tomorrow because an additional barrel is extracted today. The discounted loss in tomorrow’s profit from extracting a little more today is then b[1/(1 + r)][P2 − MC2(Q2)], where r is the interest rate. In other words, for each additional barrel pumped today, b fewer barrels can be pumped tomorrow, and each of those barrels represents forgone profit of P2 − MC2(Q2). 
This loss is discounted, since it is not incurred until tomorrow. Accounting for all relevant effects, the discounted marginal return to pumping another barrel today is
P1 − MC1(Q1) − b[1/(1 + r)][P2 − MC2(Q2)].
Similarly, by pumping a little more oil tomorrow, the landowner will receive additional revenue of P2 at cost MC2(Q2), so his discounted marginal return is [1/(1 + r)][P2 − MC2(Q2)]. There is no additional cost to

consider because, by assumption in this simple setting, there is no future beyond “tomorrow” that is of concern to the landowner. Let Q1* and Q2* denote the rates of extraction for today and tomorrow, respectively, that maximize the present value of the landowner’s profit. To maximize profit, these rates must be set to equate the marginal return from pumping another barrel today and pumping another barrel tomorrow:
P1 − MC1(Q1*) − b[1/(1 + r)][P2 − MC2(Q2*)] = [1/(1 + r)][P2 − MC2(Q2*)].    (17.2)
To see why this equality must hold, suppose that the marginal return from pumping more today exceeds the marginal return from pumping more tomorrow:
P1 − MC1(Q1) − b[1/(1 + r)][P2 − MC2(Q2)] > [1/(1 + r)][P2 − MC2(Q2)].
By pumping one less barrel of oil tomorrow and one more barrel of oil today, the landowner loses discounted profit [1/(1 + r)][P2 − MC2(Q2)] but gains discounted profit
P1 − MC1(Q1) − b[1/(1 + r)][P2 − MC2(Q2)].
Because the latter expression exceeds the former by assumption, the present value of the profit stream increases. But if this is the case, then the original extraction rates could not be the rates that maximize the present discounted value of the landowner’s profit. Therefore, profit maximization requires that the marginal discounted returns be equal, as expressed in equation 17.2. Furthermore, because Pt reflects the marginal social benefit from a unit of oil in period t, profit maximization also achieves the rates of extraction that maximize social welfare. A common pool problem arises when the oil reservoir spans the property of two or more individuals. According to U.S. law, property rights over the oil reservoir are determined by the rule of capture, which states that any extracted oil belongs to the landowner who captures it through a well on his land. Therefore, if an oil reservoir spans several properties, several individuals have the right to extract oil from the reservoir. To analyze the new considerations that arise when multiple landowners extract oil from a common reservoir, let x denote the additional fraction of a barrel of oil that a landowner’s neighbors will extract from the common reservoir if the landowner postpones his extraction of a barrel of oil today. In this setting, if a landowner considers pumping one less barrel today, he will not find the entire barrel in the reservoir tomorrow. Only the fraction 1 − x of that barrel would remain, because his neighbors have extracted the fraction x. The condition for profit maximization is no longer as specified in equation (17.2). Instead, the relevant condition is
P1 − MC1(Q̂1) − [1 − x]b[1/(1 + r)][P2 − MC2(Q̂2)] = [1 − x][1/(1 + r)][P2 − MC2(Q̂2)],    (17.3)
where Q̂1 and Q̂2 denote the new profit-maximizing rates of extraction. Note that if x = 0 (as is true if there is no common pool problem), equation 17.3 is identical to equation 17.2. The key impact of x > 0 is that a landowner is motivated to extract at a faster rate today; that is, Q̂1 > Q1*. For every barrel of oil not pumped today, a landowner loses x of that barrel to his neighbors. To avoid this loss, an individual landowner has an incentive to increase the rate at which he extracts oil before his neighbors acquire the oil. All landowners

face this incentive to extract oil from the common pool more rapidly, so the overall rate of extraction will exceed the rate that transpires when a single landowner owns the reservoir. To demonstrate that the common pool problem will drive landowners to extract oil at a rate that exceeds the socially optimal rate, equation 17.2 can be written as
P1 = MC1(Q1*) + SUC,
where SUC = (1 + b)[1/(1 + r)][P2 − MC2(Q2)], and SUC stands for "social user cost." The quantity P1 is the marginal revenue from extracting one more barrel of oil today, and MC1(Q1) + SUC is the marginal cost to society from extracting that additional barrel of oil. This social marginal cost is the sum of the marginal cost of extracting the oil, MC1(Q1), and SUC, which is the future cost to society of pumping another unit today. When there is just one landowner, private user cost is the same as social user cost. Therefore, the profit-maximizing rate of extraction satisfies

P1 = MC1(Q1*) + SUC.
This relationship is depicted in figure 17.8.

Figure 17.8 Effect of Oil Prorationing on the Extraction Rate
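A small numerical sketch in Python illustrates conditions 17.2 and 17.3. All parameter values are hypothetical, and the sketch adds one simplifying assumption not stated above in order to close the model: extraction today forecloses b additional barrels tomorrow, so tomorrow's extraction is Q2 = R − (1 + b)Q1 for a reservoir with R recoverable barrels.

    # Two-period extraction: single owner (x = 0) versus common pool (x > 0).
    P1, P2 = 20.0, 20.0        # per-barrel prices today and tomorrow
    r, b = 0.10, 0.25          # interest rate; barrels lost tomorrow per extra barrel pumped today
    R = 100.0                  # recoverable stock under slow extraction (hypothetical)
    mc = lambda q: 0.2 * q     # marginal extraction cost MC_t(Q) = 0.2Q in both periods
    delta = 1.0 / (1.0 + r)

    def q1_choice(x):
        # Solve P1 - MC1(Q1) = (1 - x)(1 + b) * delta * [P2 - MC2(Q2)] with Q2 = R - (1 + b)Q1,
        # using bisection on Q1 (the left side falls and the right side rises as Q1 grows).
        lo, hi = 0.0, R / (1.0 + b)
        for _ in range(60):
            q1 = 0.5 * (lo + hi)
            q2 = R - (1.0 + b) * q1
            gap = (P1 - mc(q1)) - (1 - x) * (1 + b) * delta * (P2 - mc(q2))
            lo, hi = (q1, hi) if gap > 0 else (lo, q1)
        return q1

    print(q1_choice(0.0))   # single owner: roughly 41 barrels extracted today
    print(q1_choice(0.5))   # common pool with x = 0.5: roughly 58 barrels extracted today

With these assumed values, the common pool problem raises first-period extraction from roughly 41 barrels to roughly 58 and lowers the total amount ultimately recovered, which is the qualitative pattern depicted in figure 17.8.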

If a common pool problem prevails, the cost to an individual landowner from extracting an additional barrel of oil is less than the cost to society. This is the case because the landowner is reluctant to leave a barrel in the ground, knowing that some of it will be extracted by his neighbors. In contrast, society does not care who extracts the oil (if all landowners are equally efficient at extracting oil). When more than one landowner is extracting oil from a common reservoir, an individual landowner's private user cost is [1 − x]SUC, which is less than the social user cost. The profit-maximizing rate of extraction, Q̂1, satisfies P1 = MC1(Q̂1) + [1 − x]SUC. As shown in figure 17.8, the common pool problem results in a rate of extraction that exceeds the socially optimal rate. Competition may have caused excessive early extraction in the Texas oil fields during the 1920s and 1930s, with a corresponding decline in the total amount of oil eventually recovered from the fields. It is estimated that the amount of oil ultimately extracted from reservoirs was only about 20–25 percent of the total amount of oil in the reservoirs. In contrast, with more gradual withdrawal, 80–95 percent of the oil ultimately could have been extracted.27 This difference does not necessarily imply that extraction occurred too rapidly, because rapid early extraction (and corresponding low ultimate extraction) could have been socially optimal if the marginal social benefit from a barrel of oil in the early years greatly exceeded the corresponding social benefit in later years. However, the limited variation in the price of oil over time and the substantial amount of oil forfeited due to the rapid early extraction suggest it is unlikely that such a fast rate of extraction was socially optimal.

Oil prorationing in practice

In practice, U.S. states have limited the domestic production of oil (perhaps in an attempt to resolve the common pool problem) by specifying the total amount of authorized production and then allocating this production pro rata among wells. Such a mechanism is referred to as prorationing.28 Events during the late 1920s and early 1930s spurred a wave of state intervention in the production of oil. One critical event was the discovery of new reserves and, in particular, the East Texas oil field in 1930. By 1933, 1,000 firms with a total of 10,000 wells were pumping oil from this field, which was estimated to have 5.5 billion barrels of oil in the reservoir below it.29 This activity led to a major increase in the supply of oil, which, coupled with the substantial reduction in the demand for oil due to the Great Depression, caused oil prices to decline sharply. Texas instituted prorationing in 1930, and Kansas followed in 1931.30 The federal government aided the oil-producing states by passing the Connally "Hot Oil" Act of 1935, which prohibited the interstate shipment of oil that was extracted in violation of state regulations.

Alternative solutions to the common pool problem

Even though unrestricted competition does not ensure welfare-maximizing production levels in the presence of a common pool problem, the presence of the problem does not imply that government intervention is necessary. Private mechanisms may be able to solve the common pool problem. The simplest solution is to have a single individual own all the land over an oil reservoir (or at least own all of the rights to extract oil from the reservoir).
The ultimate single owner can pay others a premium for their land that reflects the profit they would forgo by no longer being able to extract oil from the reservoir. The ultimate owner can profitably deliver such compensation because, by pursuing the rate of extraction that maximizes her profit, she will generate more than the sum of the maximum profits that multiple landowners could secure by acting independently. The same outcome also could be achieved through unitization. Unitization occurs when one of the owners

operates the entire reservoir, and all parties share the resulting profit. Of course, parties must agree ex ante as to how to share these returns. A third solution is for the landowners to agree privately to prorationing and thereby limit their production. Why did states intervene despite these potential private solutions? The private solutions may sound simple, but they can be difficult to implement in practice for several reasons. First, contracting entails transaction costs. If there are several hundred landowners, for example, it can be quite costly for an individual to negotiate to buy several hundred properties. For similar reasons, unitization and private prorationing typically entail substantial transaction costs when many landowners are involved. A second problem is getting all landowners to agree. Each landowner is likely to have different information on and opinions about the value of the reservoir. In addition, some landowners may try to hold out for a higher price for their land, a higher share of profits in the case of unitization, or a higher share of production in the case of private prorationing. Such behavior can prevent the consummation of an agreement. In light of these difficulties, there can be a rationale for government intervention in the production of crude oil. When a common pool problem exists and private solutions are costly to implement, social welfare may be increased when the government imposes production restrictions.

Restricting Oil Imports

As noted above, the United States has also restricted imports of crude oil. The Mandatory Oil Import Program (MOIP) was instituted in 1959. Some of the restrictions were relaxed in 1970, but MOIP was not suspended until April 1973. MOIP initially restricted oil refiners to importing no more than 9 percent of the projected domestic demand for oil. Because projected demand often was difficult to characterize, the quota was changed to 12.2 percent of domestic crude oil production in 1962. In figure 17.9, D(P) denotes the domestic demand curve for oil, and Sd(P) the supply curve of domestic oil producers. (P denotes the price of oil.) The world supply curve, denoted Sw(P), is assumed to be horizontal at the world price Pw. To analyze the effects of MOIP, first consider the market equilibrium in its absence. With no restrictions on oil imports, the market-clearing price in the U.S. market would be the world price, Pw. No oil would be supplied at any lower price, because suppliers can secure a higher price for their oil by selling it in non-U.S. markets.31 An unlimited supply of oil would be forthcoming at any price at or above Pw. Therefore, the domestic price would be Pw in the absence of MOIP. At this price, domestic oil producers will supply Sd(Pw), and D(Pw) − Sd(Pw) units of oil will be imported to satisfy consumer demand for D(Pw) units of oil.

Figure 17.9 Equilibrium under the Mandatory Oil Import Program

Now consider the implementation of MOIP under the assumption that the regulatory constraint is binding,32 so oil imports exceed the regulatory limit of 12.2 percent of domestic supply at the unregulated market equilibrium. Formally, we have
D(Pw) − Sd(Pw) > 0.122Sd(Pw).
For prices at or above Pw, the supply curve for the domestic market is S(P) = 1.122Sd(P), because domestic suppliers supply Sd(P) and, although foreign supply is unlimited at these prices, MOIP restricts oil imports to 0.122Sd(P). As depicted in figure 17.9, the equilibrium price under MOIP, which we denote by P̂, exceeds the world price. Notice that MOIP has achieved its objective of reducing oil imports, as imports have fallen from D(Pw) − Sd(Pw) (a distance of cf) to 0.122Sd(P̂) (a distance of ba). When assessing the welfare effects of limiting oil imports, first note that consumers are worse off by the sum of the areas of a rectangle and triangle bcd. Because demand decreases from D(Pw) to D(P̂), consumers lose surplus equal to the area of triangle bcd. In addition, consumers have to pay the higher price for the remaining demand. However, this higher price is paid only for domestic oil, so expenditure on domestic oil increases by [P̂ − Pw]Sd(P̂), which is the area of the rectangle.

Although MOIP harms consumers, it benefits domestic oil producers by increasing the demand for their oil. MOIP increases the profit of domestic oil producers from the area of triangle Pwfg to the area of triangle P̂ag, an increase equal to the area that lies between the prices Pw and P̂ and to the left of the domestic supply curve. The total welfare loss from MOIP is the sum of the areas of triangles aef and bcd. The area of triangle bcd reflects the reduction in consumer surplus due to the reduction in demand from D(Pw) to D(P̂). The area of triangle aef reflects the value of wasted resources when domestic oil producers supply the additional amount Sd(P̂) − Sd(Pw) rather than having this oil supplied more efficiently through imports at price Pw. The welfare losses from limiting oil imports have been estimated. The resale price of import quota vouchers can be employed as a measure of the price differential P̂ − Pw, because this resale price should reflect the difference between the marginal cost of producing oil domestically and the price of importing another barrel of oil. Using this measure, P̂ − Pw was estimated to be $1.174 per barrel in 1969.33 The average world price of oil was approximately $2.10 per barrel at this time, so the import quotas increased the domestic price of oil by more than 50 percent. The corresponding reduction in consumer surplus was estimated to be $3.2 billion in 1960, and the loss increased steadily to $6.6 billion by 1970.34
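The construction S(P) = 1.122Sd(P) can be illustrated with a short sketch in Python. The linear demand and domestic supply curves and the world price below are hypothetical; the point of the sketch is simply that tying imports to domestic production raises the domestic price above the world price and shrinks imports:

    # A minimal sketch of the equilibrium under an import quota tied to domestic production,
    # in the spirit of figure 17.9. The curves and the world price are hypothetical.
    def demand(p):            # D(P), millions of barrels per day
        return 16.0 - p

    def domestic_supply(p):   # Sd(P)
        return 2.0 + p

    p_world = 3.0
    quota_share = 0.122       # imports limited to 12.2 percent of domestic production

    imports_free = demand(p_world) - domestic_supply(p_world)   # 8.0, so the quota binds

    # Under the quota, total supply at prices above p_world is (1 + quota_share) * Sd(P).
    # Find the price at which this restricted supply meets demand (bisection).
    lo, hi = p_world, 16.0
    for _ in range(60):
        p = 0.5 * (lo + hi)
        excess = demand(p) - (1.0 + quota_share) * domestic_supply(p)
        lo, hi = (p, hi) if excess > 0 else (lo, p)

    print(p, demand(p), domestic_supply(p), quota_share * domestic_supply(p))
    # roughly 6.48, 9.52, 8.48, and 1.03: the price rises above the world price of 3.0,
    # and imports shrink from 8.0 to about 1.0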

Crude Oil Price Controls

We now consider the impact of limits on the price that oil producers could charge refiners for crude oil. Such controls were implemented in the oil industry beginning in 1971.35 By limiting the compensation that domestic suppliers received for their oil, regulation reduced the amount of domestic oil supplied. By reducing the amount that domestic oil refiners paid for crude oil, regulation increased the demand for crude oil. This section considers the welfare implications of these supply and demand distortions. Figure 17.10 depicts the domestic supply curve for crude oil, Sd(P), the domestic demand curve for crude oil, Dd(P), and the world supply curve for crude oil, Sw(P). For simplicity, the supply of imports is assumed to be perfectly elastic at the world price. The effect of allowing Sw(P) to be upward sloping is considered later.

Figure 17.10 Equilibrium under Crude Oil Price Controls

In the absence of oil price controls, the equilibrium price would be Pw, and refiners would demand Q* units. At price Pw, domestic oil producers would supply Qd units, and imports would be Q* − Qd. Now suppose that the maximum price a domestic supplier can charge a refiner for crude oil is limited to a ceiling P̄ that lies below the world price. Further suppose that, of the total amount of crude oil a refiner processes, the fraction z is supplied by domestic producers at the controlled price, and the fraction 1 − z is supplied by foreign producers.36 Then the marginal price that a domestic refiner faces for crude oil is [1 − z]Pw + zP̄, which we will denote by Ps. Because they face a price below the world price, refiners increase their demand for crude oil from Q* to Q**. Because domestic producers receive price P̄ for the crude oil they supply, they are only willing to supply Sd(P̄) units, which is less than Qd. Consequently, imports must increase from Q* − Qd to Q** − Sd(P̄) to satisfy demand. We can now assess the welfare effects of regulation. Regulation induced domestic refiners to increase the amount of oil they refined by Q** − Q*. For those additional units, the social cost of production, as measured by the world price Pw, exceeds the social benefit by the area of triangle abc. In addition, the reduced domestic supply of Qd − Sd(P̄) was replaced by foreign supply at higher cost. The associated additional welfare loss is the area of triangle def. The total welfare loss from regulation is the sum of the areas of the two shaded triangles in figure 17.10.
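The blended marginal price Ps and the resulting demand-side distortion can be sketched in a few lines of Python. The prices, the fraction z, and the refiner demand curve are hypothetical:

    # Blended refiner price under crude oil price controls and the induced extra demand.
    p_world = 12.0     # world price Pw, dollars per barrel
    p_ceiling = 6.0    # controlled price for domestic crude
    z = 0.6            # fraction of a refiner's crude obtained at the controlled price

    p_s = (1.0 - z) * p_world + z * p_ceiling   # Ps = 8.4, below the world price

    refiner_demand = lambda p: 30.0 - p         # hypothetical demand for crude by refiners
    q_star = refiner_demand(p_world)            # 18.0 barrels demanded without controls
    q_double_star = refiner_demand(p_s)         # 21.6 barrels demanded with controls
    print(p_s, q_star, q_double_star)

The increase from q_star to q_double_star corresponds to the extra refining that generates the demand-side welfare loss (triangle abc in figure 17.10), while the forgone domestic supply at the ceiling price generates the supply-side loss (triangle def).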

Table 17.2 presents estimates of the areas of these triangles for 1975–1980. In 1975, the area of triangle abc was estimated to be $1,037 million (in 1980 dollars). The area of triangle def was estimated to be $963 million. These two losses combined total $2 billion. The average annual welfare loss due to demand and supply distortions between 1975 and 1980 was estimated to be approximately $2.5 billion, so the estimated total welfare loss was on the order of $15 billion.

Table 17.2 Welfare Losses from Oil Price Controls, 1975–1980 (millions of 1980 dollars)

Year    Deadweight Loss    Additional Expenditure on Imports
1975    963                1,948
1976    1,046              6,288
1977    1,213              8,444
1978    816                6,034
1979    1,852              8,444
1980    4,616              20,214

Source: Joseph P. Kalt, The Economics and Politics of Oil Price Regulation (Cambridge, MA: MIT Press, 1981), p. 201.

Because they take the world price as fixed at Pw, these estimates implicitly assume that U.S. demand does not influence the world price. If increased U.S. consumption increases the world price of oil (as it likely does in practice), the higher price will tend to reduce the increase in domestic consumption that regulation induces. If Pw is the world price in the absence of regulation, the world price with U.S. regulation will exceed Pw, because price regulation increases domestic demand, which in turn increases the world price of oil. The higher world price will reduce U.S. consumption, so the welfare loss from demand distortions will be smaller than estimated above. For a range of reasonable values for the elasticity of the world supply curve, the estimated welfare loss declines by between 13 and 61 percent. For example, if the price elasticity of the world supply of oil is 1, then the average annual welfare loss from demand distortions between 1975 and 1980 was $413 million, in contrast to $751 million when the world supply curve is assumed to be perfectly elastic.37 A second additional effect of price regulation that arises when the world supply curve for oil is upward sloping is the increased wealth transfer from the United States to foreign oil producers. This increased transfer results from the increased world price of oil. Depending on the elasticity of the world supply curve, the average wealth transfer between 1975 and 1980 was between $1.625 billion and $8.115 billion per year.38

Summary

Regulation in the energy sector has been motivated by several considerations. The presence of natural monopoly in transmission and distribution is a principal rationale for regulation in the electricity sector. The common pool problem and national security concerns have motivated regulation in the oil sector. Most regulations that were imposed on the oil sector have been eliminated, thereby avoiding the substantial welfare losses those regulations imposed. Regulations persist in the electricity industry. In states where the electricity sector has been restructured to encourage competition in electricity generation, regulation focuses on the transmission and distribution components of the industry.

Just as in the telecommunications sector, a portion of the electricity industry has been opened to competition, and regulation has been focused on the components of the industry where natural monopoly is thought to persist. The increased competition in the generating sector does not appear to have increased consumer welfare substantially. However, additional gains may arise in the future, as the gains from competition are often difficult to predict. Regulation in the transmission and distribution components of the electricity sector has been complicated by the increasing prevalence of the distributed generation of electricity (DG). Regulators now face the challenging task of effectively managing competition between the incumbent utility and DG sources. As experience in the transportation sectors reveals, managing competition can be a very challenging task. Regulatory policy is evolving in the electricity sector and seems likely to change further in the coming years as technologies change and the potential for economical DG (and perhaps electricity storage) increases.

Questions and Problems

1. What are the primary advantages and disadvantages of restructuring in the electricity sector, whereby electric utilities are required to divest their generating plants, and the wholesale price of electricity is determined by market forces in an unregulated setting?
2. Some observers suggest that the experience in California indicates that restructuring of the electricity sector is an ill-advised public policy. Do you agree? Explain your answer.
3. An electricity generator in a restructured market is accused of "withholding" electricity during times of peak demand. The owner of the generator responds to the accusation by arguing that it would be irrational not to supply all the electricity it could generate during times of peak demand, when the wholesale price of electricity is relatively high. Assess the owner's argument.
4. Is "net metering" a good policy? Explain why or why not. Propose an alternative policy that you think would increase welfare.
5. Suppose that two sources are available for distributed generation of electricity: a solar (intermittent) source and a nonintermittent source powered by natural gas. What rates of compensation would you recommend for the two DG sources? In particular, would you recommend the same rate of compensation for each source?
6. Is there an economic rationale for a state government to control the rate at which crude oil is extracted?
7. How does the optimal extraction rate of crude oil change when:
a. Information is revealed that increases the expected future price of crude oil.
b. The interest rate rises.
8. Assume that the market supply curve is QS(P) = 30P and the market demand curve is QD(P) = 500 − 20P, where P denotes price.
a. Derive the competitive equilibrium price.
b. Suppose that the government institutes a price ceiling of 5. Derive the welfare loss from price controls. Hint: The industry supply curve represents the industry marginal cost curve, and total cost is the area under the marginal cost curve.
c. What is the maximum amount that producers would pay to get their legislators to remove the price ceiling?
9. How does the elasticity of the industry supply curve affect the welfare implications of price ceilings?

Notes 1. The discussion in this section is drawn in part from Paul L. Joskow, “Restructuring, Competition and Regulatory Reform in the U.S. Electricity Sector,” Journal of Economic Perspectives 11 (Summer 1997): 119–138; Paul L. Joskow,

“Deregulation and Regulatory Reform in the U. S. Electric Power Sector,” in Sam Peltzman and Clifford Winston, eds., Deregulation of Network Industries (Washington, DC: Brookings Institution Press, 2000), pp. 113−188; and William W. Hogan, “Electricity Market Restructuring: Reforms of Reforms,” Journal of Regulatory Economics 21 (2002): 103–132. 2. The discussion in this section reflects the analysis in Paul L. Joskow, “California’s Energy Crisis,” Oxford Review of Economic Policy 17 (Autumn 2001): 365–388; Paul L. Joskow, “The Difficult Transition to Competitive Electricity Markets in the U.S.,” Massachusetts Institute of Technology, Department of Economics, Cambridge, MA, May 2003; Severin Borenstein, “The Trouble with Electricity Markets: Understanding California’s Restructuring Disaster,” Journal of Economic Perspectives 16 (Winter, 2002): 191−211; and Frank A. Wolak, “Diagnosing the California Energy Crisis,” Electricity Journal (August/September 2003): 11–37. 3. Borenstein, “The Trouble with Electricity Markets.” 4. Joskow, “The Difficult Transition to Competitive Electricity Markets,” p. 26. 5. Other states allowed their UDCs to retain a larger fraction of their generating capacity, often in the form of separate unregulated wholesale power affiliates. 6. Christopher Weare, The California Electricity Crisis: Causes and Policy Options (San Francisco: Public Policy Institute of California, 2003). Available at http://www.ppic.org/content/pubs/report/ R_103CWR.pdf. 7. See, for example, Paul L. Joskow and Edward Kahn, “A Quantitative Analysis of Pricing Behavior in California’s Wholesale Electricity Market during Summer 2000: The Final Word,” Energy Journal 23 (October 2002): 1–35; Severin Borenstein, James B. Bushnell, and Frank A. Wolak, “Measuring Market Inefficiencies in California’s Restructured Wholesale Electricity Market,” American Economic Review 95 (December 2002): 1376–1405; Steven L. Puller, “Pricing and Firm Conduct in California’s Deregulated Electricity Market,” Review of Economics and Statistics 89 (February 2007): 75– 87; and James B. Bushnell, Erin T. Mansur, and Celeste Saravia, “Vertical Arrangements, Market Structure, and Competition: An Analysis of Restructured US Electricity Markets,” American Economic Review 98 (March 2008): 237–266. 8. The ensuing discussion is drawn from John Kwoka and Vladlena Sabodash, “Price Spikes in Energy Markets: ‘Business by Usual Methods’ or Strategic Withholding?” Review of Industrial Organization 38 (May 2011): 285–310. 9. Kira R. Fabrizio, Nancy L. Rose, and Catherine Wolfram, “Do Markets Reduce Costs? Assessing the Impact of Regulatory Restructuring on U.S. Electric Generation Efficiency,” American Economic Review 97 (September 2007): 1250– 1277. 10. Lucas W. Davis and Catherine Wolfram, “Deregulation, Consolidation, and Efficiency: Evidence from U.S. Nuclear Power,” American Economic Journal: Applied Economics 4 (October 2012): 194–225. 11. Severin Borenstein and James Bushnell, “The U.S. Electricity Industry after 20 Years of Restructuring,” University of California−Berkeley Energy Institute at HAAS Working Paper 252R, May 2015. 12. John Kwoka, “Restructuring the U.S. Electric Power Sector: A Review of Recent Studies,” Review of Industrial Organization 32 (May 2008): 165–196 (p. 165). 
A more recent study that also finds little long-term impact of industry restructuring on retail electricity prices is Ted Kury, “Price Effects of Independent Transmission System Operators in the United States Electricity Market,” Journal of Regulatory Economics 43 (April 2013): 147–167. 13. The marginal supplier is the one with the highest marginal cost among active suppliers. Hydroelectric, coal, and nuclear generating plants typically have relatively low marginal costs and are employed to supply base load demand for electricity. Natural gas units, which tend to experience higher marginal costs, often are called on to generate electricity during periods of peak demand, and thus they are the marginal suppliers during these periods. 14. Borenstein and Bushnell, “The U.S. Electricity Industry after 20 Years of Restructuring.” 15. American Council for an Energy-Efficient Economy, “Distributed Generation” (http://www.aceee.org/topics/distributedgeneration). 16. As of 2016, more than four-fifths of U.S. states had implemented net metering policies to encourage DG in their electricity sectors. (Solar Energy Industries Association, “Issues and Policies: Net Metering” (http://www.seia.org /policy/distributed-solar/net-metering). 17. This discussion abstracts from the social benefits that can arise if negative externalities associated with electricity production are reduced when DG replaces production by the utility. For an analysis of this and related issues, see David P. Brown and David E. M. Sappington, “Designing Compensation for Distributed Solar Generation: Is Net Metering Ever

Optimal?” Energy Journal, 38 (May 2017): 1–32. 18. Potential overinvestment in solar DG can be further compounded by the presence of increasing-block retail tariffs. In some jurisdictions (California, for instance), the marginal price that residential consumers pay for electricity increases with the amount of electricity they consume. Therefore, households that consume a large amount of electricity may pay a marginal price for electricity that substantially exceeds the utility’s average cost of supplying electricity. In such cases, net metering can induce far more than the welfare-maximizing level of DG capacity investment. See Severin Borenstein, “Private Net Benefits of Residential Solar PV: The Role of Electricity Tariffs, Tax Incentives, and Rebates,” University of California–Berkeley Working Paper 259, 2015 (http://ei.haas.berkeley.edu/research/papers/WP259.pdf). 19. Diane Cardwell, “California Votes to Retain System That Pays Solar Users Retail Rate for Excess Power,” New York Times, January 28, 2016 (available at http://www.nytimes.com/2016/01/29/business/energy-environment/californianarrowly-votes-to-retain-system-that-pays-solar-users-for-excess-power.html?_r=0). 20. Krysti Shallenberger, “Nevada Regulators Approve New Net Metering Policy, Creating Separate Rate Class for Solar Users,” Utility Dive, December 22, 2015 (http://www.utilitydive.com/news/nevada-regulators-approve-new-net-meteringpolicy-creating-separate-rate-c/411284). 21. Herman K. Trabish, “What Comes after Net Metering: Hawaii’s Latest Postcard from the Future,” Utility Dive, October 22, 2015 (http://www.utilitydive.com/news/what-comes-after-net-metering-hawaiis-latest-postcard-from-the-future/407753 /). Solar DG compensation is similar in Minnesota, but it also accounts for the social gain associated with reduced environmental externalities as solar DG production replaces electricity production by the utility. See John Farrell, “Minnesota’s Value of Solar: Can a Northern State’s New Solar Policy Defuse Distributed Generation Battles?” Institute for Local Self-Reliance Report, April 2014 (http://www.ilsr.org/wp-content/uploads/2014/04/MN-Value-of-Solar-from-ILSR .pdf). 22. For an introduction to the technological aspects of petroleum, see Stephen L. McDonald, Petroleum Conservation in the United States: An Economic Analysis (Baltimore: Johns Hopkins University Press, 1971), chapter 2. 23. This analysis presumes that consumers cannot resell the good (and cannot sell the right to buy the good). Such resale could avoid the welfare loss abf. Consumers fortunate enough to be allocated the good can resell it to the highest bidder through a secondary market. If the costs of engaging in this transaction are sufficiently small, then the units will end up in the hands of those consumers who value them the most, regardless of who received the units initially. In this case, the only effect of random allocation is to distribute surplus to those consumers who are lucky enough to be awarded a unit of the good initially. 24. If the competition only entails financial bribes paid to firms, with no corresponding resource expenditure, then the competition only results in wealth transfers, with no corresponding welfare loss. 25. Limits on imported oil also can serve this function. 26. The ensuing analysis is from McDonald, Petroleum Conservation, chapter 5. 27. Federal Oil Conservation Board, Complete Record of Public Hearings (Washington, DC: U.S. Government Printing Office, 1926), p. 30; Report III (Washington, DC: U.S. 
Government Printing Office, 1929), p. 10. 28. For additional information on prorationing, see McDonald, Petroleum Conservation; Erich W. Zimmermann, Conservation in the Production of Petroleum (New Haven, CT: Yale University Press, 1957); Melvin G. De Chazeau and Alfred E. Kahn, Integration and Competition in the Petroleum Industry (New Haven, CT: Yale University Press, 1959); Wallace F. Lovejoy and Paul T. Homan, Economic Aspects of Oil Conservation Regulation (Baltimore: Johns Hopkins University Press, 1967); and Gary D. Libecap and Steven N. Wiggins, “Contractual Responses to the Common Pool: Prorationing of Crude Oil Production,” American Economic Review 74 (March 1984): 87–98. 29. Zimmermann, Conservation in the Production of Petroleum, p. 142; and Libecap and Wiggins, “Contractual Responses to the Common Pool.” 30. Oklahoma had instituted prorationing earlier. 31. This statement abstracts from transportation cost considerations. A domestic supplier might be willing to sell oil in the United States at a price below the world price if it experienced higher transportation costs by selling oil outside the United States than by selling oil in the country. 32. This analysis is based on James C. Burrows and Thomas A. Domencich, An Analysis of the United States Oil Import Quota (Lexington, MA: Heath, 1970).

33. Douglas R. Bohi and Milton Russell, Limiting Oil Imports (Baltimore: Johns Hopkins University Press, 1978). 34. Ibid. For comparable estimates, see Burrows and Domencich, Analysis of the United States Oil Import Quota. 35. For simplicity, we analyze the effects of a single limit on the price of all crude oil. In practice, a higher ceiling was set for newly discovered oil in order to enhance incentives for ongoing discovery of new oil reserves. General background references for crude oil price controls include Joseph P. Kalt, The Economics and Politics of Oil Price Regulation (Cambridge, MA: MIT Press, 1981); W. David Montgomery, “Decontrol of Crude Oil Prices,” in Leonard W. Weiss and Michael W. Klass, eds., Case Studies in Regulation: Revolution and Reform (Boston: Little, Brown, 1981), pp. 187–201; Richard H. K. Vietor, Energy Policy in America Since 1945 (Cambridge: Cambridge University Press, 1984); and R. Glenn Hubbard and Robert J. Weiner, “Petroleum Regulation and Public Policy,” in Leonard W. Weiss and Michael W. Klass, eds., Regulatory Reform: What Actually Happened (Boston: Little, Brown, 1986), pp. 105–136. 36. The U.S. government instituted an “entitlements program” in the 1970s that determined how much of the lower priced domestic crude oil each refiner was entitled to purchase. 37. Hubbard and Weiner, “Petroleum Regulation.” 38. Ibid.

18 Regulation in the Financial Sector

The financial sector of an economy consists of companies that provide financial services to consumers and businesses. These companies include commercial banks, investment banks, brokerage firms, mutual funds, and hedge funds. Commercial banks primarily accept deposits from individuals and use the deposits to make loans to consumers and businesses. Investment banks help companies to merge and to attract investments, often by issuing stocks and bonds. Brokerage firms assist clients in the purchase and sales of stocks and bonds. Mutual funds pool the financial resources of individuals in order to purchase securities like stocks and bonds. Hedge funds do the same but often assume substantial risk in order to generate high returns, primarily for wealthy clients. The commercial banking industry is a key component of the financial sector. As noted, commercial banks attract funds from depositors and lend these funds to home purchasers, entrepreneurs, and businesses to conduct activities that contribute to the growth of the economy. These activities include buying residential and commercial properties, purchasing new machinery and equipment, and financing research and development. The tasks of a commercial bank are complicated by the difficulty of accurately predicting which borrowers will repay their loans and which borrowers will fail to do so. A depositor’s task can also be complicated by uncertainty and limited information. Absent government regulation, depositors may find it difficult to distinguish between banks that will invest their deposits wisely and ones that might lose deposits by investing in projects that are unduly risky. In light of such uncertainty, a role for government regulation in the banking sector can arise. This chapter focuses on the rationale for and the nature of government regulation in the banking sector. Some additional regulation in the financial sector is also noted briefly, including the information disclosure requirements that a company faces if it wishes to sell stock in the company to the public. This chapter also explains why, despite substantial government regulation in the financial sector, the sector experienced serious disruption in and around 2007 that caused the deepest and most protracted recession in the U.S. economy since the Great Depression. The chapter concludes by discussing the additional regulation that was imposed on the financial sector in response to the Great Recession.1 Role of the Financial Sector Investment is a key element of economic growth. To deliver innovative new products and services to consumers, firms typically must undertake research and development, acquire machines and infrastructure, and make consumers aware of the new products and services. All such activities are costly, and new firms seldom have sufficient revenue to cover these costs of operation. Consequently, many firms borrow funds to finance their activities, at least until they generate sufficient revenue to cover their operating costs.2

Conceivably, a new firm might solicit the funding it requires from individual investors. However, individual investors often have limited financial resources. In addition, individual investors often find it difficult to distinguish between “good” projects that are likely to be financially successful and “bad” ones that are likely to be unprofitable. Consequently, investors may decline to provide funding to most or all projects out of fear that the projects will fail, and so financial investments in the projects will be lost. In principle, individual investors could undertake research that would enable them to better distinguish between good and bad projects. However, a free-rider problem can arise in this regard. Each individual investor has an incentive to let other investors devote their time and effort to distinguishing between good and bad projects, and then simply follow the lead of the better-informed investors. Collectively, then, investors may end up devoting a relatively limited amount of resources to screening projects and so may be reluctant to fund projects that they know little about. This reluctance can be further fostered by what is known as a lemons problem.3 If potential project operators anticipate that individual investors will not screen projects diligently, then these operators may be highly motivated to pursue their projects, in the hope that the flaws in their projects will not be detected and so the projects will be funded. Given that bad projects are relatively easy to devise, limited screening by investors can lead to a pool of available projects that is populated disproportionately by bad projects. The prevalence of bad projects can further discourage financing of all projects, including good ones. Banks can help overcome the free-rider problem and thereby avoid the lemons problem. Banks attract deposits from many individuals and employ the deposits to finance loans to project operators. Banks can earn profit by loaning funds for good projects and declining to loan funds for bad projects.4 In this respect, a bank can be viewed as a single entity with a strong financial interest in ensuring that the funds of all depositors are employed to finance good projects and are not loaned to the operators of bad projects (who are relatively likely to fail to repay their loans). In acting as a single representative of many depositors, a bank can overcome the free-rider problem that the depositors would face if they acted independently. By helping resolve the free-rider problem, banks also can help resolve the lemons problem. When banks devote substantial resources to evaluating the merits of proposed projects, a potential operator of a bad project recognizes that he is unlikely to convince a bank that his project merits funding. Consequently, potential operators of bad projects often decline to seek funding for their projects, devoting their efforts instead to improving their projects so that banks will indeed finance them. Rationale for Regulation in the Financial Sector If banks can resolve free-rider problems and lemons problems, then why is regulation in the financial sector appropriate? Such regulation can enhance welfare in part because, just as individuals often find it difficult to distinguish between good and bad investment projects, individuals can find it difficult to distinguish between particularly capable banks and less capable banks. Some banks may be better than others at distinguishing between good and bad investment projects. 
Consequently, some banks may constitute safer havens for deposits than do other banks. When they are uncertain about the ability of banks to act as faithful stewards of their deposits, individuals may be reluctant to place their funds in any bank. When an individual fears that a bank may lose her deposit by investing the funds in bad projects, the individual may rationally decline to deposit money in the bank. Thus, just as uncertainty about the merits of specific projects can cause individuals to decline to invest in projects, uncertainty about a bank’s ability and effort to invest deposits wisely can cause individuals to decline to deposit their financial resources in banks. When banks do

not receive deposits, they cannot make loans to project operators. Consequently, even good projects may not be funded, and so the benefits these projects could generate for the economy may not be realized. In addition to investing deposits in promising projects, banks provide liquidity services to depositors. Specifically, to convince individuals to deposit their financial resources in banks, a bank typically must assure its depositors that they will have ready access to their funds whenever they wish to withdraw their deposits from the bank. Therefore, banks must anticipate the likely withdrawal needs of depositors and be sure to maintain sufficient cash (or securities that can be converted into cash with little delay and little risk of change in market value) to meet these needs. To do so, a bank will use only a portion of the deposits it receives to make loans to project operators and will keep the remaining portion available to meet depositors' withdrawal (liquidity) needs. When determining what portion of deposits to loan to project operators and what portion to devote to depositors' liquidity needs, banks consider the flow of scheduled interest payments on the loans made to project operators. In principle, these interest payments can provide a steady flow of funds that will be available to serve depositors' liquidity needs. On occasion, though, a project may fail, rendering the project operator unable to make the (interest) payments promised to the bank. If a large project or several smaller projects fail, the resulting decline in bank revenue could limit its ability to meet depositors' liquidity needs.

Bank Runs

Any problem that a bank encounters in meeting its depositors' liquidity needs can multiply quickly. When depositors observe that a bank is having difficulty meeting the prevailing requests for deposit withdrawals—or even if they simply suspect that the bank may soon experience such difficulty—the depositors will naturally be concerned about their ability to withdraw their deposits, should they need to do so. Consequently, even though they may not presently need to withdraw their deposits from the bank, depositors may attempt to do so. The resulting increased demand for withdrawals can be difficult for the bank to meet, even when it is well situated to meet the normal demand for deposit withdrawals. In this sense, fears about a bank's ability to meet its depositors' liquidity needs can become self-fulfilling: the fears cause depositors to request more withdrawals than they otherwise would, which limits the bank's ability to grant all requests for withdrawals. Such bank runs can even cause banks that are financially sound to fail. Corresponding contagion problems also can arise. If depositors at one bank ("bank A") experience problems in withdrawing their deposits, then depositors at a different bank ("bank B") may fear that they will experience similar problems. In practice, depositors may have difficulty discerning whether problems at one bank are unique to that bank or instead reflect systemic problems that are likely to affect the financial performance of all banks. For instance, bank A's problems may stem from its idiosyncratic failure to screen out bad projects and fund only good projects.
Alternatively, bank A’s problems may arise because, even though it restricted its loans to the very best projects, even the best projects are failing because of an impending downturn in the economy.5 In the latter case, the failure of projects funded by bank A may indicate that corresponding failure is likely to arise at bank B. When they fear such failure, depositors at bank B may seek immediate withdrawal of their deposits rather than wait until such failure is actually realized. In this event, problems at bank A can cause a run on bank B. This can be the case even when bank B has only funded good projects that will not fail, and so bank B could readily meet its depositors’ liquidity needs in the absence of their unfounded fears about the bank’s activities. Bank Runs in Practice

Depositors requested more withdrawals than banks could accommodate many times during the Great Depression. More than 600 banks failed in 1929, and more than 1,300 failed in 1930. In some instances, the failures appear to have occurred not because the bank's investments were fundamentally unsound, but rather simply because depositors speculated that the investments might be unsound. To illustrate, in December 1930, a merchant from Brooklyn was reported to have visited his local branch of The Bank of the United States, intending to sell his stock in the institution. Officials at the bank counseled the merchant that his stock represented a good investment in a solid company, so he would be better off if he did not sell the stock. The merchant may have misunderstood—or not believed—what he heard at the bank. On leaving the bank, the merchant informed his friends and business associates that the bank had refused to sell his stock. Within hours, a large crowd gathered outside the bank, fearful that the bank might soon close, leaving them unable to withdraw their deposits. That afternoon, somewhere between 2,500 and 3,500 depositors were reported to have withdrawn approximately $2 million in deposits from the bank. In December 1930, New York's Bank of the United States collapsed. The bank had more than $200 million in deposits at the time, making it the largest single bank failure in American history.6

Government Intervention and Regulation

When banks fail, even the most promising projects can experience considerable difficulty securing requisite funding. When entrepreneurs anticipate that funding for their projects will be difficult to secure, they may become less likely to devote the time and effort required to develop promising projects. The resulting reduction in innovative activity can stifle economic growth and harm consumers throughout the economy. Consequently, government action to limit bank failures can enhance welfare. There are several actions that governments can undertake (and have undertaken) in this regard.

Deposit Insurance

Most prominently, governments can issue deposit insurance, which guarantees that even if a bank fails, each of the bank's depositors will recover the funds he has deposited in the bank. In the presence of such insurance, a depositor will no longer feel compelled to rush to withdraw his deposits before others rush to withdraw theirs. Even if the bank has insufficient funds to allow all of its depositors to withdraw all of their deposits simultaneously, the government will have sufficient funds for this purpose. Consequently, government-sponsored deposit insurance can eliminate the fundamental cause of bank runs and thereby add stability to the banking sector. Deposit insurance is not without its potential drawbacks, however. The insurance can encourage a bank to fund risky, imprudent investments. The essential problem is that the deposit insurance effectively allows the bank to avoid the full loss associated with failed projects. Consequently, the bank may devote insufficient effort to avoiding risky projects that, despite some potential for a large financial payoff, may well fail and thereby generate large financial losses. To illustrate this point, consider the following simple setting. Suppose a bank has $6 million of deposits that it can employ to invest in projects. Two projects are available. Project S (for "safe") is certain to deliver a 10 percent return to the bank.
Therefore, if the bank invests the available $6 million in project S, the bank knows that it will end up with $6.6 million. Project R (for "risky") offers the prospect of a higher return but entails considerable risk. Specifically, an investment in project R generates either a 50 percent gain, no gain, or a

100 percent loss. Each of these outcomes is equally likely, and so occurs with probability 1/3. If the bank bears the full consequences of its investments, it will prefer to invest in project S rather than project R. This is the case because the bank’s expected payoff is $6.6 million if it invests in project S. In contrast, if the bank invests its $6 million in project R, its expected payoff is
(1/3)($9 million) + (1/3)($6 million) + (1/3)($0) = $5 million.  (18.1)
The calculation in equation 18.1 reflects the fact that $9 million constitutes a 50 percent gain on a $6 million investment, $6 million represents no gain on this investment, and $0 reflects the complete loss of the investment. Therefore, if the bank bears the full consequences of its investments, the bank will anticipate a larger financial payoff from investing in project S than from investing in project R. Now suppose that the bank does not bear the full consequences of its investments. In particular, suppose that if project R fails, the bank merely suffers no gain on its investment, rather than losing the entire investment. In this case, the bank’s expected payoff from investing its $6 million in the project is
(1/3)($9 million) + (1/3)($6 million) + (1/3)($6 million) = $7 million.  (18.2)
Equation 18.2 indicates that when it is insured against losses, the bank anticipates a higher expected payoff from project R ($7 million) than from project S ($6.6 million). Consequently, the bank is likely to invest its deposits in project R rather than in project S, even though project R has a lower expected payoff in the absence of insurance against losses. More generally, to the extent that government-sponsored deposit insurance protects banks against the full downside risk associated with their investments, banks may have an incentive to invest in relatively risky projects, including projects with relatively low expected payoffs. Consequently, deposit insurance can limit the extent to which the most promising projects are financed, thereby potentially limiting economic growth. Additional regulation can be undertaken to limit this undesirable consequence of government-sponsored deposit insurance. The additional regulation includes restrictions on the amount and type of investment that banks can undertake, limits on bank competition, and direct oversight of bank activities.

Restrictions on Banks' Investments

Different types of securities entail different levels of risk. For example, investment-grade bonds are highly unlikely to default, which is to say that they are very likely to make all their promised payments to investors. In contrast, "junk bonds" can entail a significantly higher probability of default.7 Junk bonds usually promise higher payments than investment-grade bonds to compensate investors for their higher risk. Thus, a bank that is insured against the full downside risk of its investments may be inclined to invest in junk bonds rather than investment-grade bonds. To avoid undue risk taking, a bank's investment in risky securities might be explicitly limited. For example, a bank might be required to devote a large fraction (or perhaps all) of its investment in bonds to investment-grade bonds. Regulation might also limit the fraction of its investment portfolio that the bank can devote to projects in a single sector of the economy (the housing sector, for example). By requiring substantial diversification of investments, regulation can reduce the likelihood that a bank will experience financial losses that would limit its ability to satisfy the liquidity needs of its depositors.
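The comparison in equations 18.1 and 18.2 can be restated in a few lines of code. The sketch below simply reproduces the $6 million example above; the function and variable names are illustrative and not drawn from any actual bank data.

# Expected payoffs for the safe and risky projects, with and without
# deposit insurance (all figures in millions of dollars).

def expected_payoff(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

deposits = 6.0  # funds available to invest

# Project S: a certain 10 percent return.
safe = [(1.0, deposits * 1.10)]

# Project R: 50 percent gain, no gain, or total loss, each with probability 1/3.
risky = [(1/3, deposits * 1.50), (1/3, deposits), (1/3, 0.0)]

# With insurance against losses, the worst outcome is capped at the initial
# investment (the bank "merely suffers no gain" if the project fails).
risky_insured = [(p, max(x, deposits)) for p, x in risky]

print(expected_payoff(safe))           # 6.6 million (project S)
print(expected_payoff(risky))          # 5.0 million (equation 18.1)
print(expected_payoff(risky_insured))  # 7.0 million (equation 18.2)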

Reserve Requirements

Regulations that require banks to hold a substantial portion of deposits in cash (or in "cash equivalents," which are securities that can be converted to cash with virtually no delay and no risk of changing market value) also can help ensure that banks are always able to meet the liquidity needs of their depositors. The larger the fraction of deposits a bank is required to hold in cash, the more likely it is that the bank will be able to continually meet its depositors' liquidity needs.

Limiting Competition among Banks

Regulation might also limit competition among banks. Intense competition among banks to attract deposits could conceivably drive banks to pursue relatively risky investments, as such investments typically offer relatively high expected returns. Two distinct types of regulations can limit such risk-taking behavior. First, the payments that banks offer for deposits might be limited. For instance, the maximum annual interest rate that a bank can pay on deposits might be limited to, say, 5 percent. Such limits on payments for deposits can reduce the intensity of competition among banks and thereby reduce their perceived need to pursue risky investments in order to finance high rates of return on deposits. Second, the geographic locations that a particular bank is authorized to serve might be limited. For instance, a bank might only be able to establish a limited number of branches. Alternatively or in addition, banks might be precluded from operating in more than one state.

Ongoing Monitoring of Bank Activities

In practice, it can be difficult to assess the amount of risk that particular investments entail. (As we will see shortly, this difficulty is widely viewed as a key cause of the Great Recession.) Consequently, ongoing monitoring of bank investments often can provide useful information about the risk inherent in a bank's investment portfolio. Stringent accounting requirements also can be helpful in this regard. Mandates to provide detailed information about the exact nature and potential risks of individual investments and the extent to which risks are correlated across investments can help uncover, and thus avoid, situations in which a bank is unlikely to be able to meet the liquidity needs of its depositors.

Historic Legislation in the Financial Sector

The U.S. government has intervened in the banking sector in all of the abovementioned respects. The interventions generally were initiated by legislation. We now briefly review the major pieces of U.S. legislation that were enacted before the Great Recession of 2007 to enhance the performance of the financial sector in general and the banking industry in particular.8

Federal Reserve Act of 1913

This piece of legislation created the Federal Reserve System, which comprises twelve regional Federal Reserve Banks. A primary function of these Reserve Banks is to serve as lenders of last resort to member banks, should the banks experience problems that might threaten their ability to meet the liquidity needs of their depositors. The legislation required all national banks to join the system. Other banks were permitted to join the system if they wished to do so.
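The reserve-requirement discussion above turns on a simple question: how likely is it that a day's withdrawal demand exceeds the cash a bank holds? The following Monte Carlo sketch uses an invented, normally distributed withdrawal-demand share and illustrative parameter values; it is not a model of any actual regulation.

import random

def shortfall_probability(reserve_ratio, mean=0.05, sd=0.03,
                          trials=100_000, seed=1):
    """Estimate the probability that one day's withdrawal demand (as a share
    of deposits) exceeds the cash held under a given reserve ratio."""
    rng = random.Random(seed)
    shortfalls = sum(1 for _ in range(trials)
                     if rng.gauss(mean, sd) > reserve_ratio)
    return shortfalls / trials

for ratio in (0.05, 0.10, 0.15, 0.20):
    print(f"reserve ratio {ratio:.0%}: estimated shortfall probability "
          f"{shortfall_probability(ratio):.3f}")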

Banking Acts of 1933 (Glass-Steagall) and 1935 These acts empowered the Federal Deposit Insurance Corporation (FDIC) to insure deposits in commercial banks for up to $2,500 per account. The insurance was designed to alleviate concerns about the solvency of individual banks and to thereby limit bank runs. The acts also imposed ceilings on the interest rates that member banks could pay on deposits. These ceilings were intended to reduce competition for deposits among banks, thereby reducing the incentives of banks to pursue risky investments in order to generate high returns that could be employed to finance high interest rates on deposits. The Glass-Steagall Act also restricted the activities in which commercial banks could engage. Specifically, commercial banks were not permitted to engage in many of the activities pursued by investment banks. These activities include investing in potentially risky securities. These restrictions were designed to counteract the incentives that deposit insurance might provide to invest in relatively risky projects (or securities). Gramm-Leach-Bliley Act of 1999 Commercial banks felt that the Glass-Steagall Act unduly limited their ability to generate substantial earnings, and so they petitioned Congress to repeal the key provisions of the act. Congress did so in the Gramm-Leach-Bliley Act of 1999. This act permits commercial banks to engage in many of the activities that investment banks routinely pursued, including investing in relatively risky securities. The act also permitted commercial banks, investment banks, securities firms, and insurance companies to consolidate.9 Depository Institutions Deregulation and Monetary Control Act of 1980 Banks also felt that ceilings on the interest rates paid for deposits unduly hindered their ability to attract deposits, particularly in light of the emergence of money market funds. Money market funds did not provide the same insurance that bank deposits enjoyed. However, the unregulated money market funds were generally regarded as safe investments, and they often offered higher rates than bank deposits, particularly during the 1970s when inflation rates were relatively high. By 1980, the commercial banks had been able to convince Congress to help “level the playing field” in the competition to attract deposits. The Depository Institutions Deregulation and Monetary Control Act of 1980 phased out interest rate ceilings on deposits in commercial banks. The Act also increased the federal deposit insurance to $100,000 per account. These pieces of legislation directed at the banking sector were supplemented by legislation aimed at the financial sector more broadly. Two prominent pieces of such legislation are the Securities and Exchange Act of 1934 and the Sarbanes-Oxley Act of 2002. Securities and Exchange Act of 1934 This Act created the Securities and Exchange Commission (SEC). 
The mission of the SEC is to protect investors, maintain efficient capital markets, and facilitate capital formation.10 The SEC pursues its mission by helping ensure that investors “have access to certain basic facts about an investment prior to buying it, and so long as they hold it.” The SEC does so by requiring public companies “to disclose meaningful financial and other information to the public,” thereby creating “a common pool of knowledge for all investors to use to judge for themselves whether to buy, sell, or hold a particular security.” The SEC notes that “only through the steady flow of timely, comprehensive, and accurate information can people make sound investment decisions.”11

The Securities and Exchange Act recognizes that banks are not the only source of investment capital. Individual investors also provide financing to companies, typically by purchasing stocks or bonds issued by the companies. The Securities and Exchange Act attempts to ensure that individual investors have ready access to pertinent information about the likely risks and rewards of investing in individual companies. Such information can encourage investment, reduce the lemons problem, and help ensure that scarce investment capital flows to the most deserving projects.

Sarbanes-Oxley Act of 2002

Despite the best intentions and actions of the SEC, incidents have arisen which suggest that public firms intent on doing so can deceive potential and actual investors. A case in point is Enron, which was once the seventh-largest company in the United States, according to its financial filings. However, it turns out that Enron's financial statements overstated its profits by nearly $600 million between 1997 and 2000.12 As Enron's "accounting irregularities" eventually came to light, the company's stock price plummeted from $90.75 per share in 2000 to less than $1 per share in November 2001, reflecting the dramatic change in perception about Enron's likely future earnings.13 The company declared bankruptcy on December 2, 2001, and several of its executives were eventually sentenced to prison for fraud. Arthur Andersen, the company that Enron hired to audit its financial reports, also was convicted of wrongdoing in the affair. Many investors lost a great deal of money, because Enron failed to provide fully transparent information about its activities and prospective earnings. In part as a reaction to the Enron debacle, Congress passed the Sarbanes-Oxley Act in 2002.14 The act created the Public Company Accounting Oversight Board to "oversee the audits of public companies in order to protect investors and the public interest by promoting informative, accurate, and independent audit reports."15 The act also required the chief executive officer and the chief financial officer of public companies to certify the accuracy of the company's financial statements.

Causes of the Great Recession

Despite all the actions that had been taken to ensure the smooth operation of the U.S. financial sector, the sector experienced a major disruption in 2007, which spread throughout the world. The primary cause of the disruption was an unprecedented decline in housing prices that began in 2006, a decline that rocked the economy and the financial sector.

Rising Housing Prices

The decline in housing prices was preceded by dramatic price increases, fueled in part by low interest rates. The Federal Reserve kept interest rates low during the 1990s to encourage economic growth.16 The low interest rates supported low mortgage rates, which enabled more individuals to afford a home. The resulting increased demand for housing caused housing prices to increase substantially. The median price of a single-family home rose by more than half, from $143,600 to $219,600, between 2000 and 2005. Price increases were much more pronounced in many cities during this period. To illustrate, the median price of a home rose by 79 percent in New York City, by 110 percent in Los Angeles, and by 127 percent in San Diego.17 The government also helped fuel the increased demand for houses by encouraging loans to "subprime" borrowers.
These are borrowers who pose a greater risk of default than do the borrowers who typically receive mortgages from banks and mortgage companies. Issuing mortgages to subprime borrowers can

impose substantial risk on lenders in part because these borrowers typically lack the resources required to make a sizable down payment on the house they wish to purchase. When a borrower makes a substantial down payment on a home purchase (say, 20 percent of the price of the home), the selling price of the home can decline by 20 percent before the lender experiences any substantial risk of financial loss should the borrower default on the mortgage. In the event of default, the lender can assume ownership of the property and sell it at the prevailing market price. The lender experiences greater financial risk when the borrower makes little or no down payment on the house. In this case, the lender will experience a financial loss in the event of default even if housing prices decline only slightly. The government wished to extend the American dream of home ownership to as many citizens as possible. Historically, home ownership constituted a sound investment, because housing prices generally increased over time. Furthermore, expanded home ownership increases the demand for goods and services associated with home ownership (e.g., furniture and appliances) and thereby promotes expanded economic growth throughout the economy. Thus, the government encouraged expanded home ownership through lending to subprime borrowers in an attempt to increase their well-being while supporting the benefits of expanded economic activity for all citizens. Government encouragement included directing the Federal National Mortgage Association (“Fannie Mae”) and the Federal Home Loan Mortgage Corporation (“Freddie Mac”) to promote lending to subprime borrowers.18 Banks and mortgage companies became more willing to issue mortgages to subprime borrowers as the prevalence of mortgage-backed securities increased. These securities are essentially packages of mortgages bundled into a single marketable security. A bundle of mortgages was believed to entail less risk than individual mortgages because, historically, the United States had experienced few incidents of major nationwide declines in housing prices. Therefore, even if housing prices declined and mortgage defaults increased in one region of the country, corresponding defaults were unlikely in other regions. Consequently, a broad, diversified bundle of mortgages was thought to entail little risk of a major decline in value. Rather than bear the full risk associated with the mortgages they issued, banks and mortgage companies could effectively sell the individual mortgages they issued to entities that would bundle mortgages from many different lenders to create mortgage-backed securities and then sell these securities to investors. As they became better able to avoid much of the risk associated with issuing mortgages to subprime borrowers, banks and mortgage companies became more willing to issue the mortgages. Because the widespread creation and sale of mortgage-backed securities was a relatively new phenomenon, considerable uncertainty prevailed regarding the risk posed by the securities. Based in part on the fairly steady historical rise in housing prices and low default rates in the United States, securities rating agencies (such as Moody’s and Standard & Poor’s) generally certified mortgage-backed securities as being high-quality investments with limited risk. The entities that created and sold the mortgage-backed securities typically bore none of the risk associated with the securities. 
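The diversification argument described above can be illustrated with a small and purely hypothetical simulation. The default probabilities and the chance of a "bad national year" below are invented; the point is only that pooling nearly eliminates tail risk when defaults are independent across loans, but not when a common nationwide shock raises every loan's default probability at once.

import random

def share_defaulting(n_loans, p_normal, p_bad, prob_bad_year, rng):
    """Fraction of a mortgage pool that defaults in one simulated year."""
    p = p_bad if rng.random() < prob_bad_year else p_normal
    return sum(rng.random() < p for _ in range(n_loans)) / n_loans

def tail_risk(prob_bad_year, threshold=0.10, trials=5_000, seed=7):
    """Probability that more than `threshold` of the pool defaults."""
    rng = random.Random(seed)
    bad_years = sum(
        share_defaulting(1_000, p_normal=0.02, p_bad=0.20,
                         prob_bad_year=prob_bad_year, rng=rng) > threshold
        for _ in range(trials))
    return bad_years / trials

print(tail_risk(prob_bad_year=0.0))   # independent regional defaults: essentially zero
print(tail_risk(prob_bad_year=0.05))  # common national shock: roughly 0.05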
The issuers received compensation that depended only on the number of securities sold, not on the underlying quality or the ultimate performance of the securities.19 With mortgages widely available and housing prices continuing to rise, individuals rushed to buy houses before the prices increased even further. The expanding demand for houses pushed their prices higher, which encouraged expanded construction of new homes. Between 2001 and 2005, approximately 40 percent of net job creation in the private sector of the U.S. economy occurred in housing-related sectors.20 The Bubble Bursts

Dramatic changes began in 2006. The unprecedented rise in housing prices led to the realization that further increases were unlikely. Reduced fear that housing prices would increase substantially in the near future limited the urge to buy houses quickly before their prices increased even further. Consequently, the demand for houses declined. Meanwhile, a massive supply of new houses prevailed, as homebuilders had undertaken extensive construction in response to projections of expanding demand. The combination of substantial supply of houses and diminishing demand for them caused housing prices to decline substantially. As figure 18.1 reveals, housing prices dropped on average by more than 30 percent between 2006 and 2009.

Figure 18.1 U.S. Housing Prices, 2002–2009 Source: Frederic S. Mishkin, The Economics of Money, Banking and Financial Markets, 11th ed. (Boston: Pearson, 2016, p. 276).

The falling housing prices had major ramifications for the U.S. economy. Home building activity declined dramatically, which caused employment and economic activity to drop sharply. The U.S. labor market lost 8.4 million jobs in 2008 and 2009, which reflected approximately 6.1 percent of payroll employment. The decline in employment was by far the most pronounced in the country since the Great Depression.21 The reduced employment and economic activity caused mortgage defaults to increase dramatically (see figure 18.2). Homeowners who either lost their jobs or had their hours of employment reduced substantially could no longer afford to make the payments on their mortgages. Furthermore, the homeowners often were unable to sell their homes and use the proceeds to pay off their mortgages, because the value of their homes had fallen below the amounts initially borrowed to purchase the homes.
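The down-payment arithmetic described earlier in the chapter explains why these defaults imposed losses on lenders. A brief worked sketch, using invented house prices and down-payment shares:

def lender_loss_on_default(price, down_payment_share, price_decline):
    """Loss to the lender if the borrower defaults and the house is resold at
    the new, lower market price (foreclosure costs ignored)."""
    loan = price * (1 - down_payment_share)
    resale_value = price * (1 - price_decline)
    return max(loan - resale_value, 0.0)

price = 200_000
for down in (0.20, 0.05, 0.0):
    loss = lender_loss_on_default(price, down, price_decline=0.30)
    print(f"{down:.0%} down payment: lender loses ${loss:,.0f} "
          "after a 30% price decline and default")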

Figure 18.2 Default Rates on First Mortgages, 2004–2011 Source: S&P Dow Jones Indices: HousingViews (http://www.housingviews.com/2011/06/21/the-backlog-is-the-problem/mortgage-default-rate/).

In light of the increased default rates on mortgages and reduced expectations about future economic conditions, banks reduced their lending to homeowners and businesses alike. Banks and mortgage companies also increased the down payments that borrowers were required to make when purchasing a home. The more stringent borrowing requirements reduced the pool of individuals who qualified for mortgages. These changes further reduced economic activity in the country. The gross domestic product declined by more than 5 percent between 2008 and 2009, as figure 18.3 illustrates.

Figure 18.3 Real U.S. Gross Domestic Product, 2007–2011 Source: Federal Reserve Bank of St. Louis, Economic Research (https://fred.stlouisfed.org/series/GDPC1).

Crisis in the Financial Sector

As defaults reduced the payoffs generated by mortgage-backed securities, the prices of the securities began to decline. The prices dropped further as holders of the securities rushed to sell them before anticipated further price declines materialized. The holders of mortgage-backed securities suffered substantial declines in their financial net worth. These entities—which included large investment banks—also experienced pronounced reductions in their ability to borrow funds to finance their ongoing operations. This is the case because investment banks (and other financial institutions) commonly employed the mortgage-backed securities they owned as collateral for the loans they secured. As the value of the securities declined, creditors demanded additional collateral before new loans would be issued. Many commercial banks were forced to sell other securities they owned to meet their short-term obligations. The widespread sale of securities depressed their prices, leading to substantial declines in the net worth of large commercial banks. Large insurance companies that had insured the mortgage-backed securities also experienced precipitous drops in net worth as the securities fell in value. The upshot of the plummeting values of mortgage-backed securities was a substantial tightening of credit standards and reduced lending throughout the economy. As lending declined, so did economic activity. Fears of further declines in economic activity raised concerns about ever-increasing default rates on loans, which led to further declines in lending and in economic activity. The Federal Reserve ("the Fed") was able to make funds available to its member banks for lending. However, the Fed had limited ability to lend funds quickly to nonmember financial institutions, including investment banks and insurance companies. The Fed also had limited authority to monitor and direct the activities of these financial institutions, and so had little power to prevent them from undertaking the actions that seriously jeopardized their financial health in 2007 and 2008.22 The Fed had no legal mandate to prevent the bankruptcy of large investment banks and insurance companies. However, the Fed's members feared that a collapse of such large, financially powerful entities would dramatically reduce economic activity and thereby jeopardize the financial health of the banking sector and the economy more generally. Consequently, the Fed provided loans to some entities, brokered the merger of other financial institutions, purchased the assets of some failing firms at above-market prices, and bailed out Fannie Mae and Freddie Mac. In total, the Fed expended more than $700 billion in its efforts to stabilize the U.S. financial sector.23

Regulatory Reform: The Dodd-Frank Act of 2010

The severe impact of the Great Recession on the U.S. economy and the staggering cost of the Fed's efforts to prevent further economic losses led to calls for major reforms to prevent similar occurrences in the future. The calls culminated in the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010. The stated purpose of the act is to "promote the financial stability of the United States by improving accountability and transparency in the financial system, to end 'too big to fail,' to protect the American taxpayer by ending bailouts, [and] to protect consumers from abusive financial services practices."24 The act

introduces many reforms, including the following six.25

Limiting Systemic Risk to Avoid Future Financial Shocks

The Dodd-Frank Act creates a Financial Stability Oversight Council, which is charged with identifying and responding to emerging risks throughout the financial system. The council is also charged with making recommendations to the Federal Reserve on appropriate restrictions to impose on the activities of financial companies as they grow in size and complexity. The council can also require a nonbank financial company to be regulated by the Federal Reserve if the council believes the failure of the company would pose a risk to the financial stability of the United States. In addition, the act creates a new Office of Financial Research that is staffed with highly trained specialists to support the council's work by collecting financial data and conducting economic analysis.

Banking Reform

The Dodd-Frank Act establishes a uniform standard for mortgages with which all lenders must comply. The standard is designed to help ensure that borrowers can repay the loans they receive, thereby avoiding future incidents of widespread defaults on mortgages. The act also implements the "Volcker Rule," which restores a key restriction in the spirit of the Glass-Steagall Act (whose relevant provisions the Gramm-Leach-Bliley Act had repealed). Specifically, commercial banks are no longer permitted to buy and sell stocks for their own account or to invest in or sponsor hedge funds or private equity funds. This provision of the Dodd-Frank Act is intended to limit undue risk taking by commercial banks. The act also establishes a permanent increase in federal insurance for deposits in banks, thrifts, and credit unions. Accounts are insured for amounts up to $250,000.

Ending Bailouts of Firms That Are "Too Big to Fail"

The Dodd-Frank Act prohibits the Federal Reserve from bailing out individual firms in the financial sector. Any lending program that the Fed institutes must be broad-based rather than directed toward individual firms. The act also requires large, complex financial companies to periodically submit plans for their rapid and orderly shutdown should the company fail financially. The plans are designed to help regulators understand the structure of the companies and to serve as a roadmap for terminating their activities, if necessary, without disrupting the overall economy unduly. In addition, the act mandates that taxpayers bear no costs for liquidating large financial companies that fail. The government is designated as the first entity to be paid from funds derived from the sale of the company's remaining assets. Any excess of government costs over the sale proceeds will be repaid by recovering earlier payments to creditors.

Reducing Risks Posed by Securities

The Dodd-Frank Act requires companies that sell products like mortgage-backed securities to retain at least 5 percent of the risk associated with the securities. This provision is intended to ensure that entities that create and sell new securities have a financial incentive to ensure the quality and safety of the securities. The act also requires companies that sell such securities to carefully analyze the quality of the assets that underlie the securities (for example, the mortgages that underlie mortgage-backed securities) and to report their findings to potential buyers of the securities.

New Requirements and Oversight of Securities Rating Agencies

The Dodd-Frank Act creates an Office of Credit Ratings at the SEC. The new office has the power to fine securities rating agencies for failing to perform their duties adequately. The act also empowers the SEC to withdraw the certification of a rating agency that issues inappropriate ratings repeatedly over time. In addition, the act requires the agencies to disclose the methodology they employ to rate securities and exposes the agencies to financial liability if they fail to conduct a reasonable investigation of relevant facts. The act directs the SEC to limit the ability of issuers of securities to select the rating agency that they believe will provide the highest rating of their securities.

Increased Transparency and Accountability in Securities Markets

The Dodd-Frank Act provides the SEC and the Commodity Futures Trading Commission with the authority to regulate assets like mortgage-backed securities. The act also mandates the collection and dissemination of data that can improve market transparency and help regulators identify and respond to emerging risks. In addition, the act requires hedge funds and private equity advisors to register with the SEC as investment advisers and to provide information about their trades and portfolios that the SEC deems necessary to assess systemic risk.

In summary, the Dodd-Frank Act attempts to improve the performance of both the commercial banking industry and the "shadow banking sector," which consists of financial institutions other than commercial banks, especially those that might be considered "too big to fail." The act seeks to improve the ability of regulators (such as the SEC) to limit risks to the U.S. financial sector, to identify as early as possible any risks that do emerge, and to limit the damage caused by risks that are not fully mitigated. Time will tell if the Dodd-Frank Act is successful in achieving its lofty and important goals.

Summary

Regulation is justified in many industries by the presence of natural monopoly. In contrast, the primary justification for regulation in the financial sector is limited information. In the presence of limited information about the relevant risks and likely returns from different investments, financial resources may not flow to their highest-value use. Regulations that attempt to ensure the dissemination of accurate information about the likely risks and returns of investments can help resolve this problem. Regulations that guard against undesirable effects of limited information (such as bank runs) also can promote the efficient allocation of financial resources. However, such regulations can induce undesirable behavior (such as excessive risk taking) that may require additional regulation to control. Thus, the design of welfare-enhancing regulation in the financial sector is a subtle and complex exercise that is likely to require ongoing attention in the coming years.

Questions and Problems

1. Why were bank runs fairly common around the time of the Great Depression? How might deposit insurance reduce the likelihood of bank runs? What are the potential drawbacks to deposit insurance?

2. Can bank runs occur in the United States any longer, now that deposits are insured up to $250,000?

3. A bank can invest in a (relatively) safe project or a risky project. Each project requires an initial investment of 100. The safe project generates a payoff of 110 with probability 0.9 and a payoff of 90 with probability 0.1.
The risky project generates a payoff of 200 with probability 0.5 and a payoff of 0 with probability 0.5.

a. Which project has a higher expected net payoff?

b. If the bank were insured against financial losses on any project it invests in (so the bank would always receive at least its initial investment of 100), would the bank invest in the safe project or the risky project?

c. If the bank had to pay for insurance against financial loss, what is the most it would pay for the insurance on the risky project?

4. How, if at all, did securities rating agencies contribute to the financial crisis of 2007? Might the manner in which securities rating agencies are compensated for their services have affected the agencies' activities and performance prior to the crisis?

5. Some contend that provisions of the Dodd-Frank Act extend government regulation unduly by constraining the activities of large financial institutions that are not commercial banks. Do you agree with this assessment? What is the primary rationale for regulating activities in the "shadow banking sector"?

6. The Dodd-Frank Act implements regulations that are intended to reduce the likelihood that large financial institutions will fail. Some suggest that, instead, the act should simply have stated that the government will not rescue financial institutions that fail. Do you agree that this would be a preferable policy? Why or why not?

Notes

1. Much of the discussion in this chapter reflects issues discussed in greater detail in Frederic S. Mishkin, The Economics of Money, Banking, and Financial Markets, 11th ed. (Boston, MA: Pearson, 2016).

2. Even well-established firms often operate with substantial debt. To illustrate, the online retailer Amazon.com had more than $11 billion in debt as of June 2016. Wall Street Journal (online), "Financials: Amazon.com Inc." (http://quotes.wsj.com/AMZN/financials).

3. George A. Akerlof, "The Market for 'Lemons': Quality Uncertainty and the Market Mechanism," Quarterly Journal of Economics 84 (August 1970): 488−500. The term "lemons" here refers to defective or low-quality projects, in analogy to a used automobile that does not function properly.

4. The projects a bank can finance include home purchases by individual consumers. The purchase of a home can increase economic activity by promoting home construction, home repair, and the purchase of such household items as furniture and appliances.

5. An impending downturn in the economy can restrict employment and wage growth, thus limiting the sales of even highly valued new products and services. Consequently, the impending downturn can cause the failure of projects that would be highly successful under better economic conditions.

6. "Bank Run," History (http://www.history.com/topics/bank-run).

7. Junk bonds are those rated BB or lower by the securities rating firm Standard & Poor's, or rated Ba or lower by the rating firm Moody's. Investment-grade bonds are those rated BBB− or higher by Standard & Poor's or rated Baa3 or higher by Moody's.

8. Much of the ensuing discussion is drawn from https://en.wikipedia.org/wiki/History_of_banking_in_the_United_States.

9. See https://en.wikipedia.org/wiki/Gramm%E2%80%93Leach%E2%80%93Bliley_Act.

10. See the SEC website: https://www.sec.gov/about/whatwedo.shtml.

11. Ibid.

12. The Economist, "Enron, The Real Scandal: America's Capital Markets Are Not the Paragons They Were Cracked Up to Be," January 7, 2002 (available at http://www.economist.com/node/940091).

13. Wikipedia, "The Enron Scandal" (https://en.wikipedia.org/wiki/Enron_scandal).

14. An even larger company, WorldCom, declared bankruptcy in July 2002. The company collapsed after it was discovered that it had engaged in even more pronounced exaggeration of its profit than had Enron. Additional details of WorldCom's deceptive accounting practices are provided in Simon Romero and Riva D. Atlas, "WorldCom's Collapse: The Overview," The New York Times, July 22, 2002 (available at http://www.nytimes.com/2002/07/22/us/worldcom-s-collapse-theoverview-worldcom-files-for-bankruptcy-largest-us-case.html).

15. Public Company Accounting Oversight Board website, http://pcaobus.org/About/pages/default.aspx.

16. John Weinberg, "The Great Recession and Its Aftermath," Federal Reserve History, November 22, 2013 (http://www.federalreservehistory.org/Period/Essay/15). Low interest rates encourage borrowing to expand business operations.

17. Thomas Sowell, The Housing Boom and Bust (New York: Basic Books, 2009), p. 1.

18. Fannie Mae and Freddie Mac are government-sponsored entities that were established to help local banks finance mortgages.

19. Ben S. Bernanke, "Causes of the Recent Financial and Economic Crisis," Statement before the Financial Crisis Inquiry Commission, Washington, DC, September 2, 2010 (http://www.federalreserve.gov/newsevents/testimony/bernanke20100902a.htm).

20. Weinberg, "The Great Recession and Its Aftermath."

21. Lawrence Mishel, Josh Bivens, Elise Gould, and Heidi Shierholz, "The Great Recession," in The State of Working America, 12th ed. (Washington, DC, and Ithaca, NY: Economic Policy Institute and Cornell University Press, 2012). Available at http://stateofworkingamerica.org/great-recession.

22. Bernanke, "Causes of the Recent Financial and Economic Crisis."

23. For additional details, see Mishkin, The Economics of Money, Banking, and Financial Markets, chapter 12; and Matthew Ericson, Elaine He, and Amy Schoenfeld, "Tracking the $700 Billion Bailout," New York Times (available at https://static01.nyt.com/packages/html/national/200904_CREDITCRISIS/recipients.html).

24. H.R. 4173 (available at https://www.sec.gov/about/laws/wallstreetreform-cpa.pdf).

25. The ensuing discussion reflects information from the Federal Deposit Insurance Corporation, "Staff Summary of Certain Provisions of the Dodd-Frank Wall Street Reform and Consumer Protection Act (formerly H.R. 4173/S. 3217)" (Washington, DC: FDIC, September 10, 2010). Available at https://www.fdic.gov/regulations/reform/summary.pdf.

III HEALTH, SAFETY, AND ENVIRONMENTAL REGULATION

19 Introduction to Health, Safety, and Environmental Regulation

The Emergence of Health, Safety, and Environmental Regulation

As the review of regulatory costs in chapter 2 indicated, environmental and other social regulations have become an increasingly prominent part of the regulatory mix. Indeed, the costs of these regulatory agencies now exceed those associated with economic regulations because of both the recent growth in social regulation and the deregulation of many economically regulated industries. If this book had been written before 1970, it likely would have included no discussion at all of health, safety, and environmental regulation. Beginning in the early part of the twentieth century, there had been, of course, several efforts in the health, safety, and environmental regulation areas, but for the most part, these efforts were limited to the safety of food and drugs. Consumer advocates such as Ralph Nader, who launched a major crusade for auto safety, became prominent influences on the national agenda in the mid-1960s. The rising public concerns with safety and environmental quality transformed national policy. The decade of the 1970s marked the emergence of almost every major risk or environmental regulation agency. The U.S. Environmental Protection Agency, the National Highway Traffic Safety Administration, the Consumer Product Safety Commission, the Occupational Safety and Health Administration, and the Nuclear Regulatory Commission all began operation in that decade. Although in some cases these agencies absorbed functions that had been undertaken at a more modest level by other agencies, the advent of these regulatory agencies marked more than a consolidation of functions; each agency also had its own legislative mandate that gave it substantial control over the direction of regulatory policies in its area. The early years of operation of these regulatory agencies were fairly controversial. Expectations were unrealistically high, and for the most part, these expectations regarding potential gains that would be achieved were not fulfilled. Members of Congress and engineers often predicted dramatic safety gains achievable at little cost, but these predictions often ignored the role played by economic incentives and decisions by enterprises and individuals. Firms could ignore regulations, and indeed had an incentive to ignore them when compliance was expensive and sanctions were low. Consumers might have chosen not to take certain precautions, such as wearing seat belts. In addition, there was substantial resistance to the increased expansion of regulatory control over decisions that were formerly left to business. Many ill-designed regulations became the object of ridicule and vehement business opposition. Over time, these programs have become a more generally accepted component of federal regulatory efforts. In addition, due to the influence of regulatory oversight efforts and improved regulatory policies, regulatory agencies now strike a better balance between the benefits achieved by these policies and the costs they impose. Unlike for economic regulation, however, the economics literature has not advocated deregulation for

health, safety, and environmental regulation. Indeed, the recent emergence of concerns such as global climate change has increased the extent of this form of regulation. There is little doubt that actual market failures exist in the context of social regulation. In many cases, such as air pollution, no markets exist at all for the commodity being produced, and there is no market-based compensation of the victims of pollution. Markets could never suffice in instances such as this, so that our objective in the social regulation area will always be sounder regulation rather than no regulation whatsoever. Consider, for example, the current regulatory agendas of the various agencies. The U.S. Environmental Protection Agency (EPA) focuses on dealing with the major regulatory problems that will have long-run ecological consequences. Chief among these is that of climate change due to the accumulation of greenhouse gases. The scope of this problem is so great that any effective policy will ultimately require global solutions. While the hazards of climate change loom largest on the environmental agenda, there are other long-term concerns as well, such as assurance of an adequate quality water supply. The EPA is placing increased emphasis on environmental problems that will define the well-being of the world throughout the twenty-first century and beyond. None of these problems can be adequately addressed by free markets alone. At the Occupational Safety and Health Administration (OSHA), the task is to devise more effective enforcement mechanisms; to update and make more performance-oriented the technological standards for workplace design; and to address in a more comprehensive manner the health hazards imposed by workplace conditions, particularly for immigrant workers who may face language barriers. Finally, in the product safety area, regulation of automobiles and other products is continuing as before, but the emergence of autonomous vehicles is introducing new elements. The increased role of product liability generates multibillion dollar financial stakes, which has highlighted the importance of coordinating the various social institutions at work in promoting health and safety. The main question in this area is one over which no one institution has authority—how the responsibilities should be divided among federal regulatory agencies, the market, and the courts. Risk in Perspective Before addressing specific aspects of the efforts of different regulatory agencies, it is helpful to put the scale of the problems they are addressing in perspective. Reducing mortality risks is a prominent regulatory focus, but it does not encompass all risks of death. Table 19.1 lists the various causes of death in the U.S. population.1 The principal focus of the efforts of regulatory agencies is on accidents. All accidental deaths (unintentional injuries) make up only 5 percent of the total death rate, so that even a fully effective safety regulation effort would necessarily play a small part in reducing overall mortality rates. In the accident category, just over one-fourth of all accidental deaths are due to motor vehicle accidents, which are a major component not only of nonwork accidents but also of work accidents. A decade earlier, motor vehicles were much more dangerous, as they accounted for about half of all accidental deaths. Motor vehicle deaths are now exceeded by deaths from poisonings, which affect children as well as adults who accidentally overdose. 
Child poisonings are subject to safety cap regulations, and there is now increased vigilance with respect to the rising problem of opioid misuse by adults. The third leading mortality component is falls, which are concentrated almost entirely among the population over the age of seventy-five. These risks for the most part occur outside of the purview of government regulation. Similarly, drownings also tend to be age-specific, as a leading age group for drowning rates consists of children aged zero to four years. Boating and swimming accidents are monitored by various parties, such as lifeguards. Moreover, there are government

regulations of swimming pool slides, for example, that will influence drowning risks. However, for the most part, these risks also fall outside the domain of regulatory policies. Most, but not all, accidents are potentially under the influence of government regulation. Even a fully effective regulatory effort could not be expected to eliminate accidents, however, because individual behavior often plays a critical role. Studies assigning responsibility for job accidents, for example, generally attribute most of these accidents at least partly to worker behavior.

Table 19.1 Causes of Death in the United States, 2013

Cause | Death Rate per 100,000 Population
All causes | 821.5
Heart disease | 193.3
Cancer | 185.0
Chronic lower respiratory diseases | 47.2
Unintentional injuries | 41.3
  Poisoning | 12.3
  Motor vehicle | 11.2
  Falls | 9.6
  Choking | 1.5
  Drowning | 1.1
  All other unintentional injuries | 5.7
Stroke (cerebrovascular disease) | 40.8
Alzheimer's disease | 26.8
Diabetes mellitus | 23.9
Influenza and pneumonia | 18.0
Nephritis and nephrosis | 14.9
Suicide | 13.0

Source: National Safety Council, Injury Facts, 2016 ed. (Itasca, IL: National Safety Council, 2016), p. 14. Reprinted by permission of the National Safety Council.
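The death rates in table 19.1 translate directly into annual probabilities and shares of total mortality; the short calculation below reproduces, for example, the roughly 5 percent accident share cited above. The rates are copied from the table, and everything else is simple arithmetic.

rates_per_100k = {
    "All causes": 821.5,
    "Heart disease": 193.3,
    "Cancer": 185.0,
    "Unintentional injuries": 41.3,
    "Motor vehicle": 11.2,
}

total = rates_per_100k["All causes"]
for cause, rate in rates_per_100k.items():
    annual_probability = rate / 100_000   # e.g., 0.000413 for all accidents
    share_of_deaths = rate / total        # e.g., about 5 percent for accidents
    print(f"{cause:24s} p = {annual_probability:.6f}  share = {share_of_deaths:.1%}")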

In contrast, many of the leading causes of death are more in the domain of individual behavior rather than regulatory action. The main determinants of heart disease are genetic factors as well as the individual’s diet and exercise. Similarly, many cancer experts believe that cancer risks are largely due to diet, smoking, genetics, and other person-specific factors. A longstanding concern in the health economics literature has been the importance of individual consumption decisions (such as diet, exercise, and smoking) on health status. Environmental exposures generally rank low among the total determinants of cancer. Strokes likewise are not generally the result of risk exposures subject to government regulation, although air pollution and cigarette smoking are prominent exceptions that will affect the propensity to have a stroke. Many causes of death that are less prominent than accidents, ranging from pulmonary disease to homicides, are also matters over which there is much public debate but little regulatory control. Measuring Mortality Risks The statistics in table 19.1 indicate how various causes of death affect the probability that one will die from a particular cause. However, societal concern with these various deaths may vary, depending on the length of life lost. Although risks to all lives merit serious societal concern, reducing infant deaths generates larger gains in life expectancy than do new experimental drugs that may extend terminal cancer patients’ lives by several months. Table 19.2 presents a ranking of twelve leading causes of death based both on the probability of death and on lost life expectancy. The first set of statistics pertains to the probability of death,

and in this ranking, cardiovascular diseases rank first, cancer ranks second, and “all accidents” rank fifth. Table 19.2 Rank Orders of Mortality Risks for Major Conditions
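Table 19.2's rankings rest on two quantities: the lost life expectancy (LLE) conditional on dying from a cause, discounted to reflect any lag between exposure and death, and the expected years of life lost, E(YLL), which multiplies LLE by the probability of death. The sketch below illustrates the arithmetic with invented numbers and an assumed 3 percent discount rate, not the table's actual values.

def discounted_life_years(years_lost, latency, rate=0.03):
    """Present value of `years_lost` life years beginning `latency` years
    after exposure, discounted at `rate` per year."""
    return sum(1 / (1 + rate) ** (latency + t) for t in range(int(years_lost)))

def expected_yll(prob_death, years_lost, latency, rate=0.03):
    """E(YLL): probability of death times discounted lost life expectancy."""
    return prob_death * discounted_life_years(years_lost, latency, rate)

# An immediate accident risk versus a disease with a 20-year latency:
print(expected_yll(prob_death=0.0004, years_lost=35, latency=0))
print(expected_yll(prob_death=0.0020, years_lost=15, latency=20))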

Matters are somewhat different if one looks at the extent of life lost. The second set of columns in table 19.2 indicates the rank order for lost life expectancy. These statistics indicate the length of life lost conditional on dying from a particular disease. Moreover, the estimates take into account the role of discounting, as the lost life expectancy is the discounted number of life years lost, recognizing the role of pertinent time lags between the time of exposure to the risk and the time of death. Focusing on lost life expectancy (LLE in the table), cardiovascular disease drops to twelfth, cancer drops to eighth, and the category of “all accidents” ranks sixth in importance. The final set of statistics in table 19.2 multiplies the probability of death by the lost life expectancy to obtain the expected life years lost, denoted by E(YLL) in the table. These rankings move auto accidents and “all accidents” much higher on the mortality risk scale and substantially narrow the absolute difference in the risk between prominent disease categories and accidents. Because government regulations extend lives but do not confer immortality, it is appropriate to consider the length of life at risk, and not simply the probability of death. How this should be done and what differential weight should be placed on risks with different length of life exposures remain among the most controversial policy issues. Infeasibility of a No-Risk Society These patterns do not suggest that regulation is unimportant, even though a fully successful regulatory

program will not drastically alter death risks. What they do indicate is that our expectations regarding the overall effects of regulation should be tempered by an appreciation of their likely impact, even under a best-case scenario. Moreover, it should also be recognized that many of the most prominent sources of death risks are matters of individual choice, such as diet. Therefore, it seems inappropriate to adopt an extremist approach to government regulation in which we pursue a no-risk strategy while at the same time permitting risks of greater consequence to be incurred through individual action. Moreover, we need to place greater emphasis on such policies as hazard warnings and nutrition information that will foster better individual risk-averting decisions. An interesting perspective on the variety of risks that we face is provided by the data in table 19.3. That table lists a variety of activities that will increase one's annual death risk by one chance in 1 million. This group includes some of the most highly regulated risks: the accident risk from living within five miles of a nuclear reactor for fifty years and the risk from smoking 1.4 cigarettes are tantamount to the risks one would face by riding ten miles on a bicycle, eating forty tablespoons of peanut butter, drinking Miami drinking water for one year, or eating a hundred charcoal-broiled steaks.

Table 19.3 Risks That Increase the Annual Death Risk by One in 1 Million

Activity | Cause of Death
Smoking 1.4 cigarettes | Cancer, heart disease
Drinking 0.5 liter of wine | Cirrhosis of the liver
Spending 1 hour in a coal mine | Black lung disease
Spending 3 hours in a coal mine | Accident
Living 2 days in New York or Boston | Air pollution
Traveling 6 minutes by canoe | Accident
Traveling 10 miles by bicycle | Accident
Traveling 150 miles by car | Accident
Flying 1,000 miles by jet | Accident
Flying 6,000 miles by jet | Cancer caused by cosmic radiation
Living 2 months in Denver | Cancer caused by cosmic radiation
Living 2 months in average stone or brick building | Cancer caused by natural radioactivity
One chest X-ray taken in a good hospital | Cancer caused by radiation
Living 2 months with a cigarette smoker | Cancer, heart disease
Eating 40 tablespoons of peanut butter | Liver cancer caused by aflatoxin B
Drinking Miami drinking water for 1 year | Cancer caused by chloroform
Drinking 30 12-oz. cans of diet soda | Cancer caused by saccharin
Living 5 years at site boundary of a nuclear power plant in the open | Cancer caused by radiation
Drinking 1,000 24-oz. soft drinks from banned plastic bottles | Cancer from acrylonitrile monomer
Living 20 years near PVC plant | Cancer caused by vinyl chloride (1976 standard)
Living 150 years within 20 miles of a nuclear power plant | Cancer caused by radiation
Eating 100 charcoal-broiled steaks | Cancer from benzopyrene
Living within 5 miles of a nuclear reactor for 50 years (risk of accident) | Cancer caused by radiation

Source: Richard Wilson, “Analyzing the Daily Risks of Life,” Technology Review 81 (February 1979): 40–46. Reprinted with permission from Technology Review, copyright 1979.

The risks that we have chosen to regulate are not altogether different from those that we incur daily as part of our normal existence. This similarity does not mean that the government should abandon regulation or that we should stop drinking chlorinated tap water. But it does suggest that we will not have a risk-free environment no matter what regulations we pursue. Moreover, given the many risk-taking choices we make daily within the domain in which we institute regulations, it would not be appropriate to require that these regulations produce a risk-free environment.

The basic issue is one of balance. Society should pursue regulations that are in our best interests, taking into account both the beneficial aspects of regulations and the costs that they impose. In some cases the appropriate regulations may be stringent; in others, our best interests may call for no regulation at all. The mere presence of a risk within the domain of a regulatory agency is not in and of itself a reason to institute a regulation. The key ingredient is that some market failure must warrant government intervention. Moreover, a regulatory policy must have some ability to influence the risk outcome of interest.

Homeland Security

The necessity of making difficult tradeoffs is especially pertinent with respect to homeland security. After the September 11, 2001, attack on the World Trade Center and the Pentagon, public support for greater vigilance to combat terrorism increased. Concerns with terrorist attacks continued after the Boston Marathon bombing, the rise of ISIS, and the spread of terrorist attacks throughout the world. Reducing terrorism risks is certainly not a controversial objective. But how we reduce these risks may entail other costs and compromise other fundamental concerns. What is most notable about this policy context is that many policy measures require that we sacrifice civil liberties to reduce the risks of terrorism. Two objectives that some claim can never be compromised, civil liberties and safety, were at odds. Because the perceived terrorism threat rose above the level thought to prevail before 9/11 and the subsequent terrorist attacks, it became efficient to strike a different balance between civil liberties and safety than before, with a greater level of precaution and fewer civil liberties. Figure 19.1 illustrates the policy choice. The pre-9/11 curve indicates the achievable combinations of civil liberties and expected terrorism losses. If the government increases the stringency of airport screenings and surveillance, it can reduce the expected terrorism losses. Before the 9/11 attack, the optimal policy choice was at point A, with a high level of civil liberties and a low level of terrorism losses. After the attack, society perceived a greater level of risk, as shown on the post-9/11 curve in figure 19.1. If the level of civil liberties remained unchanged, then the terrorism losses would soar to point B. As a result, it becomes desirable to increase the vigilance of airport screening and other security measures to get to some new optimal tradeoff at point C.

Figure 19.1 Civil Liberties/Terrorism Loss Tradeoff
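The logic of points A, B, and C in figure 19.1 can be illustrated numerically. The frontier and preference functions below are invented for illustration (the chapter does not specify functional forms); the sketch simply shows that a higher perceived threat shifts the cost-minimizing choice toward fewer civil liberties.

def terrorism_losses(liberties, threat):
    """Assumed frontier: expected losses rise with the perceived threat and
    with the level of civil liberties retained (less screening)."""
    return threat * liberties ** 2

def social_cost(liberties, threat, value_of_liberties=1.0):
    """Total cost: expected terrorism losses plus the value lost when civil
    liberties fall below their maximum level of 1."""
    return terrorism_losses(liberties, threat) + value_of_liberties * (1 - liberties)

def best_choice(threat, steps=1_000):
    grid = [i / steps for i in range(steps + 1)]
    return min(grid, key=lambda c: social_cost(c, threat))

print(best_choice(threat=0.5))  # low perceived threat: keep full civil liberties (point A)
print(best_choice(threat=2.0))  # higher perceived threat: accept fewer liberties (point C)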

How far down on the post-9/11 curve we should go depends on two components of the policy choice problem. First, what is the shape of the post-9/11 tradeoff curve? If that curve is very flat and sacrificing civil liberties buys us very little added safety, then it will not be desirable to sacrifice many of our civil liberties. Second, what are society’s preferences with respect to being willing to trade off civil liberties for safety? Thus, even if sacrificing civil liberties will reduce terrorism risks substantially, if we place relatively little value on added safety compared to civil liberties, then it will not be desirable to move too far down from point B on the tradeoff curve. The homeland security example embodies the two key components of risk policy decisions. The nature of the available tradeoff opportunities and the preferences with respect to these components jointly determine the optimal policy choice. Neither our preferences nor the opportunities alone dictate the best decision. That people may be willing to compromise civil liberties at all is illustrated by the tradeoffs they are willing to make between travel time and the intrusions of airport security. Suppose you were asked the following survey question: One way of reducing terrorism risks to plane flights is better screening of passengers. The FBI has developed a profile of the chances that a passenger is a terrorist, taking into account the person’s age, race, gender, national origin, appearance, and baggage. Airlines could either screen all passengers, leading to additional delays in line, or they could screen passengers based on the profiling. People who are singled out will have to undergo an extra ten minutes of searches. You would not be singled out for such racial profiling. Would you favor terrorist risk profiling if the alternative was for you to wait in line an extra ten minutes so that all passengers could be screened? How would you feel if the wait was thirty minutes or sixty minutes?

That people may be willing to compromise civil liberties at all is illustrated by the tradeoffs they are willing to make between travel time and the intrusions of airport security. Suppose you were asked the following survey question:

One way of reducing terrorism risks to plane flights is better screening of passengers. The FBI has developed a profile of the chances that a passenger is a terrorist, taking into account the person's age, race, gender, national origin, appearance, and baggage. Airlines could either screen all passengers, leading to additional delays in line, or they could screen passengers based on the profiling. People who are singled out will have to undergo an extra ten minutes of searches. You would not be singled out for such racial profiling. Would you favor terrorist risk profiling if the alternative were for you to wait in line an extra ten minutes so that all passengers could be screened? How would you feel if the wait were thirty minutes or sixty minutes?

Table 19.4 presents survey results that indicate how people's reluctance to support a racial profiling policy diminishes with the extent of the wait. For a ten-minute wait, a majority of respondents oppose profiling and prefer that all passengers be screened equally. However, once the wait reaches an hour in line, about three-fourths of the sample favors targeting specific groups of passengers for screening. People are willing to trade off civil liberties against travel time, a much less compelling tradeoff than civil liberties against safety.

Table 19.4 Attitude toward Use of Terrorism Risk Profiles

Delay in Line Due to    Percent Who Favor Screening    Percent Who Favor Screening
Screening Time          Affecting Others               Affecting the Respondent
10 minutes              40                             46
30 minutes              60                             66
60 minutes              74                             74

Source: W. Kip Viscusi and Richard Zeckhauser, “Recollection Bias and the Combat of Terrorism,” Journal of Legal Studies 34 (January 2005): 45. Note: The reported statistics pool survey results from 2002, 2003, and 2004.

While these results indicate a willingness to favor screening if others are affected, would respondents be equally enthusiastic about a screening process in which they, too, were targeted for screening? The final column of table 19.4 presents the respondents' support for demographic profiling if they also would be targeted for screening as part of the airline security effort. People are slightly more willing to support profiling if they would be targeted as part of the demographic profiling than to have a screening process that only affects others. The concern with civil liberties includes a substantial altruistic component and does not appear to be driven by self-interest. As in the case of screening that only affects others, the willingness to support demographic profiling of passengers affecting the respondent increases with the length of time for the screening process. The benefit derived from protecting civil liberties is bounded and has a travel time equivalent.

The 9/11 attack also raised public fears about airplane travel. Airline travel has traditionally been among the safest transportation modes, notwithstanding many people's fear of flying and the recent experience with terrorist attacks. In 2001, the year of the 9/11 attack on the World Trade Center, there were 531 U.S. civil aviation accidental deaths.2 Even with this unprecedented catastrophe, the overall fatality risk per mile was still below that of automobiles in that year. Indeed, more people died on U.S. highways in September 2001 than in airplane crashes. Since that time, airline travel has been extremely safe. From 2007 through 2013, the only years in which there were any deaths from scheduled airline flights were 2009, when forty-nine people died after a commuter plane crashed, and 2013, when five people died.3 Annual fatality rates of zero from airline crashes are now the norm, not the exception. Yet, despite this remarkable record of safety, public fears of airplane crash risks remain high. Various factors are at work. Airplane crashes are highly publicized, dramatic events involving risks out of our control. By comparison, there is very little publicity given to how much people tend to travel by plane, which is the denominator in the risk probability calculation. Airplane risks also involve small probabilities, which people are prone to overestimating.

Wealth and Risk

Our demand for greater regulation in our lives has developed in part from the increased affluence of society.4 As our society has become richer, we have begun to place greater value on individual health status and on efforts that can improve our physical well-being. These developments are not new. Concern about and improvement in sanitation levels and the cleanliness of food preparation, for example, were much greater beginning in the early part of the twentieth century than they were several hundred years ago.

The influence of increased societal income is evident in the accident trends sketched in figure 19.2. These accident statistics give the accident rate per 100,000 population, and as can be seen, a general declining trend has taken place. Overall accident rates are down, other than a recent increase in home accidents that is largely attributable to the incidence of falls among our aging population. The areas targeted by government safety standards have also exhibited improvement, but the gains predate the regulatory period. Work-related accidents have exhibited a steady decline since job accident death statistics became available in 1930. The main exception to the pattern of a fairly steady historical decline is that of motor vehicle accidents. Although the motor vehicle death rate is lower now than it was eighty years ago, motor vehicle accident deaths exhibited a period of increase through 1940 and a flattening out through 1970, followed by a period of decline. This somewhat surprising pattern arises from the manner in which the risks are assessed. The accident statistics are on a population basis, which does not take into account the level of driving intensity. If one were to examine figures based on the risk per mile driven, the substantial safety gains that have been achieved on a continuing basis over time would be more apparent.
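The distinction between population-based and mileage-based rates can be illustrated with a small calculation. The figures below are illustrative round numbers, not data from the National Safety Council series plotted in figure 19.2; they simply show how the per-population rate can look flat while the per-mile rate falls once driving intensity rises.

```python
# Illustrative (hypothetical) figures: the population-based motor vehicle death
# rate can appear flat even while the per-mile rate falls, if miles driven per
# person rise over time.

years = {
    # year: (motor vehicle deaths, population, vehicle miles traveled)
    1940: (33_000, 132_000_000, 300_000_000_000),
    1970: (53_000, 203_000_000, 1_100_000_000_000),
}

for year, (deaths, pop, vmt) in years.items():
    per_100k_pop = deaths / pop * 100_000          # the kind of rate plotted in figure 19.2
    per_100m_miles = deaths / vmt * 100_000_000    # a rate that reflects driving intensity
    print(f"{year}: {per_100k_pop:.1f} deaths per 100,000 population, "
          f"{per_100m_miles:.1f} deaths per 100 million vehicle miles")
```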

Figure 19.2 Age-Adjusted Death Rates, United States, 1910–2014, by Class of Injury
Source: National Safety Council, Injury Facts, 2016 ed. (Itasca, IL: National Safety Council, 2016), p. 45.
Note: Death rate is defined as deaths per 100,000 population adjusted to the year 2000 standard population. Breaks in graph lines occur due to changes in classifications.

The existence of a downward accident trend provides a key element in terms of the historical perspective one should adopt when interpreting accident statistics. Regulatory agencies routinely announce this year's accident rates, which, if all has gone well, are lower than they were the previous year. The agency officials then tout these statistics as evidence of the efficacy of the agency's activities. Although this annual ritual may be effective political salesmanship, it ignores the fact that in all likelihood the decline in the accident rate would have continued as it has throughout the twentieth century and early twenty-first century. The main evidence of the effectiveness of an agency should be a shift in the accident trend to a risk level below what it otherwise would have been.

The pattern of decline in accident rates also highlights the importance of technological progress. As a society, we have benefited from a variety of technological improvements, and these have been a tremendous boost to our well-being. New technologies are often highly controversial and lead to demands that the risks be stringently regulated, but one should also recognize that new technologies have brought about many substantial benefits over the long run.5

Policy Evaluation

As the discussion of the various social regulation agencies will indicate, the stated objectives of these agencies are often quite narrow. The Clean Air Act, for example, forbids the EPA to consider cost when setting air pollution standards, and the Supreme Court has explicitly prohibited OSHA from basing its regulations on a benefit-cost analysis. Nevertheless, some balancing is required; otherwise, the costs associated with the regulations could easily shut down the entire economy. Moreover, as a practical matter, cost considerations and the need for balancing enter the process in a variety of ways. Agencies may phase in a particularly onerous regulation. Alternatively, the agency may choose not to adopt regulations that threaten the viability of an industry or employment in a local area.

Regulatory Standards

By far the most stringent requirements promoting a balancing of societal interests are those that have been imposed through the regulatory oversight mechanisms of the executive branch.6 The Ford administration instituted a requirement that the cost and inflationary impact of regulations be assessed. Under the Carter administration, these requirements were extended to require that agencies demonstrate the cost-effectiveness of their regulatory proposals. The Reagan administration went even further, requiring that agencies demonstrate that the benefits of the regulation exceed the costs imposed, except when doing so would violate the legislative mandate of the agency. Moreover, even when there is such a legislative mandate, the agency must still calculate benefits and costs and submit these results to the Office of Management and Budget for its review. The Bush, Clinton, George W. Bush, and Obama administrations have continued these policies.

The nature of the policy evaluation tools being discussed is not unique to the risk and environmental area. Procedures for assessing the benefits and costs of government policies have been in operation in the government for decades. Both the Army Corps of Engineers and the U.S. Department of the Interior have been using benefit-cost analysis to govern their design and evaluation of water resources projects for more than a half century.

Benefit-Cost Analysis

The importance of using economic analysis to assess risk regulations can be illustrated by examining some statistics on the efficacy of differing levels of stringency of arsenic regulation in the workplace. Table 19.5 provides pertinent statistics for three different levels of stringency of a standard—loose, medium, or tight. As the second column of the table indicates, lower levels of exposure to arsenic are associated with tighter standards. With increased tightness come added costs. The third column in table 19.5 gives one measure of the cost in terms of cost-effectiveness, in particular the cost per unit of benefit achieved. In this case the measure is the cost per statistical life saved by the policy.

This average cost per life saved ranges from $5.2 million to $23.5 million, reflecting a substantial but by no means enormous variation in the efficacy of the policy.

Table 19.5 Cost-per-Life Values for Arsenic Regulation

Stringency    Standard Level (mg/m3)    Average Cost per Life ($ million)    Marginal Cost per Life ($ million)
Loose         0.10                      5.2                                  5.2
Medium        0.05                      12.2                                 48.0
Tight         0.004                     23.5                                 284.0

Source: W. Kip Viscusi, Risk by Choice: Regulating Health and Safety in the Workplace (Cambridge, MA: Harvard University Press, 1983), p. 124. Copyright © 1983 by the President and Fellows of Harvard College. Note: Dollar amounts have been converted to 2015 dollars.
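The contrast between the last two columns of table 19.5 is easy to reproduce with a small calculation. The sketch below uses purely illustrative cumulative cost and lives-saved figures (these are not the underlying data behind the table) to show how a modest-looking average cost per life can mask a rapidly rising marginal cost.

```python
# A minimal sketch (illustrative numbers only) of how average and marginal cost
# per statistical life diverge as a standard is tightened, the pattern in table 19.5.

standards = ["loose", "medium", "tight"]
lives_saved = [10.0, 12.0, 12.5]      # cumulative expected lives saved (hypothetical)
total_cost = [52.0, 146.0, 288.0]     # cumulative compliance cost, $ millions (hypothetical)

prev_lives, prev_cost = 0.0, 0.0
for name, lives, cost in zip(standards, lives_saved, total_cost):
    average = cost / lives                                  # cost per life over all lives saved
    marginal = (cost - prev_cost) / (lives - prev_lives)    # cost per additional life saved
    print(f"{name:>6}: average ${average:.1f}M per life, marginal ${marginal:.1f}M per life")
    prev_lives, prev_cost = lives, cost
```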

In contrast, estimates of the marginal cost per life saved, such as those in the final column of table 19.5, indicate that successive tightening of the standard becomes very expensive in terms of the expected lives that are saved per dollar expended. In the case of the tight standard, the marginal cost imposed per expected life saved is $284.0 million, which (as we will see from the results in chapter 20) is out of line with what is considered a reasonable benefit value for a statistical life. In this case, the substantial acceleration in the marginal cost-per-life values was not as apparent when the agency focused on average cost per life.

In some cases, the fallacy of focusing on averages rather than marginal changes is even more dramatic. In a policy decision involving one of the authors, OSHA was considering differing levels of stringency for a standard that would control ethylene oxide exposures, which occur primarily among hospital workers involved in the cleaning of surgical equipment. OSHA's regulatory analysis included calculations of the average cost per case of cancer prevented, but the officials responsible for the calculation had not noted that the last incremental tightening of the standard produced no reduction in cancer cases whatsoever. The marginal cost per case of cancer prevented was actually infinite for the tightest level of the standard being considered. In contrast, the average cost per case of cancer remained fairly stable, thus disguising the inefficiency being created.

Role of Heterogeneity

Examination of the benefits and costs of regulatory actions is important not only with respect to the stringency of any particular regulation but also with respect to distinctions across different situations in terms of the stringency of the regulations. For example, pollution standards are generally set at a more stringent level for new enterprises than for existing ones, in part because the cost of compliance when introducing a new technology is believed to be less than the cost of adding pollution control devices to existing technology. There is the potential problem, however, of "new source bias." Although we want regulatory standards for new pollution sources to be more stringent so that the marginal costs are equalized across different kinds of facilities, we could err too much in this direction, as we discuss later in this chapter and in chapter 21. In addition, one might often want to make a distinction across industries in terms of the level of the standard because differences in industry technology imply different marginal cost curves. If the marginal benefit curve were the same in all industries, higher marginal cost curves would shift the optimal level of environmental quality to a lower level, implying a less stringent regulatory regime.

The statistics presented in table 19.6 illustrate how in practice one might approach such differentiation of a regulation.

The health impact of concern here is the effect of noise pollution in the workplace on workers' hearing. In particular, the cases being examined are those involving a 25-decibel hearing threshold loss after twenty years of exposure to the noise. The statistics for the seventeen industries represented indicate an order of magnitude variation in the cost per worker protected by the differing levels of the noise standard. Suppose that one had to pick between setting the standard level at 80 decibels and at 90 decibels. In addition, for concreteness, let us assume that we are able to determine that a case of hearing loss has a benefit value of $800,000. The average cost of a 90-decibel standard applied to all industries is $496,000 per worker protected, so that regulation is warranted on average. But the costs of protection vary substantially across industries. A 90-decibel standard is justified in this case for all industries listed above "fabricated metal products" in table 19.6. Moreover, a more stringent 80-decibel standard is warranted on average, as the average cost per worker protected is $705,000. But the costs exceed $1 million per worker protected for five industries shown in the table. If the benefit value per protected worker is $800,000, the tighter 80-decibel standard can be justified in situations where the cost of protecting workers tends to be less. (In this instance there are also other policy options, such as protective equipment, so that hearing loss can be prevented for all workers.) A simple calculation along these lines appears after the table.

Table 19.6 Industry Cost Variations for the OSHA Noise Standard

                                         Cost per Worker Protected ($ thousand)
Industry                                 90 Decibels    80 Decibels
Electrical equipment and supplies        79             163
Rubber and plastics products             158            284
Stone, clay, and glass products          221            400
Paper and allied products                259            325
Food and kindred products                313            746
Chemicals and allied products            334            550
Transportation equipment                 363            463
Tobacco manufactures                     434            834
Printing and publishing                  450            897
Electric, gas, and sanitary services     571            788
Furniture and fixtures                   626            630
Fabricated metal products                801            784
Petroleum and coal products              897            1,072
Primary metal industries                 909            1,551
Textile mill products                    947            1,647
Lumber and wood products                 951            1,264
Machinery, except electrical             972            1,022
Weighted average                         496            705

Source: W. Kip Viscusi, Risk by Choice: Regulating Health and Safety in the Workplace (Cambridge, MA: Harvard University Press, 1983), p. 127. Copyright © 1983 by the President and Fellows of Harvard College. Note: Dollar amounts have been converted to 2015 dollars.
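The following sketch applies the assumed $800,000 benefit per case of hearing loss avoided to the table 19.6 costs (in thousands of dollars per worker protected) to see how many industries pass a benefit-cost test at each standard level. The cost figures are taken from the table; the benefit value is the illustrative one used in the text.

```python
# A minimal benefit-cost screen for the noise standard, using the table 19.6
# figures and the assumed $800,000 benefit per worker protected.

BENEFIT = 800  # assumed benefit per worker protected, $ thousands

costs = {  # industry: (cost at 90 dB, cost at 80 dB), $ thousands per worker protected
    "Electrical equipment and supplies": (79, 163),
    "Rubber and plastics products": (158, 284),
    "Stone, clay, and glass products": (221, 400),
    "Paper and allied products": (259, 325),
    "Food and kindred products": (313, 746),
    "Chemicals and allied products": (334, 550),
    "Transportation equipment": (363, 463),
    "Tobacco manufactures": (434, 834),
    "Printing and publishing": (450, 897),
    "Electric, gas, and sanitary services": (571, 788),
    "Furniture and fixtures": (626, 630),
    "Fabricated metal products": (801, 784),
    "Petroleum and coal products": (897, 1072),
    "Primary metal industries": (909, 1551),
    "Textile mill products": (947, 1647),
    "Lumber and wood products": (951, 1264),
    "Machinery, except electrical": (972, 1022),
}

pass_90 = [ind for ind, (c90, _) in costs.items() if c90 <= BENEFIT]
pass_80 = [ind for ind, (_, c80) in costs.items() if c80 <= BENEFIT]
print(f"{len(pass_90)} of {len(costs)} industries pass a benefit-cost test at 90 dB")
print(f"{len(pass_80)} of {len(costs)} industries pass a benefit-cost test at 80 dB")
```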

In general, policymakers should attempt to exploit such cost variations and take them into account when designing regulatory policies. Differentiated standards that recognize these cost differences will produce greater net benefits to society than those that do not. Uniform standards miss opportunities to promote safety and environmental quality in situations where it is cheap to do so and impose substantial burdens in situations where the cost of providing environmental quality and safety is higher. Society should take advantage of differences in the ability to produce environmental quality and safety, just as we take advantage of other productive capabilities with respect to other goods and services that an economy provides.

Role of Political Factors

Although benefit-cost analysis provides a convenient normative reference point for the determination of social regulation policies, in practice other political forces may be more instrumental. In particular, the same kinds of economic interests that influence the setting of economic regulations in a manner that does not maximize social efficiency are also at work in determining the structure of risk and environmental regulations. The Stigler/Peltzman/Becker models have applicability to social regulation as well.

Economic models of environmental policies

The economic and political factors that led to the current regulatory structure have received very intense scrutiny by economists. Many of these explorations have been historical in nature, focusing on the factors that accounted for fundamental aspects of agency operations, such as key provisions of the regulatory statutes. A principal theme of this research is that a driving force behind the congressional voting over key environmental provisions that govern regulatory policy has been the economic stakes involved. A chief source of these differences is regional. Representatives of districts from the declining areas of the Northeast have in particular used regulatory policies to limit the degree to which the emerging economic regions of the United States could compete with them. Consider, for example, the results presented in the analysis of congressional voting by Crandall, which are summarized in table 19.7.7 What Crandall found is that the current levels of air pollution and water pollution have little effect on the way in which a congressional delegation voted. The influences that were of greater consequence were more closely related to the economic interests involved.

Table 19.7 Determinants of Patterns of Congressional Voting on Environmental Issues

Explanatory Variable    Hypothesized Sign    Estimated Effect
Air pollution           ?                    No significant effect
Water quality           ?                    No significant effect
Party                   ?                    Significant negative
Natural lands           ?                    Significant negative
Income                  +                    Significant positive (1 out of 9)
Income growth           −                    Significant negative
Frost Belt              +                    Significant positive (3 out of 9)

Source: Robert Crandall, Controlling Industrial Pollution: The Economics and Politics of Clean Air (Washington, DC: Brookings Institution Press, 1983). Note: The +, −, and ? symbols indicate positive, negative, and uncertain effects.

The delegations with a larger share of Republicans in their districts were more likely to vote against environmental and energy policies, where the variable of interest in the analysis was the percentage of the delegation voting "right" on environmental and energy issues. This result is consistent with the stronger orientation of the Republican party toward business interests, which are more closely linked to the economic impacts of such regulation. Members of Congress representing districts with a large percentage of the national parks also are likely to vote no, perhaps because further expansion of these parks will take more land out of the economic base. The income level is positively linked with votes for environmental and energy issues, reflecting the fact that environmental quality is a normal economic good for which one's preferences increase with one's income status.

The two key variables are the final ones in table 19.7. States with substantial income growth are more likely to vote against environmental controls because these controls will restrict the expansion of industry in these states. In contrast, the states in the Frost Belt of the North Central, New England, and Middle Atlantic regions are more likely to vote for strict environmental controls because these controls will hit hardest on the newly emerging sources in other regions of the country. In particular, because the main structure of EPA policy imposes more stringent requirements on new sources of pollution rather than on existing sources, more stringent environmental regulation generally implies that there will be a differential incidence of costs on newly emerging industries and regions with substantial economic expansion. As a consequence, important distributional issues are at stake when voting on the stringency of such environmental policies.

The result was that the air pollution regulations that were promulgated imposed a variety of requirements with substantial redistributional impacts. For example, the legislation required that firms install scrubbers to decrease their sulfur dioxide pollution. An alternative means of achieving the same objective would have been for firms and power plants to rely more on the western region's low-sulfur coal. However, this option would have hurt the coal-producing regions of the Midwest and Northeast, and, as a result, the representatives from these regions opposed giving firms the more performance-oriented standard. By requiring that firms meet a technology-based standard that did not permit the substitution of types of coal to meet the pollution target, they could protect the economic interests of the eastern coal producers.

Although other analyses indicate that a variety of social regulations are subject to the influence of such political factors, there remains a question in the literature as to the extent to which economic self-interest is the main determinant of regulatory policy. In an effort to explore these determinants more fully, Kalt and Zupan have developed a model in which they analyze two different sets of influences—economic self-interest (as reflected in the standard capture theory models of regulation) and ideology (which is more closely associated with the more normative approaches to regulation).8 The focus of their analysis was on the voting for coal strip-mining regulations. The requirement to restore strip-mined land to its pre-mining state imposed substantial costs. The annual costs associated with this policy were believed to be $3.3 billion in 2015 dollars, where surface miners would bear roughly two-thirds of the cost and consumers would bear roughly one-third. The environmental gains were believed to be $3.1 billion per year in 2015 dollars. Overall, this policy would lead to a transfer from surface mine producers and coal consumers to underground producers and environmental consumers.

Voting patterns

Under the capture theory, factors such as altruism, public interest objectives, and civic duty are believed to be insignificant determinants of voting patterns. In contrast, the ideological models of voting use the social objectives of the political actors as the objective function being maximized in the voting process. A potential role for ideological voting enters because the market for controlling legislators meets infrequently. We vote for representatives every two years and for senators every six years. Moreover, the voters have very weak incentives to become informed, and it is often possible for representatives to shirk their responsibility to represent the voters' interests in such a situation.

Table 19.8 summarizes the factors considered in the Kalt and Zupan analysis. These results suggest that both the capture and ideology models illuminate aspects of the determinants of the anti-strip-mining vote.

In particular, members of Congress are more likely to vote in favor of anti-strip-mining regulations the higher the level of coal reserves in their state, the greater the environmental group membership in their state, the greater the unreclaimed value of the land that is stripped and not restored, and the greater the value of underground coal reserves. They are more likely to vote against anti-strip-mining regulations if there is a high regulation-induced mine cost for the state, a high amount of surface-coal reserves, or a high concentration of consumer groups in the state. Because it is the surface-coal industry that will lose from the regulation and the underground coal industry that will benefit, these influences follow the pattern one would expect.

Table 19.8 Factors Affecting Voting Patterns for Strip-Mining Regulation

                                                     Determinants of the Percentage of Anti-Strip-Mining Vote
Explanatory Variable                                 Capture Model           Capture and Ideology
Pro-environmental vote                               Not applicable          Significant positive
Regulation-induced mine cost                         Significant negative    Significant negative
Surface reserves                                     Significant negative    Significant negative
Underground coal reserves                            Significant positive    Significant positive
Split rights for strippable land                     Negative                Positive
Environmental group membership in state              Significant positive    Positive
Unreclaimed value of land (stripped not restored)    Significant positive    Significant positive
Coal consumption in state                            Negative                Significant negative
Surface coal mine groups                             Negative                Positive
Underground coal mine groups                         Positive                Positive
Environmental groups                                 Significant positive    Negative
Consumer groups in state                             Significant negative    Negative

The final column of estimates adds an ideology variable, which is the representative's pro-environmental voting record. This variable has a positive effect on voting in favor of anti-strip-mining legislation, and the magnitude of this effect is substantial. Kalt and Zupan interpret this result as indicating that ideology has an independent influence above and beyond capture. Indeed, the addition of this variable increases the percentage of the variation explained by the equation for the determinants of the anti-strip-mining vote by 29 percent. Interpreting this measure as reflecting ideology rather than capture raises some difficulties, however. It may be that the pro-environmental voting record serves to reflect the influence of omitted aspects of the capture model. In particular, the fact that a legislator has voted in favor of environmental measures in the past may be a consequence of the influence of a variety of economic self-interest variables pertaining to the stakes that her voters have in such legislation. Consequently, this variable may serve as a proxy for a host of concerns that were not explicitly included in the equation. Thus, the pro-environmental voting record may not reflect concerns restricted to ideology, but may reflect the past influence of capture variables that will also be at work with respect to the anti-strip-mining vote.

Interpretation of these models is also difficult because of the role of logrolling. If members of Congress exchange votes, agreeing to vote for measures they do not support in return for support of legislation in which they have a substantial interest, then the congressional voting patterns may be a misleading index of the impact of the political forces at work. To the extent that there is controversy in the economics literature over these issues, it is not with respect to whether capture models are of consequence, but the extent to which these influences are at work. The extreme versions of these models claim that the capture theory accounts for the entire pattern of voting and regulation. Even if such economic self-interests are only major contributors to the outcomes and not the sole contributors, they should nevertheless be taken into account as important factors driving the determination of regulatory policy.

Political factors are influential in the implementation of policies, not just in their statutory structure. The operation of the EPA hazardous waste cleanup effort known as Superfund reflects the pivotal role of political influences rather than objective assessments of the benefit-cost merits of cleanups.9 Consider the following effects, based on statistical analyses that control for a broad series of factors influencing the risks and costs of cleanup. In areas where the state's senators have a high League of Conservation Voters score for environmental policy, the target risk level after cleanup is lower, so that cleanup efforts are more diligent. When choosing the level of cleanup, there is an apparent effort by the EPA to be responsive to senators who have an established pro-environmental voting record. Similarly, in areas where local voter turnout is high, the EPA cleans up the site to a lower risk level. High county voting percentage also leads to greater EPA expenditures per expected cancer case avoided through the cleanup effort. Somewhat surprisingly, these political variables significantly affect cleanup decisions, whereas factors that ideally should be influential, such as whether the site poses a risk to current populations, do not have a significant effect on agency actions.

Summary and Overview of Part III

Determining the optimal environmental policy typically involves only a straightforward application of benefit-cost analysis. Perhaps the main role for economists is in defining what these benefits are, particularly since environmental benefits typically are not explicitly traded on the market. As a practical matter, environmental policies tend to be governed by a host of political factors that bear little relationship to the normative guidelines that might be prescribed by economists. Economic analysis nevertheless has a role to play with respect to these forces as well, as it illuminates how the different payoffs to the political actors have motivated the environmental policies that have emerged over the past two decades.

In subsequent chapters, we will explore a representative mix of issues arising in the environmental and safety area. This examination is not intended to be exhaustive, but we will address many problems that arise in this class of regulatory policies.

Chapter 20 begins with a discussion of the task of setting a price on the regulatory impacts. For the most part, social regulation efforts deal with situations in which there are no existing markets available to set a price. We can draw marginal benefit curves and total benefit curves as convenient abstractions, but ultimately we need to know how much benefit society does place on environmental quality or reduced birth defects before we can make any policy judgments that are more than hypothetical. Over the past two decades, economists have devoted considerable attention to devising methodologies by which we can establish these prices. Indeed, the principal focus has perhaps been on the area that one might have thought would be least amenable to economic analysis, which is the economic value of reducing risk to human life. The implications of chapter 20 are pertinent to all subsequent discussions, inasmuch as they provide the best reference point for assessing the appropriate stringency of the regulatory policy.

Chapter 21 focuses on specific cases of environmental regulation. The regulatory strategy in this case has been largely to set standards and to issue permits that allow firms to engage in specified amounts of pollution. The major difficulty is that compliance with these standards may be quite costly, and there is a need to strike a balance between the economic costs to the firm and the benefits that will be achieved for society. Environmental protection efforts are utilizing increasingly imaginative means to strike such a balance, and we explore such approaches in this chapter.

In chapter 22 we turn to a series of product safety regulations. These regulations also deal with risks that are, at least in part, the result of market transactions. The diverse forms of product regulation include auto safety regulation, food and drug regulations, general product safety regulations by the Consumer Product Safety Commission, and the impact of the liability system on product safety. This diversity has created a need for coordination among these institutions. Moreover, in the product safety area in particular, "moral hazard" has played a central role in economic analyses. Mandating seat belts may have seemed like an attractive regulatory policy, but if nobody uses the seat belts, we will not experience any gain in safety. Other more complex behavioral responses are also possible, and we will examine these as well.
Perhaps the main message of the product safety analysis is that the outcomes of regulatory policy are not dictated by regulatory technology but instead involve a complex interaction of technological and behavioral responses.

Job safety regulations, the focus of chapter 23, also involve markets that are in existence. In particular, workers incur job risks as part of the employment relationship, and in many cases these risks are understood by the worker who is facing the risk. Moreover, in return for bearing the risk, the worker will receive additional wages, as well as workers' compensation benefits after an injury. The presence of a market makes regulation of job safety somewhat different in character from environmental regulations. Moreover, the main regulatory issues in the job safety area seem to be quite different as well. Because of the presence of a market before the advent of regulation, the initial wave of job safety regulation faced considerable resistance. In addition, perhaps the weakest link in the regulatory effort in the job safety area is the lax enforcement effort. In this area, implementation aspects of regulatory policy loom particularly large.

The concluding chapter 24 explores behavioral economics factors that intersect with all chapters in this part of the book. Insights from behavioral economics are most pertinent when diagnosing the presence and source of market failures relating to individual rationality. Decisions involving jobs, products, and the environment all are subject to such limitations. Economists are increasingly exploring potential policy improvements that can be generated by incorporating behavioral insights into policy design.

Questions and Problems

1. Contrast the kinds of market failure that lead to regulation of automobile safety as opposed to regulation of automobile emissions. In which case does a market exist that, if it were operating perfectly, would promote an efficient outcome? What kinds of impediments might lead to market failure?

2. Officials of the EPA frequently argue that we should not discount the benefits of environmental regulation. Their argument is that it is acceptable to discount financial impacts, but it is not acceptable to discount health effects and environmental outcomes that occur in the future. What counterarguments can you muster?

3. What are the alternative mechanisms by which the government has intervened in the health, safety, and environmental area? How does the choice of mechanisms across agencies such as OSHA, EPA, and the FDA vary with the regulatory context? Do you see any rationale, for example, for the differences in regulatory approach?

4. If future generations were able to contract with us, they would presumably bargain for a higher level of environmental quality than we would choose to leave them without such compensation. To what extent should society recognize these future interests? Should our views be affected at all by the fact that future generations will be more affluent and will probably have a higher standard of living? How will this difference in wealth affect their willingness to pay for environmental quality, as opposed to that of current generations? What will be the equity effects in terms of income distribution across generations?

5. Should society react differently to voluntary and involuntary risks? Which risk would you regulate more stringently?

Recommended Reading

Health, safety, and environmental regulation often has been a prominent concern of the Council of Economic Advisers, who have frequently provided comprehensive and excellent perspectives on these issues. Almost every recent annual report devotes a chapter to the regulatory issues considered in part III of this book. During the Obama administration, the regulatory focus was on policies relating to energy and climate change. The Economic Report of the President, February 2015 (Washington, DC: U.S. Government Printing Office, 2015) includes chapter 6 on the environmental and economic impacts of the Obama energy plan. In The Economic Report of the President, March 2013 (Washington, DC: U.S. Government Printing Office, 2013), chapter 6 outlines different approaches to addressing climate change, including market-based solutions, regulations, and the transition to clean energy sources. The Economic Report of the President, February 2012 (Washington, DC: U.S. Government Printing Office, 2012), chapter 8; The Economic Report of the President, February 2011 (Washington, DC: U.S. Government Printing Office, 2011), chapter 6; and The Economic Report of the President, February 2010 (Washington, DC: U.S. Government Printing Office, 2010), chapter 9, likewise explore transformations of the energy sector to address climate change.

Previous administrations had a broader focus in their exploration of regulatory economics issues. The Economic Report of the President, February 2004 (Washington, DC: U.S. Government Printing Office, 2004) includes chapter 9 on environmental regulation and chapter 11 on the role of the tort system, including the regulation-litigation interaction. The previous year's report included a chapter on regulation, much of which focused on regulatory reform of health and safety regulation: see The Economic Report of the President, February 2003 (Washington, DC: U.S. Government Printing Office, 2003), chapter 4. Similarly, there is a discussion of environmental regulation in The Economic Report of the President, February 2002 (Washington, DC: U.S. Government Printing Office, 2002), chapter 6.

For a comprehensive analysis of the use of value of a statistical life figures in regulatory policy practices throughout the world, see W. Kip Viscusi, Pricing Lives: Guideposts for a Safer Society (Princeton, NJ: Princeton University Press, 2018).

Notes

1. For a detailed perspective on the various causes of accidental death, see the National Safety Council, Injury Facts, 2016 ed. (Itasca, IL: National Safety Council, 2016).

2. The overall death toll of the 9/11 attacks, including those on the ground, was about 3,000.

3. National Safety Council, Injury Facts, 2016 ed. (Itasca, IL: National Safety Council, 2016), p. 156.

4. For further discussion of the relationship between wealth and risk, see W. Kip Viscusi, Fatal Tradeoffs: Public and Private Responsibilities for Risk (New York: Oxford University Press, 1992); and W. Kip Viscusi, Risk by Choice: Regulating Health and Safety in the Workplace (Cambridge, MA: Harvard University Press, 1983).

5. For an early discussion of the beneficial role of new technologies and their effect on longevity, see Aaron Wildavsky, Searching for Safety (New Brunswick, NJ: Transaction Books, 1988).

6. The U.S. Office of Management and Budget periodically issues reports that describe in detail the character of its oversight efforts. The most recent of these documents is U.S. Office of Management and Budget, 2015 Report to Congress on the Benefits and Costs of Federal Regulations and Unfunded Mandates on State, Local, and Tribal Entities (Washington, DC: U.S. Government Printing Office, 2016). A useful historical document is U.S. Office of Management and Budget, Regulatory Program of the United States Government, April 1, 1990–March 31, 1991 (Washington, DC: U.S. Government Printing Office, 1990).

7. This discussion is based on Robert Crandall, Controlling Industrial Pollution: The Economics and Politics of Clean Air (Washington, DC: Brookings Institution Press, 1983). The important analysis by Pashigian appears in B. Peter Pashigian, "Environmental Regulation: Whose Self-Interests Are Being Protected?" Economic Inquiry 23 (October 1985): 551–584.

8. See Joseph P. Kalt and Mark A. Zupan, "Capture and Ideology in the Economic Theory of Politics," American Economic Review 74 (June 1984): 279–300.

9. See James T. Hamilton and W. Kip Viscusi, Calculating Risks: The Spatial and Political Dimensions of Hazardous Waste Policy (Cambridge, MA: MIT Press, 1999).

20 Valuing Life and Other Nonmonetary Benefits

Establishing the appropriate degree of social regulation requires that we set a price for what the regulation produces. In the case of environmental regulation, we need to know the value to society of additional pollution reduction before we can set the stringency of the standard. In the case of health and safety regulations, we need to know the value of preventing additional risks to life and health. Although one can sidestep these issues in part by relying on cost-effectiveness analysis, in which we calculate the cost per unit of social benefit achieved (such as the cost per expected life saved), the most that can be done with cost-effectiveness analysis is to weed out the truly bad policies. Ultimately, some judgment must be made with respect to the amount of resources society is willing to commit to a particular area of social regulation.

In practice, this tradeoff may be implicit, as government officials may make subjective judgments with respect to whether a policy is too onerous. Implicit overall judgments come close to setting an implicit value on life, health, or pollution, but often these judgments may result in serious imbalances across policy areas. One reason for these imbalances is that taking tradeoffs into consideration in an ad hoc manner may be a highly imperfect process. The Occupational Safety and Health Administration (OSHA), for example, attempts to avoid regulatory actions that will lead to the shutdown of a particular firm. The Environmental Protection Agency (EPA) had similar concerns as it made an effort to phase in pollution requirements for the steel industry. When EPA policies would have serious repercussions for local employment, it has sought the advice of the residents in the affected area. Often the compromise that is reached is that the requirements will be phased in over a long period, an approach that will reduce the costs of transition and can better be accommodated, given the normal process of replacing capital equipment over time. This practice of phasing in requirements has also been followed for automobile regulation, where pollution control requirements and major safety innovations, such as air bag requirements, have been imposed with fairly long lead times so that the industry can adjust to the standards.

The focus of this chapter is on how society can establish a more formal, systematic, and uniform basis for establishing tradeoffs between the resources expended and the benefits achieved through social regulation efforts. For most economic commodities, this would be a straightforward process. The U.S. Bureau of Labor Statistics gathers price information on hundreds of commodities, so that finding out the price of a market-traded good is a fairly trivial undertaking. In contrast, social regulation efforts for the most part deal with commodities that are not traded explicitly in markets. Indeed, from a policy standpoint, it is in large part because of the lack of explicit trade that we have instituted government regulation in these areas. Victims of pollution do not sell the right to pollute to the firms that impose these pollution costs. Future generations that will suffer the ill effects of genetic damage likewise do not contract with current generations, the researchers conducting genetic engineering experiments, or the firms that expose pregnant women to high levels of radiation. Nevertheless, to the extent that it is possible, we would like to establish a market reference point for how much of a resource commitment we should make to prevent these outcomes, so that we can get a better sense of the degree to which various forms of social regulation should be pursued.

We will use the valuation of risks to life as the case study for considering how the government can value the benefits associated with regulations affecting health and the environment. Two approaches have been used. The first is to estimate the implicit prices for these social risk commodities that may be traded implicitly in markets. Most important is that workers receive additional premiums for the risks they face on the job, and the wage tradeoffs they receive can be used to establish an appropriate tradeoff rate. The second general approach is to ask people in interviews how much they value a particular health outcome. This methodology may have greater problems with respect to reliability, but it has the advantage that one can obtain tradeoff information regarding a wide range of policy outcomes.

Policy Evaluation Principles

Suppose that this evening you will be crossing the street and that you have one chance in 10,000 of being struck by a bus and killed instantaneously. We will offer you the opportunity to buy out of this risk for a cash payment now. For purposes of this calculation, you can assume that your credit is good and that, if necessary, you can draw on either your parents' or your future resources. To put the risk in perspective, a probability of death of one chance in 10,000 is comparable to the average fatality risk faced each year by the average worker in the construction industry. How much would you be willing to pay to eliminate this risk?

This kind of thought process is exactly what the government should go through when thinking about how far to push various social regulation efforts. In particular, the main matter of concern is society's total willingness to pay for eliminating small probabilities of death or adverse health effects.1 Thus we are not interested in the dollar value of your future earnings that will be lost, although this, of course, will be relevant to how you think about the calculation. In addition, we are not interested in how much you are willing to pay to avoid certain death. The probability involved with certain death dwarfs that associated with small risk events to such an extent that the qualitative aspects of the risk event are quite different. It is noteworthy, for example, that society views suicide with disfavor, but the taking of small risks, such as the decision to drive a subcompact car rather than a larger car that offers greater safety, is generally viewed as acceptable.

Let us now take your response to the willingness-to-pay question that we have asked and convert it into a value per expected death. What we mean by this calculation is the value that you would be willing to pay to prevent a statistical death that occurs with low probability. To emphasize this dependence on probabilities, we will use the term value of a statistical life. This amount is straightforward to calculate; simply divide your willingness-to-pay response by the level of the risk that you are reducing:

\[
\text{Value of a statistical life} = \frac{\text{willingness to pay for the risk reduction}}{\text{size of the risk reduction}}. \tag{20.1}
\]

This equation gives the amount you would be willing to pay per unit of mortality risk. For the specific values given in the example we considered, the value of a statistical life can be calculated as

\[
\text{Value of a statistical life} = \frac{\text{willingness-to-pay response}}{1/10{,}000}, \tag{20.2}
\]

or

\[
\text{Value of a statistical life} = 10{,}000 \times \text{willingness-to-pay response}. \tag{20.3}
\]

Consequently, if your willingness-to-pay is $900, your value of a statistical life equals $900/(1/10,000), or $9 million. An alternative way of thinking about the value of risks to life is the following. Consider a group of 10,000 people in a sports arena, one of whom will be killed at random. Assume that such a death risk has no desirable features. There will be one expected death. If each person were willing to contribute $900 to eliminate the risk, then the amount that could be raised to prevent one expected death would be 10,000 multiplied by $900, or $9 million. This calculation is identical to that in equation 20.3. The value of a statistical life implicit in your response is consequently 10,000 times the amount of your response.

Table 20.1 gives different estimates of the value of a statistical life, depending on the level of your answer. If there is no finite amount of money that you would be willing to pay to prevent this risk, and if you were willing to devote all of your present and future resources to eliminate it (presumably retaining enough for minimal subsistence), then it would be safe to say that you place an infinite value on risks to your life, or at least a value that is very, very large. Any finite response below this amount implies that you would be willing to accept a finite value of statistical life or make a risk-dollar tradeoff when confronted with a life-extending decision. When viewed in this manner, making a risk-dollar tradeoff does not appear to be particularly controversial. Indeed, one might appear to be somewhat irrational if one were willing to expend all of one's future resources to prevent small risks of death, particularly given the fact that we make such tradeoffs daily, as some of the risk statistics in chapter 19 indicated.

Table 20.1 Relation of Survey Responses to the Value of a Statistical Life

Amount Will Pay to Eliminate 1/10,000 Risk ($)    Value of a Statistical Life ($)
Infinite                                          Infinite
Above 1,000                                       At least 10,000,000
500–1,000                                         5,000,000–10,000,000
200–500                                           2,000,000–5,000,000
50–200                                            500,000–2,000,000
0                                                 0
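A minimal sketch of the arithmetic behind equations 20.1 through 20.3 and table 20.1, using the chapter's one-in-10,000 example:

```python
# Dividing a willingness-to-pay response by the size of the risk reduction
# gives the implied value of a statistical life (equation 20.1).

def value_of_statistical_life(willingness_to_pay: float, risk_reduction: float) -> float:
    """Implied value of a statistical life for a given WTP and risk reduction."""
    return willingness_to_pay / risk_reduction

risk = 1 / 10_000  # the one-in-10,000 risk used in the chapter's example

for wtp in [1_000, 900, 500, 200, 50]:
    vsl = value_of_statistical_life(wtp, risk)
    print(f"WTP ${wtp:>5,} to eliminate a 1/10,000 risk  ->  VSL ${vsl:,.0f}")

# The arena framing gives the same number: 10,000 people each paying $900
# raises $9 million to prevent one expected death.
print(10_000 * 900)
```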

For the finite value of life responses, a willingness to pay $1,000 to prevent a risk of death of one chance in 10,000 implies a value of a statistical life of $10 million. A response of $500 to prevent the small risk implies a value of a statistical life of $5 million. Similarly, at the extreme end, a zero response implies a value of a statistical life of zero. Table 20.1 thus summarizes the relationship between various willingness-to-pay amounts and the value of a statistical life.

When presented with this survey task, most students tend to give fairly low responses, at the lower end of the range of the table. When we examine the implicit values of a statistical life of workers based on the wages they receive for the risks they face in their jobs, we will show that their values of a statistical life are much greater than those often given by students responding to the one in 10,000 death-risk question. The median estimate of the value of a statistical life for a worker in the United States is about $9.6 million.2 Three explanations come to mind for the low responses students often give to these risk reduction questions. First, dealing with low-probability events such as this one is a very difficult undertaking. Second, even if the risk probability is understood, the threat may not be credible. It is, after all, only a hypothetical situation. Third, there is a tendency to think in terms of one's immediate resources rather than one's lifetime resources when answering this question. The current budget of a typical college student is substantially below that of an average blue-collar worker, but the student's ultimate lifetime earnings will be greater.

Willingness-to-Pay versus Other Approaches

The procedure used to value risks to life, health, and environmental outcomes more generally is exactly the same as that used in other contexts in which we assess the benefits of a government program. In particular, the benefit value is simply society's willingness to pay for the impact of the program.3 This outcome may be in the form of a lottery, as in the case where the probability of an adverse event is reduced through a beneficial risk-regulation effort. Although reliance on the willingness-to-pay approach may seem to gain us little in terms of enabling us to assess benefit values in practice, it does offer a considerable advantage in terms of preventing one from adopting a benefit assessment procedure that is not economically sound.

The economic pitfalls that may be encountered are apparent from considering some of the alternative approaches to valuing life that have been suggested. For the most part, these approaches rely on various human capital measures related to one's lifetime earnings.4 However, the kind of approach that is useful in assessing the value of training or education may be wholly inappropriate for assessing the implications of life-extending efforts. The first human capital measure one can consider is the present value of one's lifetime earnings. This might be taken as a good gross measure of one's value to the gross national product, and it is an easy number to calculate. The fallacy of using this measure is apparent in part from the fact that the elderly and people who choose to work outside the labor force would fare particularly badly under such a procedure. In addition, although one's income level is clearly going to influence one's willingness to pay for risk reduction, it need not constrain it in a one-to-one manner. Thus, when dealing with a small risk of death, such as one chance in 10,000, one is not necessarily restricted to being willing to spend only 1/10,000 of one's income to purchase a risk reduction. One could easily spend 5/10,000 or more for small incremental reductions in risk. Difficulties arising from budgetary constraints are encountered only when we are dealing with dramatic risk increments. Moreover, if one were faced with a substantial risk of death, one might choose to undertake unusual efforts, such as working overtime or moonlighting on a second job, if one's survival depended on it.

A variant on the present-value-of-earnings approach is to take the present value of lifetime earnings net of the consumption of the deceased. This is a common measure used in court cases for compensating survivors in wrongful death cases, inasmuch as it is a reflection of the net economic loss to the survivors after the death of a family member.
This type of calculation abstracts from the consumption expenditures of the individual who is deceased, and it is certainly the individual whose health is most affected who should figure prominently in any calculation of the benefits of pursuing any particular social regulation. A final approach that has appeared in the literature is to look at the taxes that people might pay. Focusing on tax rates captures the net financial contribution one makes to society, but it has the drawback of neglecting the income contribution to oneself or one’s family as well as the value that the person places on her own life.
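To see why earnings-based measures typically fall well short of willingness-to-pay values, consider a rough sketch. The earnings profile, working horizon, and discount rate below are hypothetical; the $9.6 million figure is the labor market estimate cited above.

```python
# A minimal sketch (hypothetical earnings profile and discount rate) contrasting
# the human capital measure (present value of lifetime earnings) with a
# willingness-to-pay based value of a statistical life.

annual_earnings = 50_000   # hypothetical constant annual earnings
years_remaining = 40       # hypothetical remaining working years
discount_rate = 0.03       # hypothetical real discount rate

pv_earnings = sum(annual_earnings / (1 + discount_rate) ** t
                  for t in range(1, years_remaining + 1))

wtp_based_vsl = 9_600_000  # the median labor market estimate cited in the chapter

print(f"Present value of lifetime earnings:   ${pv_earnings:,.0f}")
print(f"WTP-based value of a statistical life: ${wtp_based_vsl:,.0f}")
# With these inputs the earnings measure is roughly an order of magnitude
# smaller, in line with the gap discussed below.
```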

Notwithstanding the inappropriateness of the various earnings approaches, this approach has not only appeared in the literature but is also the standard measure used by the courts, and it was the initial approach adopted by government agencies. Much of the appeal of the method is that it lends itself to calculation. A major policy event that led to a shift in the approach taken was the OSHA hazard communication regulation that was the subject of intense debate in the early 1980s.5 OSHA prepared its regulatory analysis, assessing the value of the risk reduction achieved by valuing these impacts according to the lost earnings of the individuals whose deaths or nonfatal cases of cancer could be prevented. OSHA justified this approach on the basis that valuing life was much too sensitive an issue, so that it would follow the alternative approach of simply assessing the costs of death. Because of OSHA's overoptimistic risk assessment assumptions, the U.S. Office of Management and Budget (OMB) rejected the regulatory proposal. OSHA appealed this decision to then-Vice President George Bush, to whom authority over regulatory matters had been delegated. Vice President Bush viewed the controversy as hinging on technical economic issues, and W. Kip Viscusi was asked to resolve the controversy. The failure of the OSHA analysis to show that the benefits exceeded the costs could be traced to valuing lives based on the cost of death rather than on the basis of society's willingness to pay to reduce fatality risks. After this change was made, the estimated benefits of the proposed regulation far exceeded the costs. The day after this economic analysis reached the Reagan White House, OSHA was permitted to issue the regulation.

Because willingness-to-pay amounts generally exceed the present value of lost earnings by roughly an order of magnitude, using an appropriate economic methodology greatly enhances the attractiveness of social regulation efforts and makes these regulations appear more attractive than they would otherwise be. Indeed, the substantial size of the benefit estimates that can be achieved using the willingness-to-pay measure, rather than its economic soundness, may be the principal reason for the widespread adoption of this approach throughout the federal government. There also appears to be less reluctance to address the life-saving issues directly. Three decades ago, raising the issue of valuing risks to life appeared to be intrinsically immoral. However, once it is understood that what is at issue is the amount of resources one is willing to commit to small reductions of risk, rather than to prevent a certain death, then the approach becomes less controversial. Moreover, because the measure is simply the total willingness of society to pay for the risk reductions, it does not use economic pricing in any crass or illegitimate way, as would be the case with the various human capital measures noted earlier. Society has also become aware of the wide range of risks that we face, including those imposed by our diets and a variety of personal activities. The idea that it is not feasible to achieve an absolutely risk-free existence and that some tradeoffs must ultimately be made is becoming more widely understood.

Variations in the Value of a Statistical Life

One dividend of going through the exercise summarized in table 20.1 is that individuals will give different answers to these willingness-to-pay questions. There is no right answer in terms of valuing risks to life.
Thus we are not undertaking an elusive search for a natural constant such as e or π. Rather, the effort is simply one to establish an individual's risk-dollar tradeoff. Individuals can differ in terms of this tradeoff just as they can with respect to other kinds of tradeoffs they might make concerning various types of consumption commodities that they might purchase. It makes no more sense to claim that individuals should have the same value of a statistical life than it does to insist that everyone should like eating raw oysters. A major source of differences in preferences is likely to be individuals' lifetime wealth. People who are more affluent are likely to require a higher price to bear any particular risk. This relationship is exhibited in the substantial positive income elasticity in the demand for medical insurance, as well as in a positive relationship between individual income and the wage compensation needed to accept a hazardous job. The amount workers are willing to pay to avoid a given injury risk increases roughly proportionally with worker income, which is consistent with this pattern of influences.6 A recent review of sixty studies of the value of a statistical life throughout the world found that there is an income elasticity of the value of a statistical life of about 0.5 to 0.6.7 Some U.S. government agencies now assume an income elasticity of 1.0 for the value of a statistical life.

Overall, there is likely to be substantial heterogeneity in individual preferences, and this heterogeneity will be exhibited in the choices that people make. Empirical evidence suggests that smokers are more willing to bear a variety of risks other than smoking in return for less compensation than would be required for a nonsmoker.8 Individuals who wear seat belts are particularly reluctant to incur job risks, a finding that one would also expect. Indeed, the estimates by Joni Hersch and W. Kip Viscusi indicate that, while the average worker values a typical lost-workday injury at $99,000, smokers value such an injury at $54,000, and workers who use seat belts value such an injury at $163,000. Thus, individual risk-money tradeoffs show considerable heterogeneity. If one examined a distribution of job-related risks, such as that provided in table 20.2, one would expect that the individuals who are in the relatively safe occupations listed in the table would generally be more averse to risk than those in the riskiest pursuits. In contrast, people who tend to gravitate to the high-risk jobs, who choose to skydive, or who smoke cigarettes are more likely to place a lower value on incurring such risks than those who avoid such pursuits.

Table 20.2
Average Worker Fatality Rates by Occupation, 2014

Occupation | Fatal Injury Rate (per 100,000 workers)
Management, professional, and related | 1.3
Service | 2.6
Sales and office | 1.1
Natural resources, construction, and maintenance | 11.5
Production, transportation, and material moving | 9.0
Average for all civilian workers | 3.4

Source: U.S. Bureau of Labor Statistics, Census of Fatal Occupational Injuries, “Fatal Occupational Injuries, Total Hours Worked, and Rates of Fatal Occupational Injuries by Selected Worker Characteristics, Occupations, and Industries, Civilian Workers, 2014” (http://www.bls.gov/iif/oshwc/cfoi/cfoi_rates_2014hb.pdf).

Although such substantial differences exist, the extent to which policy should make use of such distinctions is not entirely clear. Should we protect individuals with less stringent government regulations if they have revealed through other activities that they are willing to bear a variety of risks to their well-being? Viewed somewhat differently, should we override the decisions of people who find a particular wage-risk tradeoff in the labor market attractive or who find the nuisance of wearing a seat belt to outweigh the perceived benefits to themselves? Although one should generally respect individual values in a democratic society, we may wish to distinguish situations in which individuals are believed to be irrational or in which it is not feasible to educate people inexpensively about the rational course of action. One danger of regulation of this type, however, is that we may impose the preferences of policymakers on the individuals whose well-being is supposed to be protected, an approach that may not necessarily be welfare-enhancing for those affected by the regulation.

The one instance in which the differences in the value of a statistical life should clearly be utilized is when assessing future impacts of regulatory programs. Because future benefits are deferred, discounting them to bring them to present value reduces the current value of regulatory policies with long-run effects, such as reducing global warming. If, however, we recognize that future generations are likely to be wealthier, then much of the role of discounting will be muted. Consider, for example, the situation in which the income elasticity of the value of the benefits is 1.0. Let the benefit n years hence be b, the growth rate in income between now and the time when the benefits are realized be g, and the interest rate be r. The present value of the benefits is then the dollar benefit value b scaled up by income growth and discounted back to the present, so that what matters is the spread between the growth rate of income and the interest rate:

Present value = b(1 + g)^n / (1 + r)^n ≈ b e^(g − r)n.
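To make the arithmetic concrete, the short Python sketch below evaluates this present value expression for an illustrative deferred benefit; the particular values of b, g, r, and n are assumptions chosen for illustration rather than figures from the text.

```python
# Present value of a deferred life-saving benefit when the value of the benefit
# grows with income (income elasticity of 1.0). All inputs are illustrative
# assumptions, not estimates from the chapter.

def present_value(b, g, r, n):
    """PV = b * ((1 + g) / (1 + r)) ** n, so only the spread between g and r matters much."""
    return b * ((1 + g) / (1 + r)) ** n

b = 9.6e6   # benefit value in today's dollars
n = 50      # years until the benefit is realized
r = 0.03    # annual discount rate

print(present_value(b, 0.00, r, n))  # no income growth: heavy discounting (about $2.2 million)
print(present_value(b, 0.02, r, n))  # 2% income growth: discounting largely muted (about $5.9 million)
```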

Thus the growth in income will mute to a large extent the influence of discounting when weighing the consequences of policies in the future. Should one discount at all or simply treat all policy outcomes in different years equally, irrespective of the time when they transpire? This procedure has been advocated by the EPA because doing so will greatly enhance the attractiveness of its efforts, many of which have deferred effects. The fallacy of ignoring discounting altogether is apparent when one considers that in the absence of discounting, one would never take an action in which there will be a permanent adverse effect of any kind. The costs of such efforts will always be infinite, and such policies would never be pursued.

Labor Market Model

Most empirical estimates of the value of a statistical life have been based on labor market data. The general procedure is to estimate the wage-risk tradeoff that workers implicitly make as part of their jobs and to use the implications of this tradeoff as an estimate of the value of a statistical life. As the starting point for the analysis, consider figure 20.1. Sketched in this diagram are two curves, EU1 and EU2, which are constant expected utility loci for the worker. The combination of wages and risk on each curve gives the worker the same expected utility. The required wage rate is an increasing function of the risk, which is true for a wide range of individual preferences; all that is required is that one would rather be healthy than not. It is not necessary that one be risk-averse in the sense of unwilling to accept actuarially unfair financial bets. Higher wage rates and lower risk levels are preferred, so that the direction of preference is toward the upper left in the graph.

Figure 20.1 Worker’s Constant Expected Utility Locus for Wages and Risk

Workers do not have all wage-risk combinations to choose from, but instead are limited to those offered by firms. Figure 20.2 illustrates how the available set of job opportunities is constructed. Each particular firm has a constant expected profits locus. Thus, one firm will have a locus MM, where this isoprofit curve traces out the wage-risk combinations that give the firm the same level of profits. For example, if a firm lowers the risk level by investing in additional health and safety equipment, then to maintain the same level of profits, the wage rate must go down. As a result, the wage that the firm can offer while maintaining the same level of profits will be an increasing function of risk. The curvature of the MM isoprofit curve is dictated by the fact that additional risk reductions become increasingly difficult to achieve, so that as one moves to the left along the risk axis, the additional safety expenditures required of the firm become increasingly great. Consequently, the magnitude of the wage increase required for any particular risk reduction becomes greater. Curve NN is another example of an isoprofit curve for a different firm in the industry.

Figure 20.2 Derivation of Market Offer Curve

The outer envelope of the isoprofit curves for the entire industry provides the market offer curve VV available to workers. A worker's task is thus to select the point along VV that gives the worker the highest level of expected utility. Points below this curve will be dominated by points along it, since a point below VV will be less desirable than a job that offers the same risk at a higher wage rate. The nature of market equilibrium is illustrated in figure 20.3.9 Worker 1 achieves his constant expected utility, EU1, at the point of tangency with the market opportunity locus VV, with the tangency at point X. In contrast, worker 2 selects a higher risk-wage combination at point Y. Because of the aforementioned heterogeneity in individual tastes, workers will generally sort themselves along the part of the wage offer curve that best suits their preferences.

Figure 20.3 Equilibrium in the Market for Risky Jobs

How one interprets differences in the estimated value of a statistical life depends on people's opportunities, not just their preferences. African American workers in the United States, for example, have estimated values of a statistical life of about half to two-thirds those of whites, depending on the particular analysis and sample.10 However, this difference in risk-wage tradeoffs is not attributable solely to a greater willingness to bear risk. Mexican immigrants who are not fluent in English face even greater challenges, as they work in jobs that pose above average risks but receive no extra remuneration for these risks.11 These phenomena reflect failures of efficient market operation, as disadvantaged workers often do not have access to the same kinds of jobs. In particular, the market offer curve VV facing African American workers and Mexican immigrants is lower and flatter than that facing whites, reflecting their quite different array of labor market opportunities. Estimates of the value of a statistical life in the labor market consequently reflect the economic forces of both supply and demand, which jointly determine the observed tradeoff rate.

The task of empirical analysis in this area is to analyze the nature of the observed market equilibrium points reflected in data sets on worker behavior. Thus, if we observe points X and Y, the estimation of a linear relationship between wages and risk would yield the curve AA shown in figure 20.3. The slope of AA gives the estimated wage-risk tradeoff. In effect, what this curve does is indicate the terms of trade that workers, on average, are willing to accept between risk and wages. These terms of trade in turn can be used to extrapolate the implicit value that workers attach to a statistical death. The details of the methodology vary, depending on the particular data set used for the estimation. In general, the statistical approach involves the use of a large set of data on individual employment behavior.

Table 20.3 presents examples of international evidence on the value of a statistical life.12 A group of thirty studies from the U.S. labor market considered in the meta-analysis by Viscusi and Aldy has a median value of $9.6 million. These studies have served as a primary basis for the value of a statistical life used by U.S. regulatory agencies. Table 20.3 also presents similar estimates that have been generated for ten other countries. While the methodologies used in these studies differ, the results are broadly consistent with expectations, given the income level differences. The lowest estimates are for such countries as Taiwan, South Korea, and India. Japan and Switzerland have higher estimates of the value of a statistical life.

Table 20.3
Labor Market Estimates of the Value of a Statistical Life, Various Countries

Country | Value of a Statistical Life ($ million)
Median value from thirty U.S. studies | 9.6
Australia | 5.8
Austria | 5.4–9.0
Canada | 5.4–6.5
Hong Kong | 2.3
India | 1.7–2.1
Japan | 13.4
South Korea | 1.1
Switzerland | 8.7–11.9
Taiwan | 0.3–1.2
United Kingdom | 5.8

Source: Based on estimates in W. Kip Viscusi and Joseph E. Aldy, “The Value of a Statistical Life: A Critical Review of Market Estimates throughout the World,” Journal of Risk and Uncertainty 27 (August 2003): 5–76. Single representative studies are drawn from their table 4.
Note: All estimates are in 2015 U.S. dollars.
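One way to read table 20.3 is through the income elasticity discussed earlier: a baseline value of a statistical life can be scaled to a different income level using a constant-elasticity adjustment. The Python sketch below illustrates this benefit-transfer arithmetic; the income ratio is a hypothetical assumption for illustration, not a figure from the table.

```python
# Benefit transfer of a value of a statistical life (VSL) across income levels
# with a constant income elasticity: vsl_target = vsl_base * (income ratio) ** elasticity.
# The 0.25 income ratio is a hypothetical assumption for illustration.

def transfer_vsl(vsl_base, income_ratio, elasticity):
    return vsl_base * income_ratio ** elasticity

us_vsl = 9.6e6        # median U.S. labor market estimate from table 20.3 (dollars)
income_ratio = 0.25   # hypothetical: target-country income is one-quarter of U.S. income

for elasticity in (0.5, 0.6, 1.0):  # range of elasticities discussed in the text
    vsl = transfer_vsl(us_vsl, income_ratio, elasticity)
    print(f"elasticity {elasticity}: ${vsl / 1e6:.1f} million")
```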

While serving as chief economist for the World Bank, Larry Summers wrote a memo hypothesizing that poorer countries should be more willing to accept risks, such as being the site for hazardous waste storage. The memo generated substantial controversy. However, his economic assumption that the value of a statistical life does vary across countries in the expected direction is borne out by the data in table 20.3. Because these data reflect implicit values of a statistical life based on market decisions, they imply that in these poorer countries, people are already striking a different balance between safety and money than in more affluent nations.

Empirical Estimates of the Value of a Statistical Life

The general form of the estimation depends in part on the nature of the wage and risk information that is available, such as whether the data pertain to annual earnings or hourly wage rates. One form of the estimating equation is

Annual earnings = β0 + β1(Annual death risk) + Σk βk Xk + ε,   (20.5)

where the Xk denote the other characteristics of the worker and her job that are included as explanatory variables.

The dependent variable in this analysis is annual worker earnings, which is not as accurate a measure as the worker's hourly wage rate, but for expositional purposes it facilitates the task of showing how value of a statistical life estimates are constructed from the equation. The explanatory variables include the annual death risk facing the worker. In general, this information is matched to the workers in the sample based on their responses regarding their industry or occupation. The coefficient β1 in equation 20.5 indicates how annual earnings will be affected by an increase in the annual death risk. If the annual death risk were 1.0, then β1 would give the change in annual earnings required to face one expected death. Thus, for the equation as it has been set up here, β1 is the estimate of the value of a statistical life. In particular, it represents the tradeoff that workers exhibit between earnings and the risk of death.

Studies in the United States have used a variety of different fatality-risk data sets that provide risk estimates that can be matched to workers based on their industry or occupation. While some studies have used life insurance data or workers' compensation rate data, the most reliable estimates are those based on fatality rates constructed from the U.S. Bureau of Labor Statistics' Census of Fatal Occupational Injuries. These data are based on a comprehensive census of all job-related fatalities, verified using multiple sources, rather than an extrapolation from a smaller sample of fatalities. The Census of Fatal Occupational Injuries data are also available by individual fatality, making it possible to construct refined risk measures by industry, occupation, age, gender, race, immigrant status, and the nature of the fatal injury.

The other variables included in equation 20.5 are designed to control for other aspects of the worker and her job that will influence earnings. In general, the people who earn the highest incomes in our society also have fairly low-risk jobs. This observation, which can be traced back to the time of John Stuart Mill, reflects the positive income elasticity of the demand for health. By including a detailed set of other variables, covering factors such as worker education and union status, one can disentangle the premium for job risks from compensation for other attributes of the worker and her job.

Most U.S. value of a statistical life estimates using labor market data range from under $5.2 million to $12.4 million. This heterogeneity is not solely a consequence of the imprecision of the statistical measures but instead stems from the fact that these studies are measuring different things. The value of a statistical life estimates for samples of different riskiness are expected to differ because the mix of workers and their preferences across samples may be quite different. In addition, the degree to which different risk variables measure the true risk associated with the job may differ substantially across risk measures. Even with the current state of econometric techniques and the substantial literature devoted to this issue, economists cannot yet pinpoint the value of a statistical life that is appropriate in every particular instance. However, we have a good idea of the general range in which such values fall, and from the standpoint of making policy judgments with respect to the ballpark in which our policies should lie, this guidance should be sufficient.
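The logic of equation 20.5 can be illustrated on simulated data. The following sketch generates artificial workers whose earnings embed an assumed risk premium and then recovers the value of a statistical life as the coefficient on the death risk; the sample, the earnings equation, and the assumed $9.6 million tradeoff are all fabricated for illustration and are not the data or estimates of the studies cited.

```python
# Hedonic wage regression in the spirit of equation 20.5, on simulated data.
# Everything here (sample size, earnings equation, the true tradeoff) is made up
# for illustration; actual studies match large worker samples to BLS fatality rates.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
death_risk = rng.uniform(0.0, 1e-4, n)   # annual fatality risk (0 to 10 per 100,000)
education = rng.normal(13.0, 2.0, n)     # years of schooling, a control variable
true_vsl = 9.6e6                         # assumed earnings-risk tradeoff (dollars)

# Annual earnings = base pay + schooling premium + risk premium + noise
earnings = 30_000 + 2_500 * education + true_vsl * death_risk + rng.normal(0, 2_000, n)

# OLS of earnings on the death risk and controls; the risk coefficient is the VSL estimate
X = np.column_stack([np.ones(n), death_risk, education])
beta, *_ = np.linalg.lstsq(X, earnings, rcond=None)
print(f"estimated value of a statistical life: ${beta[1]:,.0f}")  # near the assumed $9.6 million
```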

Value of Risks to Life for Regulatory Policies

For the most part, regulatory agencies have used estimates drawn from the labor market value of a statistical life literature to value the benefits of regulations that reduce risks to life. Table 20.4 summarizes these unit benefit values from a wide range of regulatory impact analyses from 1985 to 2012.13 The early estimates of the value of a statistical life were as low as $1 million for branches of the U.S. Department of Transportation, which previously used the human capital measure for the value of risks to life. Adoption of the larger labor market estimates has gradually led to an increase in the values used and convergence throughout the federal government. For the more recent years, agencies have used estimates in the $8 million to $10 million range. While the figure used by the Federal Aviation Administration in 2012 in setting flight crew member duty and rest requirements was only $6.2 million, in 2015 the U.S. Department of Transportation subsequently increased its estimate of the value of a statistical life for all branches of the agency to $9.4 million.

Table 20.4
Selected Values of Statistical Life Used by U.S. Regulatory Agencies

Year | Agency | Regulation | Value of a Statistical Life ($ million)
1985 | Federal Aviation Administration | Protective Breathing Equipment (50 FR 41452) | 1.3
1985 | Environmental Protection Agency | Regulation of Fuels and Fuel Additives; Gasoline Lead Content (50 FR 9400) | 2.2
1988 | Federal Aviation Administration | Improved Survival Equipment for Inadvertent Water Landings (53 FR 24890) | 1.9
1988 | Environmental Protection Agency | Protection of Stratospheric Ozone (53 FR 30566) | 6.3
1990 | Federal Aviation Administration | Proposed Establishment of the Harlingen Airport Radar Service Area, TX (55 FR 32064) | 2.7
1994 | Food and Nutrition Service (USDA) | National School Lunch Program and School Breakfast Program (59 FR 30218) | 2.2, 4.6
1995 | Consumer Product Safety Commission | Multiple Tube Mine and Shell Fireworks Devices (60 FR 34922) | 7.4
1996 | Food Safety Inspection Service (USDA) | Pathogen Reduction; Hazard Analysis and Critical Control Point Systems (61 FR 38806) | 2.5
1996 | Food and Drug Administration | Regulations Restricting the Sale and Distribution of Cigarettes and Smokeless Tobacco to Protect Children and Adolescents (61 FR 44396) | 3.5
1996 | Federal Aviation Administration | Aircraft Flight Simulator Use in Pilot Training, Testing, and Checking and at Training Centers (61 FR 34508) | 4.0
1996 | Environmental Protection Agency | Requirements for Lead-Based Paint Activities in Target Housing and Child-Occupied Facilities (61 FR 45778) | 8.2
1996 | Food and Drug Administration | Medical Devices; Current Good Manufacturing Practice Final Rule; Quality System Regulation (61 FR 52602) | 7.3
1997 | Environmental Protection Agency | National Ambient Air Quality Standards for Ozone (62 FR 38856) | 8.2
1999 | Environmental Protection Agency | Radon in Drinking Water Health Risk Reduction and Cost Analysis (64 FR 9560) | 8.2
1999 | Environmental Protection Agency | Control of Air Pollution from New Motor Vehicles: Tier 2 Motor Vehicle Emissions Standards and Gasoline Sulfur Control Requirements (65 FR 6698) | 8.3
2000 | Consumer Product Safety Commission | Portable Bed Rails; Advance Notice of Proposed Rulemaking (65 FR 58968) | 6.6
2000 | Environmental Protection Agency | Arsenic in Drinking Water Rule (65 FR 38887) | 8.3
2006 | Environmental Protection Agency | National Primary Drinking Water Regulations: Ground Water Rule; Final Rule (71 FR 65573) | 9.1
2007 | Department of Homeland Security | Advance Information on Private Aircraft Arriving and Departing the United States (72 FR 53393) | 3.3–6.6
2008 | Consumer Product Safety Commission | Standard for the Flammability of Residential Upholstered Furniture (73 FR 11701) | 5.4
2010 | Federal Railroad Administration | Positive Train Control Systems (79 FR 49693) | 6.3
2010 | NHTSA | Federal Motor Vehicle Safety Standards; Roof Crush Resistance (75 FR 17590) | 6.1
2010 | Occupational Safety and Health Administration | Cranes and Derricks in Construction (75 FR 47905) | 9.1
2011 | Environmental Protection Agency | Proposed Toxics Rule (76 FR 24976) | 8.9
2011 | Federal Railroad Administration | Railroad Workplace Safety; Adjacent-Track On-Track Safety for Roadway Workers (76 FR 74585) | 6.3
2011 | Occupational Safety and Health Administration | General Working Conditions in Shipyard Employment (77 FR 18) | 8.9
2011 | Transportation Security Administration | Air Cargo Screening (76 FR 51847) | 6.1
2012 | Federal Aviation Administration | Flightcrew Member Duty and Rest Requirements (77 FR 329) | 6.2
2012 | Environmental Protection Agency | Final Revisions to the National Ambient Air Quality Standards for Particulate Matter (77 FR 38889) | 9.3
2012 | Environmental Protection Agency | Petroleum Refineries New Source Performance Standards (77 FR 56421) | 9.7

Source: W. Kip Viscusi, “The Value of Individual and Societal Risks to Life and Health,” in Mark J. Machina and W. Kip Viscusi, eds., Handbook of the Economics of Risk and Uncertainty (Amsterdam: Elsevier, 2014), tables 7.2 and 7.3. Note: All estimates are in 2012 U.S. dollars. FR, Federal Register; NHTSA, National Highway Traffic Safety Administration; USDA, U.S. Department of Agriculture. Double entries for the value of a statistical life reflect different benefit-cost estimates.

It is useful to examine the government policies that have actually been pursued in the social regulation area to see the extent to which they conform with an appropriate value of a statistical life. Because of agencies' restrictive statutory guidance, the amounts that are actually spent to reduce risks to life are often quite different and may bear little relationship to the benefit values. Table 20.5 summarizes a variety of key aspects of major regulations based on OMB regulatory reviews.14 These regulations covered such diverse issues as cabin fire protection for airplanes, grain dust regulations for grain handling facilities, and environmental controls for arsenic/copper smelters. Although government officials have not generated comparable figures for more recent years, the benefit-cost performance of agencies has improved over time.

Table 20.5
Costs of Various Risk-Reducing Regulations per Expected Life Saved

Regulation | Year | Agency | Cost per Life Saved ($ million) | Cost per Normalized Life Saved ($ million) | Cost per Year of Life Saved ($ million)
Unvented space heater ban | 1980 | CPSC | 0.16 | 0.16 | 0.00
Aircraft cabin fire protection standard | 1985 | FAA | 0.16 | 0.16 | 0.00
Seat belt/air bag | 1984 | NHTSA | 0.16 | 0.16 | 0.00
Steering column protection standards | 1967 | NHTSA | 0.16 | 0.16 | 0.00
Underground construction standards | 1989 | OSHA | 0.16 | 0.16 | 0.00
Trihalomethane in drinking water | 1979 | EPA | 0.31 | 0.94 | 0.00
Aircraft seat cushion flammability | 1984 | FAA | 0.78 | 0.94 | 0.00
Alcohol and drug controls | 1985 | FRA | 0.78 | 0.94 | 0.00
Auto fuel system integrity | 1975 | NHTSA | 0.78 | 0.78 | 0.00
Auto wheel rim servicing | 1984 | OSHA | 0.78 | 0.94 | 0.00
Aircraft floor emergency lighting | 1984 | FAA | 1.09 | 1.40 | 0.00
Concrete and masonry construction | 1988 | OSHA | 1.09 | 1.40 | 0.00
Crane-suspended personnel platform | 1988 | OSHA | 1.25 | 1.56 | 0.16
Passive restraints for trucks and buses | 1989 | NHTSA | 1.25 | 1.25 | 0.00
Auto side impact standards | 1990 | NHTSA | 1.56 | 1.56 | 0.16
Children's sleepwear flammability ban | 1973 | CPSC | 1.56 | 1.87 | 0.16
Auto side door supports | 1970 | NHTSA | 1.56 | 1.56 | 0.16
Low-altitude wind shear equipment and training | 1988 | FAA | 2.50 | 2.96 | 0.16
Metal mine electrical equipment standards | 1970 | MSHA | 2.65 | 3.12 | 0.16
Trenching and excavation standards | 1989 | OSHA | 2.81 | 3.43 | 0.16
Traffic alert and collision avoidance systems | 1988 | FAA | 2.81 | 3.43 | 0.16
Hazard communication standard | 1983 | OSHA | 2.96 | 7.49 | 0.31
Trucks, buses, and multipurpose passenger vehicles side impact | 1989 | NHTSA | 4.06 | 4.06 | 0.16
Grain dust explosion prevention standards | 1987 | OSHA | 5.15 | 6.24 | 0.31
Rear lap/shoulder belts for autos | 1989 | NHTSA | 5.93 | 5.93 | 0.31
Standards for radionuclides in uranium mines | 1984 | EPA | 6.40 | 15.76 | 0.78
Benzene NESHAP (original: fugitive emissions) | 1984 | EPA | 6.40 | 15.76 | 0.78
Ethylene dibromide in drinking water | 1991 | EPA | 10.61 | 26.52 | 1.25
Benzene NESHAP (revised: coke byproducts) | 1988 | EPA | 11.39 | 28.24 | 1.40
Asbestos occupational exposure limit | 1972 | OSHA | 15.44 | 38.53 | 1.87
Benzene occupational exposure limit | 1987 | OSHA | 16.54 | 41.34 | 2.03
Electrical equipment in coal mines | 1970 | MSHA | 17.16 | 20.75 | 0.94
Arsenic emission standards for glass plants | 1986 | EPA | 25.12 | 62.71 | 2.96
Ethylene oxide occupational exposure limit | 1984 | OSHA | 38.06 | 95.16 | 4.52
Arsenic/copper NESHAP | 1986 | EPA | 42.74 | 106.70 | 5.15
Hazardous waste listing of petroleum refining sludge | 1990 | EPA | 51.32 | 128.08 | 6.08
Cover/move uranium mill tailings (inactive) | 1983 | EPA | 58.81 | 147.11 | 7.02
Benzene NESHAP (revised: transfer operations) | 1990 | EPA | 61.15 | 152.72 | 7.33
Cover/move uranium mill tailings (active sites) | 1983 | EPA | 83.62 | 208.73 | 9.98
Acrylonitrile occupational exposure limit | 1978 | OSHA | 95.63 | 238.99 | 11.39
Coke ovens occupational exposure limit | 1976 | OSHA | 117.94 | 294.68 | 14.20
Lockout/tagout | 1989 | OSHA | 131.66 | 159.74 | 7.64
Asbestos occupational exposure limit | 1986 | OSHA | 137.44 | 343.36 | 16.54
Arsenic occupational exposure limit | 1978 | OSHA | 198.59 | 495.92 | 23.71
Asbestos ban | 1989 | EPA | 205.61 | 513.55 | 24.65
Diethylstilbestrol (DES) cattle-feed ban | 1979 | FDA | 231.82 | 579.07 | 27.77
Benzene NESHAP (revised: waste operations) | 1990 | EPA | 312.31 | 780.31 | 37.44
1,2-dichloropropane in drinking water | 1991 | EPA | 1,212.74 | 3,029.68 | 145.24
Hazardous waste land disposal ban | 1988 | EPA | 7,782.37 | 19,441.81 | 931.94
Municipal solid waste landfills | 1988 | EPA | 35,485.01 | 88,648.72 | 4,249.75
Formaldehyde occupational exposure limit | 1987 | OSHA | 160,091.57 | 399,941.41 | 19,172.71
Atrazine/alachlor in drinking water | 1991 | EPA | 170,989.26 | 427,166.06 | 20,477.81
Hazardous waste listing for wood-preserving chemicals | 1990 | EPA | 10,585,882.32 | 26,445,689.24 | 1,267,770.35

Source: W. Kip Viscusi, Jahn K. Hakes, and Alan Carlin, “Measures of Mortality Risk,” Journal of Risk and Uncertainty 14 (May/June 1997): 228–229. Note: Dollar amounts are in 2015 dollars. CPSC, Consumer Product Safety Commission; EPA, Environmental Protection Agency; FAA, Federal Aviation Administration; FDA, Food and Drug Administration; FRA, Federal Railroad Administration; MSHA, Mine Safety and Health Administration; NESHAP, National Emissions Standards for Hazardous Air Pollutants; NHTSA, National Highway Traffic Safety Administration; OSHA, Occupational Safety and Health Administration.

Three columns of data are of interest in table 20.5. The first column of data displays the cost per expected life saved by each of the programs. Some of these efforts, such as steering column protection for automobiles and other entries at the top of the table, are bargains. Their cost per expected life saved is well below $1 million. For concreteness, suppose that we took as an appropriate value of a statistical life a figure of $9.6 million in 2015 dollars. Then all regulations in the top part of the table above the EPA's ethylene dibromide in drinking water regulation would pass a benefit-cost test. Conversely, all regulations in the bottom part of the table could not be justified on benefit-cost grounds.

The next column of statistics gives the cost per normalized life saved, where all lives are converted into accident equivalents based on the discounted number of life-years saved. Acute accident preventions save lives with much more substantial duration than do anticancer policies. The consequences of these adjustments are particularly great for the health-oriented regulations, which already tended to have low cost effectiveness. Once one adjusts for the duration of life lost, all regulations beginning with the EPA's radionuclide regulation for uranium mines no longer pass a benefit-cost test. The effect of such adjustments is substantial. The EPA asbestos ban cost $205 million per life saved, but it imposed a cost of $514 million per normalized life saved. Whether one should adjust for age and how one should adjust represents an economic controversy that will be examined in chapter 21 with respect to EPA policies. But clearly there are considerable differences across agencies in the life extensions resulting from their regulations.

The final column of statistics in table 20.5 gives the cost per expected life-year saved. The U.S. Food and Drug Administration is notable among regulatory agencies in focusing on life-years when assessing policies. Some of these estimated values are considerable, as the OSHA asbestos regulations in 1986 cost over $16 million per expected life-year saved, which is well in excess of a reasonable value for preventing the loss of a year of life for a typical worker.

The wide distribution of different agencies in terms of the efficacy of their life-saving efforts is noteworthy. All regulations from the U.S. Department of Transportation—the National Highway Traffic Safety Administration and the Federal Aviation Administration—pass a benefit-cost test. Indeed, that department is exceptional in that it will not pursue any regulation that does not meet such a test of efficacy. In contrast, during the regulatory era shown in table 20.5, virtually every regulation by the EPA and OSHA fails a benefit-cost test because of the restrictive nature of their legislative mandates.

An additional message of table 20.5 is that in general, it is not necessary to pinpoint the exact value of a statistical life that is appropriate for any government policy. For the most part, rough judgments regarding the efficacy of a regulation can tell us a great deal. We know, for example, if OSHA arsenic regulations save lives at a cost of $137.4 million per life, that such efforts are out of line with what the beneficiaries of such an effort believe the value of the regulation to be. Moreover, there is likely to be a wide range of other regulatory alternatives by OSHA or other agencies that are likely to be more cost-effective ways of saving lives. Unfortunately, OMB has limited influence; there is no record of OMB ever rejecting a regulation with a cost per expected life saved of under $100 million. Although the range in the value of a statistical life estimates for regulatory policies summarized in table 20.5 may seem to be substantial, in practice, many government policies are proposed but not issued because the cost per expected life saved is even higher than many of the outliers in this table. For example, in 1984, the EPA proposed regulations for benzene/maleic anhydride that could cost over $1 billion per expected life saved. This regulation was rejected by the OMB as being too expensive. Calculating the costs, benefits, and appropriate reference values for the value of a statistical life often highlights such gross policy distortions.
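The screening logic just described is simple enough to automate. The Python sketch below compares a handful of entries drawn from table 20.5 with the $9.6 million benchmark; the selection of rules is arbitrary and the code is purely illustrative.

```python
# Benefit-cost screening of regulations by their cost per expected life saved,
# using a $9.6 million value of a statistical life and a few entries from table 20.5
# (2015 dollars, in $ millions). Illustrative only.
vsl = 9.6

cost_per_life_saved = {
    "Unvented space heater ban (CPSC, 1980)": 0.16,
    "Hazard communication standard (OSHA, 1983)": 2.96,
    "Ethylene dibromide in drinking water (EPA, 1991)": 10.61,
    "Asbestos ban (EPA, 1989)": 205.61,
    "Formaldehyde occupational exposure limit (OSHA, 1987)": 160_091.57,
}

for rule, cost in cost_per_life_saved.items():
    verdict = "passes" if cost <= vsl else "fails"
    print(f"{rule}: ${cost:,.2f} million per life saved -> {verdict} the benefit-cost test")
```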
Widespread criticism of the checkered historical record with respect to the cost per expected life saved also may have prompted government agencies to structure policies without such imbalances between benefits and costs. The Obama administration was particularly focused on demonstrating that calculated regulatory benefits exceeded costs, as evidenced by the statistics in table 2.2 and table 2.3, where estimated benefits exceeded costs for all major executive branch agencies.

Survey Approaches to Valuing Policy Effects

In many circumstances, we do not have readily available market data that can be used to estimate either
implicit or explicit prices. How much, for example, is it worth to prevent genetic damage or to save an endangered species? In the absence of existing data on these issues, an approach that has been used in the benefit-valuation literature for several decades has been to run a survey in which individuals are polled with respect to these values. This approach is now the dominant methodology for assessing environmental benefits because of the paucity of good data on explicit or implicit environmental transactions. The actual procedures that have evolved for doing so in effect attempt to replicate the hedonic market estimate approach used to analyze quality-adjusted wage-risk tradeoffs and similar factors using survey data. For example, such studies would not ask people how much they valued a job injury but would instead ask how much wage compensation they would require to face extra risk. Similarly, assessment of an environmental amenity would focus on purchasing a reduction in certain risks in the environment rather than certain outcomes. The term contingent valuation has been used to describe such studies because they represent values that are contingent on the existence of a hypothetical market.15 Thus, they represent a hybrid between the initial survey approaches used in the literature and the market-based valuation econometric studies that began in the 1970s. More recent terminology refers to such studies as “stated preference studies,” thus drawing a distinction with the revealed preference approach underlying labor market estimates of the value of a statistical life. The objective of stated preference studies is to elicit benefit values by constructing survey questions concerning hypothetical situations. One could pose the valuation question in a variety of ways. In each case one must first give individuals information regarding the risk or other outcome to be valued. Respondents must have an accurate perception of what good is being valued. The first approach would be to ask individuals how much that particular benefit would be worth to them. This is a one-step procedure. A second approach would be an iterative one in which the individual first answered the open-ended question and then was asked whether he or she would be willing to pay a small amount more than the initial response. A third variant on this technique is that instead of asking open-ended questions, individuals could be given a series of bids, and they would then have to determine how high or low they would go. These bids could be given in either ascending or descending order. In the ascending case, an individual might first be asked whether he or she would be willing to pay $1 for improved air quality, and if the answer is yes, the respondent would be asked whether he or she would be willing to pay $2 for improved air quality, and so on, until the individual is not willing to increase the bid. A fourth approach is to utilize paired comparisons in which an individual is given an alternative product or other binary choices to make. Using interactive computer programs, one can then give an individual a succession of options to pick from to locate the point of indifference. All these variations in terms of the methodology are largely ones of process rather than economic content. The underlying issue is how we can best frame the survey questions to elicit the true underlying economic values that individuals have. 
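As a concrete illustration of the ascending-bid variant, the sketch below walks a hypothetical respondent up a bid ladder and records the last bid accepted; the ladder, the stopping rule, and the respondent's true value are all invented for illustration and are not drawn from any study.

```python
# Schematic ascending-bid elicitation for a stated preference survey.
# The bid ladder and the respondent's true willingness to pay are hypothetical.

def ascending_bid_wtp(says_yes, bids):
    """Raise the bid until the respondent declines; return the last accepted bid."""
    accepted = 0.0
    for bid in bids:
        if says_yes(bid):
            accepted = bid
        else:
            break
    return accepted

true_value = 7.0                        # hypothetical true WTP for cleaner air ($/month)
bid_ladder = [1, 2, 4, 6, 8, 10, 15]    # ascending bids presented to the respondent

wtp = ascending_bid_wtp(lambda bid: bid <= true_value, bid_ladder)
print(f"elicited willingness to pay: ${wtp} per month")  # brackets the true value from below
```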
In the case of market outcomes, we know from revealed preference theory that these values will be expressed in individual decisions, but in the case of surveys, the values that we elicit may be sensitive to the manner in which we attempt to determine individual preferences. More generally, considerable care must be exercised in the design of such survey studies so that they will give us reliable results. Often such studies rely on “convenience samples,” such as groups of students, but our ultimate objective is to ascertain the willingness-to-pay of actual beneficiaries of the project, not the willingness-to-pay of students in the class, whose responses may be biased in part by substantial demand effects (they may give the answers that they expect their professor wants to see). Perhaps the major guideline in assessing these studies is to determine the extent to which they replicate market processes in a
meaningful manner.

When interview studies first came into use in the literature, economists feared that individuals would misrepresent their true values for strategic reasons. Advocates of pollution control efforts, for example, might give responses that indicate enormous willingness-to-pay amounts, knowing that they will not be taxed on the basis of their response and hoping that a high response will tilt the policy in their favor. In practice, the strategic issue has not been a major problem for the survey studies. A more fundamental difficulty is that some individuals may not give thoughtful or meaningful responses to the question, inasmuch as it does not involve a decision that they actually make. Moreover, because many of the decisions involve risks, some of which are at very low probabilities, the results will not reflect their underlying values but instead will be contaminated by whatever irrationalities influence one's decisions involving low-probability events.

Valuation of Air Quality

The nature of the performance of the survey approach varies from study to study, but some suggestions as to its likely precision are given by a study of air pollution valuation.16 Two approaches were used to value air quality. In the first, a hedonic rent-gradient equation for the Los Angeles area was estimated, analyzing the relationship of home sale prices to a variety of factors likely to influence house price (such as house age, area, school quality, public safety, and distance to the beach). In addition, this equation included measures of pollution in the Los Angeles area, including either total suspended particulates or nitrous oxide concentration levels. The authors found substantial housing price effects of pollution; controlling for other aspects of the housing market, higher pollution levels lowered the price of the house. A survey approach was also used to assess the amount that individuals would be willing to pay in terms of a higher utility bill to achieve cleaner air. The expressed willingness-to-pay for different levels of air quality was roughly one-third of the market-based estimates. These results suggest that at least in this case, overstatement of valuations in surveys may not be a problem, although this conclusion may not be true more generally. In addition, an exact correspondence may not exist between survey valuation estimates and market estimates. Comparisons that have been done for wage equations have yielded results that are more comparable to those obtained with market data, but in the job risk case, one is dealing with a risk that is currently traded in a market and that individuals may have already thought about in this context, increasing the accuracy of the survey responses.

Supplementary Nature of the Survey Approach

Overall, survey approaches to establishing the benefits of social regulation represent an important complement to analyses using market data. The survey methodology is still best regarded as a substitute for market valuations when the latter are unavailable. There may never be any general conclusions regarding the accuracy of such studies, because accuracy will vary from study to study, depending on the extent to which a realistic market context was created and the degree to which the individuals conducting the survey motivated the survey participants to give thoughtful and honest answers.

Sensitivity Analysis and Cost Effectiveness

Typically, it will not be feasible to place dollar values on all outcomes of interest.
In such circumstances, one could undertake a cost-effectiveness analysis of the cost per unit outcome achieved; such
indices may often be instructive. In addition, if there are multiple outcomes that one would wish to value but cannot, one can perform a sensitivity analysis assigning different relative weights to them to convert all of the health effects into a common cost-effectiveness index. Table 20.6 summarizes calculations of this type that formed the basis for resolving the debate over the OSHA hazard communication regulation. The three health outcomes involved are lost-workday job injuries, disabling illnesses, and cases of cancer. Suppose that, based on past studies on the relative valuation of cancer, we know that lost-workday job injuries have one-twentieth of the value of a case of cancer. In addition, suppose that the main uncertainty is with respect to the value of disabling illnesses, where our task is to assess how severe this outcome is compared with injuries and cancer. The calculations in this table explore two different sets of weights, one in which lost-workday injuries and disabling illnesses are given the same weight, and a second in which disabling illnesses are viewed as being five times more severe than lost-workday cases.

Table 20.6
Cost-Effectiveness Measures for Hazard Communication Standard

Measure | Weights: 1, 1, 20 | Weights: 1, 5, 20
Net discounted costs less monetized benefits | $2.632 × 10^9 | $2.632 × 10^9
Total lost-workday equivalents (discounted) | 9.5 × 10^4 | 24.7 × 10^4
Net discounted cost/lost-workday injury equivalent | $27,900 | $10,700

Source: W. Kip Viscusi, “Analysis of OMB and OSHA Evaluations of the Hazard Communication Proposal,” report prepared for Secretary of Labor Raymond Donovan, March 15, 1982. Note: The values listed are the relative weights placed on lost-workday cases (always 1), disabling illnesses (1 or 5), and cancers (always 20) in constructing a measure of lost-workday equivalents. These weights were used to convert all health effects into a common health metric.
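The arithmetic behind table 20.6 is straightforward: health outcomes are converted into lost-workday-injury equivalents using the chosen weights, and the net cost is divided by that total. The short sketch below reproduces the bottom row of the table from its first two rows; the small differences from the published $27,900 and $10,700 figures reflect rounding in the table.

```python
# Cost-effectiveness index from table 20.6: net discounted cost divided by the
# total number of lost-workday-injury equivalents under each weighting scheme.
net_cost = 2.632e9  # net discounted costs less monetized benefits, in dollars

lost_workday_equivalents = {
    "weights 1, 1, 20": 9.5e4,    # injuries and disabling illnesses weighted equally
    "weights 1, 5, 20": 24.7e4,   # disabling illnesses weighted five times as heavily
}

for scheme, total in lost_workday_equivalents.items():
    print(f"{scheme}: ${net_cost / total:,.0f} per lost-workday injury equivalent")
```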

The first row of table 20.6 gives the net discounted costs less benefits of other kinds from the project, which total $2.6 billion. The second row gives the discounted (at 5 percent) number of lost-workday injury equivalents prevented, where these lost-workday equivalents have been calculated using the two sets of weights indicated above. Finally, the third row of the table gives the net discounted cost per lost-workday injury equivalent prevented. These estimates are in the range of $10,000 to $30,000, which is in line with the general estimates of implicit values of nonfatal injuries that had been obtained in labor market studies at that time.

The approach used here is to establish one class of outcomes as the metric and to put the other outcomes in terms of it when calculating a cost-effectiveness index that can capture all the diverse impacts of a particular effort. In this case, the metric is that of lost-workday injury risk equivalents, but in other situations, the metric may be death risk equivalents prevented or expected number of birth defects prevented.

Risk-Risk Analysis

In the absence of a benefit-cost analysis for risk or environmental regulations, agencies will not be constrained regarding the stringency of these efforts. Because of the restrictive legislative mandates that these agencies have, which often require that they reduce risk irrespective of cost, the result is that many regulations that are promulgated generate considerable costs, sometimes as high as or higher than $100 million
per statistical life saved. Other than wasting societal resources, is there any harm from such profligacy?

Two classes of costs can be identified under the general heading of risk-risk analysis. First, a direct risk-risk tradeoff arises from regulatory efforts. An automobile recall, for example, may require that consumers drive their cars back to the dealer for the repair. Because all motor vehicle traffic is hazardous, requiring that people undertake extra driving will expose them to additional risk that, if the defect is minor, may be more hazardous than the defect being repaired. Second, risk regulations stimulate economic activity, such as manufacturing efforts to produce pollution control equipment or construction efforts to carry away the waste at a Superfund site. All economic activity is dangerous, leading to worker injuries and illnesses. Roughly 4 percent of every dollar of production in industry is associated with the health and safety costs of that production.17 Regulations that stimulate substantial economic efforts to meet the regulatory objectives will necessarily create risks in the process of stimulating economic activity. Even if for some reason the regulatory agency chooses to ignore the dollar costs, a comprehensive tally of the risk consequences of the effort may suggest that it is counterproductive.

The newest form of risk-risk analysis draws on the negative relationship between individual income and mortality. Regulatory expenditures represent a real opportunity cost to society as they take away resources from other uses, such as health care, that might enhance individual well-being. As a result, a mortality cost is associated with these regulatory efforts. The OMB raised this issue with OSHA, suggesting that some of the more expensive OSHA regulations may in fact do more harm than good through these mortality effects. Although the theoretical relationships are not controversial, the exact value of the regulatory expenditure that will lead to a statistical death remains a matter of debate. One approach has been to examine studies that directly link changes in individual income with mortality. Analysis by OMB economists Randall Lutter and John Morrall indicates that a statistical life may be lost for an income decrease on the order of $16 million to $21 million.18 These and similar estimates were controversial because of the difficulty of sorting out the income-mortality relationship. Higher income will promote a healthier life, but being healthy also enables one to earn more income. An additional problem is that the order of magnitude of the effect was surprising. If safety expenditures are sufficiently worthwhile for people to be willing to express a value of a statistical life of over $9 million, is it really plausible that doubling the monetary amount per expected life saved will be counterproductive and increase one's mortality risk?

Another approach is to avoid these statistical issues and establish a formal theoretical link between the value of a statistical life from the standpoint of saving statistical lives and the amount of money spent by the government that will lead to the loss of a statistical life through the adverse health effects of making society poorer.19 This approach, developed by W. Kip Viscusi, leads to a value that is a factor of ten greater than the value of a statistical life.
Thus, for a value of a statistical life of $9.6 million, $96 million in government expenditures, or about $100 million per expected life saved, will lead to the loss of a statistical life. Using this estimate to assess the consequences of some of the regulations in table 20.5 yields some disturbing results. For the statistics in this table, the counterproductive expenditure level is $96 million per expected life saved. As a consequence, all regulations in table 20.5 at or below the OSHA coke oven occupational exposure limit will impose such high costs that the regulation will kill more people than it saves. While such a conclusion may seem bizarre, consider the situation in which government policies yield no beneficial effects. Squandering resources on such efforts will divert funds that could have been used for a wide range of health-enhancing expenditures. The risk-risk tradeoff rate can also be used to assess the net health impact of policies. Due to the risk-risk tradeoffs involved, the high cost per life saved of the OSHA arsenic occupational limit regulation leads to
the loss of two lives for every expected life saved. The contrasts are even greater for the regulations at the bottom of the table. Fortunately, the high costs per expected life saved usually do not arise from expending hundreds of billions of dollars on regulations such as the EPA hazardous waste listing for wood-preserving chemicals. The exorbitant cost per expected life saved figure is driven by the small denominator of few expected lives saved rather than an enormous cost numerator.

What these and many other examples of counterproductive regulations indicate is that there are real opportunity costs of regulations and that inordinate expenditure levels (as reflected in the regulations at the bottom of table 20.5) have adverse health consequences that outweigh the beneficial effects of the regulation. Economic research continues in the effort to pinpoint the expenditure level per expected life saved that becomes counterproductive. However, what we do know at this point is that the general principle involved suggests that regulatory agencies should be cognizant of the harm that is done when they fail to take costs into account. The concern of economists with cost is not a professional bias—it ultimately has a link to individual welfare. Such links in turn involve our health and are just as real as the concerns that motivate the government regulations.

Summary

Perhaps the most difficult policy issues arising in the social regulation area will always stem from the setting of appropriate prices for the outcomes achieved. Because social regulation efforts deal in large part with outcomes that are not the result of explicit market transactions, there will always be a need to establish the value of these efforts. As a society, we cannot allocate unlimited resources to any particular area of concern, however important it may seem. Because additional gains to health, safety, and the environment come at a diminishing rate for additional expenditures of money, we would quickly exhaust our resources long before we ran out of opportunities for spending.

The general economic approach to formulating a benefit assessment is not particularly controversial, but some of the empirical methodologies for establishing such values are still in their development stage. As the discussion in subsequent chapters indicates, in many instances, the absence of a specific empirical estimate for the benefit value is not the most pressing policy problem. Instead, a more fundamental difficulty is that the importance of making tradeoffs at all has not even been recognized. In these cases, substantial gains could be made by noting that we are not in an unconstrained situation and that there must be some balancing among the competing objectives.

Questions and Problems

1. This chapter's discussion of the value of a statistical life has focused on estimates from the labor market. Economists have also estimated risk-dollar tradeoffs based on price data for risky products. Smoke detector purchases, differences in riskiness of cars, and seat belt use decisions are among the contexts that have been considered. Can you think of any other market situations in which, if you had perfect data, it would be possible to infer an implicit risk-dollar tradeoff?

2. Environmental damage resulting from oil spills, such as that inflicted by the Exxon Valdez, is subject to quite specific environmental penalties.
In particular, the companies responsible for the damage are required to pay an amount sufficient to compensate society for the environmental loss that has occurred. In economic terms, this compensation must be sufficient to put society at the same level of utility we would have had if it had not been for the accident. Can you think of methodological approaches for determining the appropriate compensation amount for oil spills such as the Exxon Valdez, which led to the death of thousands of fish and birds, as well as oil residues on thousands of miles of Alaskan beaches?

3. Would you use the same value of a statistical life to assess the regulatory benefits in situations in which risks are incurred voluntarily, as opposed to situations in which they are incurred involuntarily? For example, would you treat smoking risk regulation policies and nuclear hazard risk regulation policies the same from the standpoint of benefit assessment? Put differently, should the government place a lower value on smokers' lives?

4. Suppose we were faced with two policy alternatives. Under one alternative, we will be saving identified lives, in particular Kip, Joseph, and David. Under a second policy option, we know that we will be saving three lives at random from the population, but we do not know whose lives they will be. Should we attach the same benefit value to each of these instances?

5. A variant on question 4 pertains to the girl trapped in a well. The mayor has to decide whether it is worth the $20 million in rescue costs to get her out of the well. Given your knowledge of the value of a statistical life, what would you recommend? Is this situation different than that in regulatory contexts?

6. Suppose there are two policy options. Policy 1 affects a population of 10,000, of whom 100 will die, so that the risk of death per person is one in 100. The second policy will likewise save 100 individuals, but from a population of 1 million, so that the individual risk is one in 10,000. From the standpoint of regulatory policy, should we exhibit any preference for one policy over the other?

Notes

1. This general approach to valuation of risks to life can be traced back to the work of Thomas C. Schelling, “The Life You Save May Be Your Own,” in Samuel B. Chase, ed., Problems in Public Expenditure Analysis (Washington, DC: Brookings Institution, 1968), pp. 127–162.
2. This figure is the median estimate in 2015 dollars from the survey of the labor market literature by W. Kip Viscusi and Joseph E. Aldy, “The Value of a Statistical Life: A Critical Review of Market Estimates throughout the World,” Journal of Risk and Uncertainty 27 (August 2003): 5–76.
3. This principle is the same as in all benefit contexts. See Edith Stokey and Richard J. Zeckhauser, A Primer for Policy Analysis (New York: W. W. Norton, 1978); and Harvey S. Rosen and Ted Gayer, Public Finance, 10th ed. (New York: McGraw-Hill, 2014).
4. Variations in the human capital approach are articulated in E. J. Mishan, “Evaluation of Life and Limb: A Theoretical Approach,” Journal of Political Economy 79 (July–August 1971): 706–738.
5. The debate over the hazard communication regulation and over the value of life itself was the object of a cover story in The Washington Post Magazine. See Pete Earley, “What’s a Life Worth?” Washington Post Magazine, June 9, 1985, pp. 10–13, 36–41.
6. The role of these income effects is explored in W. Kip Viscusi and William N. Evans, “Utility Functions That Depend on Health Status: Estimates and Economic Implications,” American Economic Review 80 (June 1990): 353–374.
7. See the survey by Viscusi and Aldy, “Value of a Statistical Life.”
8. This relationship is documented in Joni Hersch and W. Kip Viscusi, “Cigarette Smoking, Seatbelt Use, and Differences in Wage-Risk Tradeoffs,” Journal of Human Resources 25 (Spring 1990): 202–227. Their estimates have been converted to 2015 dollars.
9. For further elaboration on these market processes and the econometric basis for estimation of the value of life, see W. Kip Viscusi, Employment Hazards: An Investigation of Market Performance (Cambridge, MA: Harvard University Press, 1979); Sherwin Rosen, “The Theory of Equalizing Differences,” in Orley Ashenfelter and Richard Layard, eds., Handbook of Labor Economics (Amsterdam: North Holland, 1986), pp. 641–692; W. Kip Viscusi, “The Value of Risks to Life and Health,” Journal of Economic Literature 31 (December 1993): 1912–1946; and W. Kip Viscusi, “The Value of Individual and Societal Risks to Life and Health,” in Mark J. Machina and W. Kip Viscusi, eds., Handbook of the Economics of Risk and Uncertainty (Amsterdam: Elsevier, 2014), pp. 385–452.
10. See W. Kip Viscusi, “Racial Differences in Labor Market Values of a Statistical Life,” Journal of Risk and Uncertainty 27 (December 2003): 239–256.
11. Joni Hersch and W. Kip Viscusi, “Immigrant Status and the Value of Statistical Life,” Journal of Human Resources 45
(Summer 2010): 749–771. 12. A more comprehensive review of the literature is provided in Viscusi and Aldy, “Value of a Statistical Life.” 13. The procedure used to construct these estimates and a more extensive compilation of these figures is presented in Viscusi, “The Value of Individual and Societal Risks to Life and Health.” See especially tables 7.2 and 7.3. 14. For a recent analysis of regulatory tradeoffs, see John F. Morrall, “Saving Lives: A Review of the Record,” Journal of Risk and Uncertainty 27 (December 2003): 221–237. 15. For a survey of this literature, see Ronald G. Cummings, David S. Brookshire, and William D. Schulze, Valuing Environmental Goods: An Assessment of the Contingent Valuation Method (Totowa, NJ: Rowman & Allanheld, 1986). 16. David S. Brookshire, Mark A. Thayer, William D. Schulze, and Ralph C. d’Arge, “Valuing Public Goods: A Comparison of Survey and Hedonic Approaches,” American Economic Review 72 (March 1982): 165–177. 17. In particular, this analysis develops estimates of the health and safety costs per dollar of output in each industry. See W. Kip Viscusi and Richard J. Zeckhauser, “The Fatality and Injury Costs of Expenditures,” Journal of Risk and Uncertainty 8 (January 1994): 19–41. 18. See, for example, Randall Lutter and John F. Morrall III, “Health-Health Analysis: A New Way to Evaluate Health and Safety Regulation,” Journal of Risk and Uncertainty 8 (January 1994): 43–66. Their estimates have been converted to 2015 dollars. 19. This approach is developed in W. Kip Viscusi, “Mortality Effects of Regulatory Costs and Policy Evaluation Criteria,” RAND Journal of Economics 25 (Spring 1994): 94–109. The expenditure level that leads to a counterproductive outcome equals the value of a statistical life divided by the marginal propensity to spend on health, which was 0.1. Thus, multiplying the value of a statistical life by a factor of ten generates the counterproductive expenditure amount. Also see Randall Lutter, John F. Morrall III, and W. Kip Viscusi, “The Cost-Per-Life-Saved Cutoff for Safety-Enhancing Regulations,” Economic Inquiry 37 (October 1999): 599–608.

21 Environmental Regulation

The range of activities in the area of environmental regulation is perhaps the most diverse of any field of regulation.1 The U.S. Environmental Protection Agency (EPA) has programs to regulate emissions of air pollution from stationary sources, such as power plants, as well as from mobile sources, such as motor vehicles. In addition, it has regulations pertaining to the discharge of water pollutants and other waste products into the environment. These pollutants include not only conventional pollutants, such as the waste by-products of pulp and paper mills, but also toxic pollutants. For situations in which its regulations of discharges and emissions are not sufficient, the EPA also undertakes efforts to restore the environment to its original condition through waste treatment plants and the removal and disposal of hazardous wastes. Insecticides and chemicals also fall within the agency’s general jurisdiction. Moreover, the time dimension of the agency’s concerns is quite sweeping, because the environmental problems being addressed range from imminent health hazards to long-term effects on the climate of Earth that may not be apparent for decades.

In this chapter, we do not attempt to provide a comprehensive catalog of environmental regulations, although we draw on various examples in this area. The focus instead is on the general economic frameworks that are available for analyzing environmental problems. The structure of these problems tends to be characterized by similar economic mechanisms for different classes of pollutants. In each case, externalities are generated that affect parties who have not contracted to bear the environmental damage. A similar economic framework is consequently applicable to a broad variety of environmental problems.

We begin with an analysis of the basic economic theory dealing with externalities and then turn to variations of this theory to analyze the choices among policy alternatives. The issues we address include current policy concerns. Should the EPA pursue various kinds of marketable permit schemes or rely on technology-based standards?2 In addition, there is a continuing concern with long-term environmental risks associated with climate change. How should we conceptualize the economic approach to regulating these and other risks that pose new classes of environmental problems? Finally, we review the character of the enforcement of environmental regulation, as well as the ultimate impact of environmental policy on environmental quality.

The Coase Theorem for Externalities

The fundamental theorem in the area of externalities was developed by Ronald Coase.3 The generic problem that he considered was that of a cattle rancher. Suppose that farm A raises cattle, but that these cattle stray onto the fields of farm B, damaging farm B’s crops. The straying cattle consequently inflict an externality on farm B.

What Coase indicated is that assessing these issues is often quite complex. Among the factors that must be considered from an economic standpoint are the following. Should the cattle be allowed to stray from farm A to farm B? Should farm A be required to put up a fence, and if so, who should pay for it? What are the implications from an economic standpoint if farm A is assigned the property rights and farm B can compensate farm A for putting up a fence? Alternatively, what would be the economic implications if the property rights were instead assigned to the victim in this situation, farm B?4

The perhaps surprising result developed by Coase is that from an economic efficiency standpoint, the fencing outcome will be the same, irrespective of the assignment of property rights. If we assign farm A the right to let its cattle stray, then farm B will bribe farm A to construct a fence if the damage caused to farm B’s crops exceeds the cost of the fence. Thus, whenever it is efficient to construct a fence, farm B will compensate farm A and contract voluntarily to purchase the externality so as to eliminate it. Alternatively, if we were to assign the property rights to farm B, farm A will construct the fence to prevent the damage whenever the fence costs less than the damage. If the cost of such a fence exceeded the damage being inflicted, farm A would instead contract with farm B to compensate farm B for the damage imposed by the straying cattle. In each case, we obtain the same result in terms of whether or not the fence is constructed, irrespective of whether we give farm A or farm B the property rights.

From an equity standpoint, the results are, however, quite different. If we assign the property rights to farm A, then farm B must compensate farm A to construct the fence, or alternatively, farm B must suffer the damage. In contrast, if we were to assign the property rights to farm B, the cost of the fence construction or the cost of compensation for the damage would be imposed on farm A. The outcome in terms of whether the crops will be trampled or the fence will be constructed will be the same, regardless of the property rights assignment. However, the well-being of each party and the cash transfers that take place will be quite different under the two regimes. It is for that reason that there are often considerable legal and political battles over the assignment of property rights. Economists generally have little of a conclusive nature to say about which situation is more equitable. Coase observed that we should not be too hasty when judging which property rights assignment is most fair. From an equity standpoint, one should take into account the reciprocal nature of the problem. In this situation, farm A inflicts harm on farm B. However, to avoid the harm to farm B, we must harm farm A. The objective from an efficiency standpoint is to avoid the more serious harm.

The Coase Theorem as a Bargaining Game

What Coase did not explore in detail was the nature of the bargaining process that would lead to the efficient outcome that he discussed. To address these issues, it is useful to cast the Coase theorem problem in the context of a simple bargaining game. For concreteness, let us suppose that the property rights are being assigned to the pollution victims, so that it is the firm that must pay for the damage or control costs. Table 21.1 summarizes the generic components of this and other bargaining games.
The company in this situation has a maximum offer amount that it is willing to give the pollution victims for the damage being inflicted. The factors driving the maximum offer are the expenditures that the firm would have to make to eliminate the externality or the cost that would be imposed on the firm by the legal rules addressing involuntary externalities. The maximum amount that the firm is willing to pay will be the minimum of either the control costs or the penalty that will be imposed on the firm if it inflicts the externality.

Table 21.1 Coase Theorem Bargaining Game

Game Component                            Condition
Feasible bargaining requirement           Maximum offer ≥ Minimum acceptance
Bargaining rent                           Bargaining rent = Maximum offer − Minimum acceptance
Settlement with equal bargaining power    Settlement = (Maximum offer + Minimum acceptance)/2
From the standpoint of the individuals bearing the pollution costs, the minimum amount they are willing to accept in return for suffering the impacts of the pollution is the amount of compensation that restores their level of utility to what it would have been in the absence of pollution. We will refer to this amount as the minimum acceptance value. There is a potentially feasible bargaining range if the maximum offer the firms are willing to make exceeds the minimum acceptance amount, which is the inequality listed in the top row of table 21.1.

If this condition is not satisfied, no bargain will take place, as there is no feasible bargaining range. When the minimum amount that will be accepted by the pollution victims exceeds the maximum amount firms are willing to offer, no contractual solution is possible. Firms will select the minimum-cost alternative of either installing the control device or paying the legally required damages amount. The absence of a feasible bargaining range does not imply that the Coase theorem is not true or that the market has broken down. Rather, it simply indicates that there is no room for constructive bargaining between the two parties. In such situations, the resolution of the bargaining game will be dictated by the initial assignment of property rights.

An essential component of the bargaining game is the bargaining rent. This rent represents the net potential gains that will be shared by the two parties as a result of being able to strike a bargain. As indicated in table 21.1, the bargaining rent is defined as the difference between the maximum offer amount and the minimum acceptance value. This definition is quite general and pertains to other bargaining situations as well. For example, suppose that you were willing to pay $21,000 for a new Honda Civic, but the cost to the dealer for this car is $17,000. There is a $4,000 spread between your maximum offer and the dealer’s minimum acceptance amount, which represents the bargaining rent available. The objective of each of you is to capture as much of this rent as possible. You would like to push the dealer as close to the minimum acceptance amount as possible, and the dealer would like to push you to your reservation price. Much of the bargaining process is spent trying to ascertain the maximum offer and minimum acceptance amounts because these values are not generally disclosed. Moreover, in the process of trying to learn these values, one may reveal considerable information regarding one’s bargaining skill and knowledge of the other party’s reservation price. A bid for the car that is substantially below the cost to the dealer, for example, does not indicate that one is a shrewd and tough bargainer, but rather usually suggests that the buyer does not have a well-developed sense of the appropriate price for the car.

In a situation in which the parties are equally matched with equal bargaining power, they will split the economic rent. This symmetric bargaining weight situation provides a convenient reference point for analyzing the bargaining outcome. As indicated in table 21.1, if such symmetry exists, then the settlement outcome will simply be the average of the maximum offer and the minimum acceptance amount, which is equivalent to the minimum acceptance amount plus half of the economic rent at stake.
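These components can be checked with a minimal computational sketch. The function name and the use of the car example’s $21,000 and $17,000 reservation values are illustrative only; the equal-split settlement rule is simply the symmetric case summarized in table 21.1.

```python
def bargaining_outcome(max_offer, min_acceptance):
    """Feasibility, rent, and equal-power settlement for a two-party
    bargaining game of the kind summarized in table 21.1."""
    if max_offer < min_acceptance:
        # No feasible bargaining range: no voluntary deal is struck.
        return {"feasible": False, "rent": 0, "settlement": None}
    rent = max_offer - min_acceptance
    # With equal bargaining power the parties split the rent, so the
    # settlement is the average of the two reservation values.
    settlement = min_acceptance + rent / 2
    return {"feasible": True, "rent": rent, "settlement": settlement}

# Car purchase example from the text: the buyer will pay up to $21,000,
# and the dealer will accept no less than $17,000.
print(bargaining_outcome(21_000, 17_000))
# {'feasible': True, 'rent': 4000, 'settlement': 19000.0}
```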

Pollution Example

To illustrate these concepts, let us consider the pollution problem summarized in table 21.2. The upstream pulp and paper mill emits discharges that impose $500 of environmental damage. The citizens can eliminate this damage by constructing a water purification plant for a cost of $300. Finally, suppose that the company could eliminate this pollution through primary treatment at the plant for a cost of $100.

Table 21.2 Property Rights Assignment and the Bargaining Outcome

Basic aspects of the pollution problem:
  Primary treatment of effluent costs $100
  Water purification costs $300
  Environmental damage costs $500

Bargaining with victim-assigned property rights:
  Bargaining equation: Maximum offer by company = $100 < Minimum acceptance by citizens = $300.
  Outcome: Company installs controls. No cash transfer.

Bargaining with polluter-assigned property rights:
  Bargaining equation: Maximum offer by citizens = $300 > Minimum acceptance by company = $100.
  Outcome: Citizens pay company $100 to install controls and also pay company $100 share of rent if equal bargaining power.
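The outcomes in table 21.2 can be reproduced with a minimal sketch of the bargaining game. The reservation values are taken directly from the table ($100 treatment cost, $300 purification cost, $500 damage); the equal split of the rent in the polluter-rights case is an assumption, as in the text.

```python
def bargain(max_offer, min_accept):
    """Equal-split payment for a Coasean bargain, or None if infeasible."""
    if max_offer < min_accept:
        return None  # no feasible bargaining range
    rent = max_offer - min_accept
    return min_accept + rent / 2  # payment with equal bargaining power

control_cost, purification_cost, damage = 100, 300, 500
victim_harm = min(purification_cost, damage)  # $300

# Victim-assigned rights: the firm offers at most its $100 control cost,
# citizens accept no less than the $300 harm -> no bargain, so the firm
# simply installs the controls (the cheaper of controls and damages).
print(bargain(max_offer=control_cost, min_accept=victim_harm))   # None

# Polluter-assigned rights: citizens offer up to $300, the firm accepts no
# less than $100 -> citizens pay $100 for controls plus a $100 rent share.
print(bargain(max_offer=victim_harm, min_accept=control_cost))   # 200.0
```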

To see the impact that differences in the property rights assignment make, consider first the situation in which the citizen victims of pollution are assigned the property rights. In this context, it is the company that must bribe the citizens for the right to pollute. The maximum amount the polluting firm is willing to pay for this pollution privilege is $100—the cost of installing a treatment facility. The citizens, however, have a reservation price of $300—the lesser of the cost of the water pollution treatment and the environmental damage. Because the maximum offer amount is below the minimum acceptance value, no profitable bargain can be made by the two parties. The result will be that the company will install the pollution treatment system, and there will be no cash transfer between the parties.

The second situation considered is one in which the polluter has been assigned the property rights. In this situation, the maximum offer by the citizens to the firm will be $300. This amount exceeds the $100 cost of installing water pollution treatment for the company, which is the company’s minimum acceptance amount. As a result, a profitable bargain can be arranged between the two parties, with a total bargaining rent of $200. The outcome will be that the citizens will pay the company $100 to install the pollution control device. Moreover, if the bargaining power of the two parties is equal, the citizens will also pay the firm an additional $100 as the company’s share of the bargaining rent.

Using this bargaining game framework to analyze Coasean pollution problems provides a more realistic perspective on what will actually transpire than did the original Coase paper, which assumed that the purchase price for the transfers will equal the minimum acceptance amount of the party holding the property rights. In each case, the pollution control outcome is the same, as the company will install the water treatment device. However, in the case where citizens do not have the property rights and bargaining power is equal, not only will they have to pay for the water treatment, they will also have to make an additional $100 transfer to the company that they would not have made if they had been given the property rights. The difference in the equity of the two situations is substantial. The citizens must spend $200 if they do not have the property rights—$100 for the treatment cost and $100 to induce the company to install it. If the citizens have the property rights, the cost is $100 to the company for treatment. In each case, the water treatment is the same.

Long-Run Efficiency Concerns

What should also be emphasized is that this short-run equity issue is also a long-run efficiency issue. Ideally, we want the incentives for entry of new firms into the industry to be governed by the full resource costs associated with their activities. If firms are in effect being subsidized for their pollution by citizens paying for their pollution control equipment, then there will be too much entry and too much economic activity in the polluting industries of the economy. We will return to this point in the context of the debate over standards versus taxes. This long-run efficiency point is often ignored by policymakers and by economists who focus on the short-run pollution outcome rather than on the long-run incentives that the property rights assignment may create.

Transaction Costs and Other Problems

One factor pertaining to the bargaining process that Coase emphasized is that substantial transaction costs may be involved in carrying out these bargains. Although we can generate an efficient outcome through a contractual solution without the need for any regulation, achieving this outcome may be quite costly. If the actions of a large number of citizens must be coordinated, then the cost may be substantial. These coordination costs are likely to be particularly large in situations with free riders. Some individuals may not wish to contribute to the pollution control effort in hopes of obtaining the benefits of controls without contributing to their cost.

It has often been remarked that there is also a potential for strategic behavior, and that some parties may behave irrationally in the bargaining process. However, by modeling the contractual components of the externality market in table 21.2 using an explicit model of the bargaining structure, we capture these aspects in the context of a rational game theory model. It may, of course, be true that people are irrational, but this is true of any economic context and is not a phenomenon unique to externality bargaining contexts. For example, people may misperceive the probability of a particular bargaining response or may not assess the reservation price of the other party correctly.

Perhaps the greatest caveat pertains to the degree to which we can distinguish discrete and well-defined assignments of the property rights. Even when property rights have been assigned, the use of these property rights is often limited. Moreover, when the courts must enforce these rights, there is often imperfect information. The courts, for example, do not know the actual damages the citizens may incur, and they may not know with perfect certainty the pollution control and treatment costs. In addition, acquiring this information is costly, and in the context of most judicial settings, the information provided to the court can have substantial errors.

The net result is that in actual practice, we do not generally turn the market loose and let people contract out of the externalities that are imposed. The victims in the eastern United States who suffer the consequences of acid rain generated by power plants in the Midwest cannot easily contract with these electric power plants.
Even more difficult would be attempting to contract with the automobile users in the Midwest to alter their behavior. The bargaining costs and free-rider problems would be insurmountable. Indeed, in many cases, we cannot even identify the party with whom we might strike a bargain. Unlabeled drums of toxic waste in a landfill do not provide a convenient starting point for externality contracts.

Despite the many limitations of the voluntary contractual approach to externalities, the Coase theorem does serve an important purpose from the standpoint of regulatory economics.5 In particular, by assessing the outcome that would prevail in an efficient market given different assignments of the property rights, one can better ascertain the character of the impact of a particular regulatory program. To the extent that the purpose of government regulation is to eliminate market failures and ensure efficiency, the implications of the Coase theorem provide us with frames of reference that can be applied when assessing the character of the different situations that will prevail under alternative regulatory regimes. These concerns will be particularly prominent with respect to market-oriented regulatory alternatives that involve the explicit pricing of pollution.

Smoking Externalities

An interesting application of the Coase theorem is to cigarette smoking. Environmental tobacco smoke has become an increasingly prominent public concern and a classic externality issue. Many nonsmokers find cigarette smoke unpleasant, and government agencies (such as the EPA and the Occupational Safety and Health Administration) have concluded that there may be some adverse health effects as well, though the extent of these effects remains controversial. Whether the health risks of environmental tobacco smoke are large or small, real or imagined, is not essential for addressing these exposures and is not critical to how the Coase theorem will operate in this instance. What is important from the standpoint of the Coase theorem problem is that nonsmokers would be willing to pay a positive amount of money to avoid being exposed to environmental tobacco smoke. Similarly, smokers would be willing to pay to be able to smoke in public places where they generate environmental tobacco smoke. As in the cattle example, in many respects the externalities are symmetric. Smoking will make the smoker better off and the nonsmoker worse off, whereas restricting smoking will make the smoker worse off and the nonsmoker better off. This is the classic Coase situation.

Applying Coasean logic, one might expect the nonsmokers in restaurants to walk over to the smokers’ tables and attempt to strike a bargain to get them to stop smoking. Doing so, however, is unpleasant and consequently costly. However, other economic mechanisms can reflect these concerns. If the restaurant does not have a suitable policy with respect to smoking, customers can eat elsewhere. In effect, the market operation in this context will be through the price system. The smoking policy of the restaurant is a local public good in much the same way as the music, the lighting, and the overall restaurant environment are. When customers are free to patronize different restaurants, the major remaining concern presumably would be with those who discover that they have made a mistake after arriving at the restaurant for the first time and find it difficult to go elsewhere. Workplaces have responded similarly to the concerns of workers—many of them have banned smoking, and others have instituted smoking areas.

The government has also become active in this area, as hundreds of local governments have enacted various kinds of smoking restrictions. While some national regulations have been proposed, many state regulations have been enacted.6 Smoking bans for hospitals led the way, but smoking restrictions for other environments have followed.
At present, most states have enacted smoking restrictions that ban smoking in all public places, including bars and restaurants. Enclosed arenas also have tended to be prominent regulatory targets, while malls and open spaces have not. The overall pattern is consistent with what the Coase theorem would suggest: areas where the harm is greatest emerge as the first candidates for regulation. The difference is that the mechanism is not private Coasean bargains, which would be quite costly to organize, but rather coordinated regulatory action. Interestingly, even a majority of smokers support smoking bans for hospitals and indoor sporting events, so that for some forms of smoking restrictions, there are common rather than conflicting interests. The reliance on regulatory solutions even in situations when both smokers and nonsmokers may support restrictions highlights the important role of regulations in implementing desirable social policies where there are costly impediments to individual Coasean bargains.

While private Coasean bargains to solve regulatory problems are not the norm, the Coase framework provides a useful approach to guide our thinking about the regulatory contexts in which the market can be expected to work and those in which it cannot. Moreover, if it is believed that the market will not work, one should then inquire: What are the efficiency effects on both parties? What are the losses to the parties from the current situation, and what will be the losses with regulation? This is the essential message of the Coase theorem that is pertinent to all such externality contexts.

A final set of externalities associated with smoking pertains to insurance. If smoking is risky, as is the scientific consensus, then presumably the adverse health consequences will have widespread consequences for insurance costs. Health costs will clearly be higher. However, because smokers will die sooner under this scenario, their early departure will save society pension and Social Security costs. A comprehensive tally of these effects appears in table 21.3. As indicated by the summary of the insurance externalities in table 21.3, the cost per pack generated by smokers is particularly high for health insurance. However, offsetting savings arise from the higher mortality rates of smokers, chiefly the lower pension and Social Security costs. Because smokers die sooner, they are also less likely to get long-term diseases (such as Alzheimer’s), thus diminishing some of their medical expenses later in life. On balance, smokers save money for society in terms of the net externality cost. This result does not mean that smoking is not consequential for the individuals whose lives are at risk or for the particular insurance programs whose costs are affected. Nor does it mean that the death of smokers is a desirable social outcome. However, the result does suggest that many externalities often involve competing effects with fairly complex ramifications.

Table 21.3 External Insurance Costs per Pack of Cigarettes with Tar Adjustments

Cost category             Cost per pack ($)
Total medical care         0.580
Sick leave                 0.013
Group life insurance       0.114
Nursing home care         −0.239
Retirement and pension    −1.259
Fires                      0.017
Taxes on earnings          0.425
Total net costs           −0.319

Source: W. Kip Viscusi, Smoke-Filled Rooms: A Postmortem on the Tobacco Deal (Chicago: University of Chicago Press, 2002), table 4. The discount rate is 3 percent.

The medical costs associated with smoking led to a wave of lawsuits by the states that in 1998 produced a $206 billion out-of-court settlement by the cigarette industry with forty-six states and separate agreements for $37 billion with four other states. Setting aside the merits of the suits and the particular settlement

amount, how would one wish to structure the costs imposed on the cigarette industry in order to provide appropriate economic incentives, assuming that this payment reflects the financial externality generated? Many opponents of the cigarette industry want the cost to be imposed through a lump-sum levy on the companies so that the costs will come directly out of corporate profits; however, a profit tax will not affect the marginal cost of production or the product price. Companies can also reorganize under federal bankruptcy laws to limit the effect of extremely large lump-sum taxes. Another possibility, which offers the promise of greater penalty revenues for the states, is to link the payment to the level of cigarette sales, and this is the approach that has been taken. The net effect is that consumers will, in effect, pay for most of this cost through higher cigarette prices rather than having the cost paid directly by the corporations. Such incentive effects are exactly what we would want to promote in order to foster efficient economic behavior, whereby the parties generating costs are cognizant of the economic consequences of their actions. Boosting the price of cigarettes in this manner is exactly analogous to proposals that firms should pay pollution taxes to reflect the environmental damage they generate, as such taxes will induce more efficient behavior.

Special Features of Environmental Contexts

It should also be noted that the character of the markets that would emerge if we set up a market for pollution may be quite unusual in environmental contexts. Most existing water pollution regulation is based on the assumption that the usability of water tends to follow a step function, such as the one indicated in figure 21.1.7 Initially, the water quality is quite high, and we will label the water pristine. After a certain level of pollution, the water is no longer pristine, but you can still drink it. After another increase in pollution, the usability of the water for drinking declines, but you can swim in the water with appropriate vaccinations. As the pollution level increases further, the water is suitable for fishing but no longer for the other uses. Finally, with a very high level of pollution, even the fishing option disappears. At this high pollution level, there is no additional marginal cost being imposed on the citizenry from additional pollution if we assume for concreteness that all beneficial uses of the water have disappeared. The citizens could then sell an infinite number of pollution rights without suffering any additional damage beyond what they have already suffered. Moreover, at any point along a given step of the declining water-quality curve in figure 21.1, additional pollution imposes no further loss on the citizenry, so that the marginal cost to them of selling additional pollution rights will be zero.

Figure 21.1 Changes in Water Usage as a Function of Pollution

This character of environmental contexts—known formally as an example of nonconvexities—suggests that instead of always dispersing the risks, it may be preferable to concentrate the risks in a particular location. For example, are we better off siting hazardous wastes throughout the United States, or should they be concentrated in one area? If they are concentrated, society can adapt by prohibiting residential housing and commercial operations near the facility, so that a large environmental risk can be present without imposing substantial costs on society. In contrast, dispersing hazardous wastes on a uniform basis throughout the United States may appear more equitable, but it will impose larger risks on society at large because it is more difficult to isolate such a large number of individual risks. The main difficulty with concentrating the risk in this manner involves the appropriate compensation of those who are unlucky enough to have been selected to be put at risk. The option of concentrating the risk is particularly attractive in theory, but in practice it implies that one group in particular will bear a substantial part of the costs. The NIMBY—not in my backyard—phenomenon looms particularly large in such contexts. It is these kinds of equity issues and the potential role for compensation of various kinds that are highlighted by application of the Coase theorem and the implications that can be developed from it.

Siting Nuclear Wastes

The nuclear waste repository debate has highlighted the practical importance of these efficiency and equity concerns. The government had invested $9 billion to develop the Yucca Mountain site in Nevada as the central repository for spent nuclear fuel. The alternative was to continue to scatter the wastes more diffusely across sixty-eight different sites. A central, safe location that embodied a large investment to ensure a low risk level had considerable appeal from an efficiency standpoint. In 2004 a U.S. court of appeals focused on the risk issues and ruled that the facility’s protections, which extended for 10,000 years, covered too short a period. Matters became muddied even further as NIMBY concerns entered the political debate through the opposition of prominent Nevada legislators.

The problem that prompted this controversy was that some nonzero level of risk could emerge over time horizons longer than the 10,000-year period for which safety would be assured. A National Academy of Sciences panel concluded that in 270,000 years, a person standing just outside the fence could be exposed to sixty times the allowable radiation dose, a standard that is divorced from any benefit-cost balancing. This allowable dosage threshold is not linked to a specific risk probability, but there is the belief that the risk level is not zero. As a result, the panel recommended that the safety standard be extended for 300,000 years.

How might economists have approached these nuclear waste siting issues differently? A useful starting point would be to assess the technological risk tradeoffs involved. The Yucca Mountain site may not be risk free forever, but scattering nuclear wastes across the country poses more substantial, immediate risks. So when judging any waste siting policy, the comparison should be with the available policy alternatives, not a costless, risk-free world that does not exist. The scenarios under which the Yucca Mountain site could become risky have a certain element of science fiction about them. After all, 270,000 years away is a pretty long time. A lot could happen on the technological front over that period, making it possible to address the nuclear waste risks more effectively and more cheaply. The cost of remedial measures to address problems that may develop surely will go down over time, so that risk estimates based on current capabilities will be too high.

Discounting also will all but eliminate these far-distant risks as a matter of concern. Suppose we adopt a modest discount rate of 3 percent. Then a dollar of benefits 270,000 years from now has a present discounted value of (1/1.03)^270,000. To see the effect of discounting, consider the following example. Instead of having only one person exposed to radiation at the Yucca Mountain fence, suppose we crammed 300 million people up against the fence. Also assume a worst case of radiation exposure that leads all of them to experience fatal cases of cancer. (Note that the future risk could have been eliminated by not letting people live in close proximity to the site.) On a discounted basis, the result of having 300 million people exposed to risk at the site would be the equivalent of far less than a 1 in 100,000 chance of cancer today for a single person. Quite simply, any reasonable discounting of effects that are hundreds of thousands of years away will all but eliminate them from the analysis.

But what if we do not worry about discounting, and suppose that for another $9 billion, Yucca Mountain could be made safe for 300,000 years. Wouldn’t that be the risk-reducing choice? Such an investment may not be safer from a net health standpoint. As the risk-risk analysis in chapter 20 indicates, there is an opportunity cost of expenditures: a conservative assessment is that at least one statistical death will occur for every $100 million that we divert from the usual bundle of consumer purchases. Spending another $9 billion of taxpayer money leaves that much less to spend on improving people’s standard of living and will lead to ninety expected deaths today.
So, if we abstract from financial considerations and focus strictly on health, the question becomes whether the remote discounted value of the small risks 270,000 years from now outweighs the ninety immediate deaths. The economists’ framework for conceptualizing these issues is consequently quite different from that of scientists. The questions posed are not in terms of the period of time over which Yucca Mountain will be completely safe. Instead, economists ask: What is the magnitude of these risks? What are the costs and benefits of reducing the risks? How do these policy options compare with other available choices? Will changes over time alter the costs of risk reduction? And what are the opportunity costs in terms of money and lives if we adopt a particular strategy?
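The arithmetic behind this comparison can be checked with a minimal sketch. It simply applies the 3 percent discount rate and the $100 million per statistical life opportunity-cost figure cited above; the log-space calculation is used only to avoid numerical underflow, and the variable names are illustrative.

```python
import math

DISCOUNT_RATE = 0.03          # annual discount rate from the text
YEARS = 270_000               # horizon in the National Academy scenario
EXPOSED = 300_000_000         # hypothetical population at the fence
COST = 9e9                    # additional expenditure considered ($)
COST_PER_STAT_DEATH = 1e8     # $100 million diverted per statistical death

# Present value weight on one death 270,000 years from now, computed in
# log space because (1.03)**270000 overflows a float.
log10_discount = -YEARS * math.log10(1 + DISCOUNT_RATE)
print(f"discount factor ~ 10^{log10_discount:.0f}")      # roughly 10^-3466

# Even 300 million discounted future deaths amount to a vanishingly small
# present-value risk, far below a 1-in-100,000 chance today.
print(f"discounted deaths ~ 10^{log10_discount + math.log10(EXPOSED):.0f}")

# Opportunity cost of the expenditure in statistical lives today.
print("expected deaths today:", COST / COST_PER_STAT_DEATH)  # 90.0
```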

Selecting the Optimal Policy: Standards versus Fines

Lawyers and economists generally have different answers to the question of how one should structure regulatory policy. In situations where we would like to prevent an externality, the answer given by lawyers is to set a standard prescribing the behavior that is acceptable. The usual approach taken by economists is somewhat different, as they attempt to replicate what would have occurred in an efficient market by establishing a pricing mechanism for pollution. As we will see, each of these approaches can potentially lead to the efficient degree of pollution control, depending on how the standards and fees are set.

When analyzing these pollution control options, we will assume that society approaches the control decision with the objective of providing for an efficient degree of pollution control. When tightening the pollution control standard, we should consequently not go past the point where the marginal benefits accruing to society from this tightening no longer exceed the marginal costs. In practice, the standard-setting guidelines administered by the EPA are much more stringent than would be dictated by benefit-cost balancing. In the case of the Clean Air Act, for example, the EPA is required by law to set ambient air quality standards irrespective of cost considerations. Moreover, not only is the EPA required to ignore costs; ensuring safety alone is not sufficient. The agency’s legislation requires it to provide a “margin of safety” below the zero-risk level. The result is that standards are generally set at excessively stringent levels from the standpoint of equating marginal benefits and marginal costs, although informal efforts are usually made to achieve balancing based on affordability.

Figure 21.2 illustrates how compliance costs vary with the degree of pollution control. By making allowances for the availability and affordability of technologies, the EPA and other risk regulation agencies attempt to limit the stringency of their regulations to a point such as PC*, where the cost function begins to rise quite steeply. Such informal considerations of affordability may limit the most extreme excesses of regulatory cost impacts.

Figure 21.2 Technology-Based Standard Setting

Setting the Pollution Tax

The shortcomings in the market that give rise to the rationale for government regulation stem not only from the character of the cost function. They also stem from the relationship of these costs to the benefits of controlling environmental externalities that would not otherwise be handled in an unregulated market context. Figure 21.3 indicates the nature of the market equilibrium when the externality is not priced but rather is inflicted involuntarily on the citizenry. The figure focuses on the marginal benefits and marginal costs of the production of gasoline, where the externality consists of air pollution. The demand for gasoline is given by the marginal benefit curve MB. In setting the quantity that will be produced, the market is guided by the marginal cost curve MC(private), which reflects only the private marginal cost of gasoline, leading to production of the quantity Q0, whereas the socially optimal level of gasoline production is Q*. The prevailing market price for gasoline is given by P0. To achieve efficient pricing of gasoline, what is needed is an optimal tax that raises the price of gasoline to the amount P*. Alternatively, this purpose can be achieved by constraining the quantity of gasoline produced to Q*, in which case market forces will drive the price of gasoline up to P*.

Figure 21.3 Market Equilibrium versus Social Optimum
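A minimal numeric sketch of this logic, assuming an illustrative linear marginal benefit (demand) curve, a constant private marginal cost, and a constant marginal external damage per gallon (none of these numbers come from the text), is given below. Setting the tax equal to the marginal external damage moves output from Q0 to Q* and the price from P0 to P*.

```python
# Illustrative parameters (assumed, not from the text):
# inverse demand / marginal benefit: MB(Q) = a - b*Q
# private marginal cost: c; marginal external damage per unit: e
a, b, c, e = 5.0, 0.01, 2.0, 0.5

q_market = (a - c) / b           # market output: MB = MC(private)
q_optimal = (a - c - e) / b      # social optimum: MB = MC(private) + e
p_market = a - b * q_market      # equals c
p_optimal = a - b * q_optimal    # equals c + e

tax = e                          # optimal per-unit (Pigouvian) tax
revenue = tax * q_optimal        # goes to the government under a tax;
                                 # under a quota, the markup accrues to firms

print(q_market, q_optimal)       # 300.0 250.0
print(p_market, p_optimal)       # 2.0 2.5
print(revenue)                   # 125.0
```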

Either prices or quantities can be used to achieve the desired result. In the case of the quantity restriction, the revenues accruing from the higher price of gasoline will go to the companies producing gasoline, whereas under a tax scheme, the tax revenues will go to the government. The choice between taxes and quantity constraints is thus not simply a question of administrative feasibility. There are also important dollar stakes involved in terms of the transfers among the various market participants. Because market outcomes will produce too much of the externality, some form of government intervention is potentially warranted. If we adopt the usual approach in which we wish to establish the appropriate pollution control standard, the objective is to equalize the marginal benefits and marginal costs of pollution reduction.

Role of Heterogeneity

Figure 21.4 illustrates the marginal cost curves for pollution reduction for two firms. Firm 1 has a higher cost of pollution control, as is reflected in its higher marginal cost curve MC1. Firm 2 has a lower pollution reduction marginal cost curve, given by MC2. (Again the marginal benefit curve is MB.) In situations where the cost curves differ and we can make distinctions among firms, the optimal solution is to set a differential standard in different contexts. Thus we should set a tighter standard when the marginal cost curve is lower, achieving the pollution control level PC2, compared with the looser standard PC1 for the higher-cost firm.

Figure 21.4 Differences in Control Technologies and Efficiency of Pollution Outcome

Distinctions such as this often arise among industries. It may be easier for some industries to comply with pollution requirements given the character of their technologies. If it is easier for chemical plants to reduce their water pollutant discharges than it is for dye manufacturers, then we should set the standard more stringently for chemical plants to recognize the difference in the marginal costs of compliance. Perhaps more controversial are the distinctions that regulatory agencies make among firms in a given industry depending on the character of their technology. For new facilities that can incorporate the new pollution equipment as part of the plant design, the marginal cost curve for compliance is generally lower than it will be for an existing facility that must retrofit the pollution control equipment onto its existing technology. It is consequently optimal from an economic standpoint to impose stricter standards on new sources than on existing sources because of the differences in the marginal cost curves. This economic principle has given rise to what many observers have identified as a “new source bias” in the policies of the EPA and other government agencies.8 Some new source bias is efficient, but one must be careful when determining the extent to which policies should set differential standards. For firms such as those in figure 21.4, one can justify the differing degrees of stringency indicated by the difference in marginal costs. The danger is that we often move beyond such distinctions because of political pressures exerted by the representatives from existing and declining industrial regions that are attempting to diminish competition from growth areas of the economy.9 Economics provides a rationale for some new source bias, but it does not necessarily justify the extent of the new source bias that has been incorporated into EPA policy.

Standard Setting under Uncertainty

Setting the optimal standard is most straightforward when the compliance costs and the benefits arising from policies are known. In the usual policy context, there is substantial uncertainty regarding these magnitudes. Figure 21.5 illustrates the familiar case in which the cost uncertainty is likely to be greater than the benefit uncertainty. For most policies with comparatively small impacts on the nation’s environment, the marginal benefit curve MB will be flat. Firms’ marginal cost curves for pollution control are not flat but rather tend to slope upward quite steeply. Moreover, there may be considerable uncertainty regarding the magnitude of compliance costs because the technologies needed to attain compliance may not yet have been developed. As is illustrated in figure 21.5, the optimal degree of pollution control ranges from PC0 for the marginal cost curve given by MC0, to the intermediate case of PC1 for a marginal cost curve of MC1, to a very high level of pollution control at PC2 for a marginal cost curve MC2. When the marginal cost curve can lie anywhere between MC0 and MC2, the standard consequently could have a very substantial range, depending on how we assess compliance costs.

Figure 21.5 Standard Setting with Uncertain Compliance Costs

If we assess these costs incorrectly, then we run the risk of imposing costs that may not be justified. On one hand, if we set the policy on the basis of a marginal cost curve of MC1, where the true marginal cost curve is governed by MC0, then a needless cost will be imposed by the regulation. The shaded area in figure 21.5 that lies above line MB gives the value of the excess costs that are incurred because the regulation has been set too stringently. On the other hand, there could also be a competing error in terms of forgone

benefits if the standard is set too leniently at PC1 when the regulation should have been set at PC2. If the true marginal cost curve is MC2 but it is believed to be MC1, there will be a loss in benefits from inadequate regulation. This outcome is illustrated in figure 21.5 by the shaded area that lies below line MB, between PC1 and PC2.

Although setting standards intrinsically must address this problem of uncertain compliance costs, if we were instead to set a pollution fine equal to the level of the marginal benefit curve in figure 21.5, then firms could pick their quantity of pollution control on a decentralized basis after the pollution had been priced. This approach not only accommodates differences across firms at a particular point in time in terms of technologies, it also accommodates uncertainty regarding the present technology and regarding future technological development. If the uncertainty with respect to costs is greater than that with respect to benefits, as most regulatory economists believe, then a fee system is preferable to a standards system.

Pollution Taxes

The operation of a pollution tax approach to promoting optimal pollution control is illustrated in figure 21.6. Proposals that the United States adopt a carbon tax are the most prominent examples of such pollution taxes. Suppose that we set the price of pollution equal to the marginal benefit of control, given by the horizontal curve in that diagram. This optimal fine will lead the firm to install the pollution control equipment needed to achieve the level of pollution control given by PC*. The amount of pollution reduced is indicated on the horizontal axis, as is the amount of pollution remaining. In addition, the shaded portion of figure 21.6 indicates the total fine that firms must pay for the pollution that remains. From the standpoint of short-run efficiency, achieving the pollution control level PC* through a standard or through the fine system is equivalent. From the standpoint of the firms that must comply, however, standards are much more attractive than fines. With a standard, the only costs incurred are the compliance costs, whereas under the fine system, firms must pay both the compliance costs and the fine on all pollution remaining beyond the optimal control point.

Figure 21.6 Setting the Optimal Pollution Penalty
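A minimal sketch of this cost comparison, with purely illustrative numbers (nothing in the text gives actual figures), contrasts what a firm pays under a standard set at PC* with what it pays under a fine levied at the same rate on each unit of uncontrolled pollution.

```python
# Illustrative figures (assumed, not from the text): the plant emits 100
# units uncontrolled; the efficient level of control is 60 units; abating
# those 60 units costs $1,500 in total (marginal cost rising to the $40
# fine at the last unit); the fine is $40 per unit of remaining pollution.
emissions, abated = 100, 60
compliance_cost = 1_500
fine_per_unit = 40

residual_fine = (emissions - abated) * fine_per_unit    # 40 units * $40

cost_under_standard = compliance_cost                   # $1,500
cost_under_fine = compliance_cost + residual_fine       # $1,500 + $1,600

print(cost_under_standard, cost_under_fine)             # 1500 3100
```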

This difference in outcomes raises two classes of issues. The first is whether the fine has any role to play other than simply being a transfer of resources from firms to the citizenry. In terms of short-run efficiency, the fine does not alter the pollution control outcome. However, from the standpoint of long-run efficiency, we want all economic actors to pay the full price of their actions.10 If they do not, the incentive for polluting industries to enter the market will be too great. In effect, society at large will be providing a subsidy to these polluting industries equal to the value of the remaining pollution. Imposition of fines consequently has a constructive role to play in providing correct incentives for entry into the industry and long-run efficiency, even though it will not alter the degree of pollution control by an existing firm.

A second observation with respect to the penalty proposals is that the imposition of costs on firms can be altered to make its impact more similar to that of a standard by making the fine asymmetric. In particular, if we impose a fine only for pollution control levels short of the standard PC*, then the purpose of the fine is to bring firms into compliance with the standard. For situations in which firms choose to pay the fine rather than install the necessary control equipment, this choice may be an indication that the original standard was not set appropriately, given the firm’s particular cost curves. Thus, fines may provide a mechanism to introduce flexibility into an otherwise rigid standards system that does not recognize the heterogeneity in compliance costs that does in fact exist.

Prices versus Quantities

In situations of regulatory uncertainty, should the government use a price approach in terms of pollution taxes or a quantity approach in which the government specifies a pollution quota? Suppose that policymakers are aware of the marginal benefit curve for pollution reduction but are uncertain about the compliance costs of the regulation, which will become known only after the policy has been enacted. An analysis by Martin Weitzman concluded that the answer hinges on the relative slopes of the marginal benefit and marginal cost curves.11 A price instrument is preferable if the absolute value of the slope of the marginal cost function exceeds the absolute value of the slope of the marginal benefit function. However, if the absolute value of the slope of the marginal cost function is less than that of the marginal benefit function, a quantity instrument will generate greater net social benefits.

Figure 21.7 illustrates the first, more common situation, in which the absolute value of the slope of the marginal cost curve exceeds that of the marginal benefit curve. The marginal benefit curve is indicated by MB, and the expected marginal cost curve at the time of the policy decision is EMC. The tax level T is set where the MB curve intersects the EMC curve, and the level of pollution control PC_C&T under the cap and trade policy (that is, a policy that sets an upper limit on an organization’s pollutants but permits trades to obtain further capacity) is also at this intersection if the total amount of allowable pollution is set at the level where MB = EMC. Suppose that the actual marginal cost curve MC_H exceeds the levels based on EMC. Neither taxes nor cap and trade policies will provide the efficient level of control indicated by PC_EFF. However, the tax policy leads to pollution control of PC_Tax, which has an associated deadweight loss indicated by triangle A, whereas the deadweight loss B under cap and trade is much greater.

Figure 21.7 Regulation When the Absolute Value of the Slope of the Marginal Cost Curve Exceeds the Absolute Value of the Slope of the Marginal Benefit Curve

Figure 21.8 illustrates the alternative situation in which the absolute value of the slope of the marginal benefit curve exceeds that of the marginal cost curve. The cap and trade policy leads to pollution abatement of PC_C&T, which is above the efficient level of abatement given the higher-than-expected marginal cost curve MC_H, generating a deadweight social loss of B. However, the deadweight loss A under the tax approach is much greater, because with a comparatively steep MB curve, the quantity of abatement induced by the tax falls very far from the efficient level when costs turn out to differ from expectations.

Figure 21.8 Regulation When the Absolute Value of the Slope of the Marginal Benefit Curve Exceeds the Absolute Value of the Slope of the Marginal Cost Curve
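Weitzman’s slope comparison can be illustrated with a small sketch using linear marginal benefit and marginal cost curves. All parameter values below are assumptions chosen for illustration; the code computes the deadweight loss of a tax and of a fixed cap when the realized marginal cost curve turns out to be higher than expected, once for a flat-MB case (tax preferred) and once for a steep-MB case (cap preferred).

```python
def deadweight_losses(b, d, a=10.0, c0=5.0, shock=0.4):
    """Illustrative Weitzman comparison with linear curves.
    Marginal benefit of abatement: MB(q) = a - b*q.
    Realized marginal cost:        MC(q) = c0 + shock + d*q (expected shock = 0).
    Returns (DWL under a tax, DWL under a fixed cap), both set ex ante at the
    expected optimum and evaluated against the ex post optimum."""
    q_plan = (a - c0) / (b + d)            # expected optimum (MB = expected MC)
    tax = a - b * q_plan                   # tax fixed at the expected MB level
    q_star = (a - c0 - shock) / (b + d)    # true ex post optimum
    q_tax = (tax - c0 - shock) / d         # firms abate until realized MC = tax
    q_cap = q_plan                         # abatement is fixed under the cap

    def dwl(q):
        # Triangle between the MB and realized MC lines from q to q_star.
        gap = abs((a - b * q) - (c0 + shock + d * q))
        return 0.5 * gap * abs(q - q_star)

    return dwl(q_tax), dwl(q_cap)

# Flat MB, steep MC: the tax has the smaller deadweight loss.
print(deadweight_losses(b=0.2, d=2.0))
# Steep MB, flat MC: the cap (quantity instrument) has the smaller loss.
print(deadweight_losses(b=2.0, d=0.2))
```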

Market Trading Policies

Although there has been substantial support for various kinds of fee systems in the economics literature for at least three decades, policymakers have been slow to implement these concepts.12 Four types of emissions trading options that have been employed for decades are netting, offsets, bubbles, and banking. In each case, firms must apply to the EPA to be permitted to use these mechanisms, and the requirements on such systems are stringent because of a continuing suspicion among environmentalists of market outcomes that enable firms to buy their way out of meeting a pollution control standard.

Netting

The first of the mechanisms is netting. Under the netting system, a firm can alter its current plant and equipment in a manner that increases the pollution emissions from one source at the plant, provided that it also decreases the emissions from other sources, so that the net increase does not reach the level of a major source. These trades cannot take place across firms but are restricted to within firms. Such trades have occurred in several thousand instances. Early estimates of the cost savings from having this flexibility range from $25 million to $300 million in terms of permitting costs and from $500 million to $12 billion in terms of emission control costs.13 For this as well as for the other market trading systems described here, the adverse environmental effect is believed to be minimal.

Offsets

The second most frequent market trading activity is offsets. Under an offset option, firms are permitted to construct new facilities in a part of the country that exceeds the EPA’s maximum permissible levels of pollutants. However, before the company can build a plant in such an area, it must purchase pollution offsets from some existing facility in that area that provide for more than an equivalent reduction of the same pollutant. Moreover, the party selling these offsets must already be in compliance with EPA standards. Although there were 1,800 offset purchases soon after offset policies were adopted, for the most part these involved internal trades within the same firm rather than external transactions.

Bubbles

The third policy option was introduced with great fanfare in December 1979 by the Carter administration. Under the bubble concept, a firm does not have to meet compliance requirements for every particular emissions source at a plant. Ordinarily, each smokestack would have to comply with a particular standard. Instead, the firm can envision the plant as if it were surrounded by an artificial bubble. The compliance task then becomes that of restricting the total emissions that emerge from this bubble to a particular level. This option gives the firm some flexibility in terms of which sources it will choose to control. If there are two smokestacks, for example, as shown in figure 21.9, then the firm will choose to achieve the greatest pollution reduction from smokestack 1, as these costs will be lower than the costs of pollution reduction for smokestack 2. More than a hundred such bubbles have been approved by the EPA, with early cost savings to DuPont and other firms totaling $435 million.

Figure 21.9 EPA Bubble Policy Standard for Total Emissions
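The cost-minimizing logic of the bubble can be seen in a small sketch. The marginal cost schedules below are illustrative assumptions (the text gives no actual figures); the code allocates a required total reduction across the two stacks by taking the cheapest abatement steps plant-wide, which is why the bubble costs less than forcing equal reductions at each stack.

```python
# Hypothetical marginal cost of each successive unit of abatement ($/unit).
stack1 = [10, 15, 20, 30, 45]    # cheaper control options (smokestack 1)
stack2 = [25, 40, 60, 80, 100]   # more expensive control options (smokestack 2)
required_total_reduction = 6     # units that must be removed under the bubble

# Bubble: take the cheapest abatement steps plant-wide, regardless of stack.
steps = sorted([(c, 1) for c in stack1] + [(c, 2) for c in stack2])
bubble_cost = sum(c for c, _ in steps[:required_total_reduction])

# Uniform per-stack standard: each stack must remove 3 units on its own.
uniform_cost = sum(stack1[:3]) + sum(stack2[:3])

print(bubble_cost, uniform_cost)   # 140 170
```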

Banking

The final option is banking. Under the banking policy, firms in compliance with their standards can store pollution rights over time and then use these rights in the future as an offset against future pollution. The use of this policy option has been fairly infrequent.

The Expanding Role of Market Approaches

A major policy shift occurred in the 1990s. President Bush, for example, declared a commitment to increase reliance on market trading options,14 and some programs of this type were implemented. The EPA has not, however, replaced the thrust of its policy standards effort with a tradable pollution permit system. Nevertheless, permits have attractive economic features, as firms with the highest compliance costs can purchase them, thus fostering an efficient degree of pollution control. The first advantage of tradable pollution rights is that they enable the EPA to equalize the marginal costs of pollution control across sources. Second, they encourage innovations that decrease pollution, whereas a rigid standard only encourages a firm to meet the standard, not to go any further. Pollution rights systems also create less uncertainty for firms that must make fixed capital investments. Changing technology-based standards over time poses a risk that a firm’s capital investments will become obsolete.

The disadvantage of pollution rights is that we must set the number of such rights. Establishing the quantity of such rights is not too dissimilar from setting an aggregate pollution level. It requires a similar kind of information, and it probably relies on more imperfect forms of information than would establishing a penalty scheme. However, a fee system applied to all pollution generated imposes such substantial costs on firms that it currently faces political opposition.

Other criticisms of pollution rights systems pertain to whether the market participants are really trading a uniform good. The impact of pollution depends on the character of the pollutants, the stack height, and similar idiosyncratic factors. These pollutants also may interact with other pollutants in the area, so that their consequences may differ. Enforcing marketable permit systems may be more difficult than when the EPA mandates a particular technology for which officials can readily verify compliance. This concern may be of less consequence because many EPA standards (such as its water discharge requirements) are in terms of discharge amounts that must be monitored and reported on a monthly basis to the EPA. The final concern that has been raised relates to market power. Will some large players, such as public utilities, buy up all pollution rights? Thus far, such concerns have not been of practical consequence. By far the greatest resistance to the marketable permit scheme is the general suspicion of markets among noneconomists. Their counterargument often takes the following form: “Should the government also sell rights to murder?” A more appropriate question to use is: Which policy approach will be most effective in reducing pollution at less cost? Cap and Trade in Action: The SO2 Allowance Trading System Market-based approaches to environmental problems are not hypothetical constructs but have been employed in a variety of air pollution contexts in the United States as well as for CO2 emissions in the European Union.15 In 2005, the European Union established the first cap and trade program designed to control greenhouse gas emissions. The initial phase of the program was hindered by releasing too many allowances for carbon dioxide emissions, leading the price of carbon permits to plummet to virtually zero within three years. Subsequent phases reduced the emissions cap, but the confounding influence of the economic crisis beginning in 2008 makes it difficult to reliably assess the policy’s impact. The most widely studied U.S. effort is that for SO2 allowance trading. Sulfur dioxide emissions react in the atmosphere, forming sulfuric acid, which in turn leads to damage to forests and the acidification of ecosystems. To limit these emissions, Title IV of the Clean Air Act Amendments of 1990 introduced the SO2 allowance trading program. This cap and trade program established an initial allocation of permits called “allowances” that power plants were permitted to buy and sell. The initial total number of allowances was not governed by equating marginal benefits and marginal costs but rather by identifying the level of emission reduction at which marginal costs escalated, as shown in figure 21.2. Regulated facilities with pollution exceeding their allocated allowance could buy additional allowances, while those whose emissions were under their initial allocation could sell their allowances. The EPA monitored the emissions of facilities and ensured that they did not exceed the amount warranted by the initial allocation combined with any purchased allowances. The performance of this market followed the usual economic expectations. Establishment of a price for pollution led firms to reduce their pollution levels in various ways, including shifting to low sulfur coal (which is less polluting) and the installation of scrubbers (which are devices that desulfurize emissions). 
These shifts were enhanced by the impact of railroad rate deregulation, which reduced freight rates, making it economically desirable to ship the low sulfur coal from the Powder River Basin area of Wyoming and Montana to the power plants in the eastern United States. Because of the flexibility of the cap and trade approach compared to command and control regulations, these incentives were generated on a decentralized basis, avoiding the inefficiencies of command and control approaches that impose uniform standards. Estimates of the cost savings for the allowance trading system compared to the command and control approach range from 15 percent to 90 percent.
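The decentralized compliance incentives described above can be sketched as a simple cost-minimization decision by an individual power plant. All of the numbers below (allowance price, scrubber cost, fuel premium, and emission levels) are hypothetical and chosen only for illustration; they are not estimates from the Title IV program.

```python
# Hypothetical plant-level compliance choice under SO2 allowance trading.
baseline_emissions = 20_000      # tons of SO2 per year without controls
allowance_allocation = 8_000     # allowances granted to the plant
allowance_price = 200            # dollars per ton (hypothetical)

def cost_buy_allowances():
    shortfall = baseline_emissions - allowance_allocation
    return shortfall * allowance_price

def cost_scrubber(annualized_capital=3_000_000, removal=0.90):
    emissions = baseline_emissions * (1 - removal)
    surplus = allowance_allocation - emissions        # unused allowances can be sold
    return annualized_capital - surplus * allowance_price

def cost_low_sulfur_coal(fuel_premium=1_500_000, reduction=0.40):
    emissions = baseline_emissions * (1 - reduction)
    net_position = allowance_allocation - emissions   # negative means allowances must be bought
    return fuel_premium - net_position * allowance_price

options = {
    "buy allowances": cost_buy_allowances(),
    "install scrubber": cost_scrubber(),
    "switch to low-sulfur coal": cost_low_sulfur_coal(),
}
for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:28s} ${cost:,.0f}")
```

As the allowance price varies, the ranking of the three options changes, which is precisely how the market decentralizes abatement decisions without the EPA dictating any particular technology.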

The temporal pattern of sulfur dioxide allowance prices shown in figure 21.10 illustrates the substantial fluctuation over time in SO2 prices. Allowance prices were fairly stable until 2004. The sharp increase in allowance prices that occurred from 2004 through 2006 was due to proposed air pollution regulations (the Clean Air Interstate Rule), and this prospective increase in regulatory stringency drove SO2 allowance prices to their peak in 2005. However, subsequent legal challenges that culminated in the overturning of the rule in 2008 returned prices to levels comparable to those in 2004. The continued decline in SO2 allowance prices is attributable to a change in regulatory policy under the Obama administration, in which the policy approach imposed state-specific emissions caps, which limited interstate trading.

Figure 21.10 U.S. Sulfur Dioxide Allowance Prices, 1994–2010 Source: Cantor Fitzgerald Market Price Index for current year vintages.

The price trajectory in figure 21.10 reflects the changing economic value of the allowances but does not address the more fundamental economic issue of whether the benefits of this innovative policy exceeded the costs.16 The present value of the costs is $0.5 billion to $2.0 billion. This cost figure exceeds the ecosystem benefits of $0.5 billion, which were touted as the main rationale for the SO2 restrictions. However, the reduced pollution level had additional, much more substantial dividends as well, the most important of which was lowering the mortality costs associated with the reduction of fine particulates known as PM2.5. The total estimated benefits of the SO2 allowance trading program are $59 billion to $116 billion, which are amounts that dwarf the cost estimates because of the health benefits associated with the mandated 50 percent reduction in SO2 emissions.
Global Warming and Irreversible Environmental Effects
Whereas the environmental policies of the 1970s focused primarily on conventional air and water pollutants, and efforts of the 1980s turned to toxic chemicals and hazardous waste, attention in the 1990s shifted to the

long-term character of Earth's climate. Given the long-term nature of the global warming problem and the modest impact of current policy measures, it is likely that climate change will retain its prominent position for the foreseeable future. The accumulation of carbon dioxide and other trace gases in Earth's atmosphere in effect has created a greenhouse around Earth. Scientists estimate that this change in Earth's atmosphere will lead to an increase in global temperatures. In 2007, the Intergovernmental Panel on Climate Change (IPCC) estimated that there would be likely temperature increases on the order of 1.1 to 6.4 degrees Celsius, or 2.0 to 11.5 degrees Fahrenheit.17 The 2014 assessment released by the IPCC updated these estimates, focusing on the linkage of the temperature increase to different policy pathways over this century.18 Even for the optimistic stringent mitigation scenario, the IPCC estimates that by the end of the twenty-first century (2081–2100), the average temperature will increase by between 0.3 and 1.7 degrees Celsius relative to the 1986–2005 period. Under the least optimistic scenario with high greenhouse gas emissions, the temperature increase would be 2.6–4.8 degrees Celsius. Scientists continue to debate the magnitude and timing of the effect. Some global warming is inevitable irrespective of current efforts to impose environmental controls because of the irreversible nature of the generation of the greenhouse gases. We have already taken the actions that will harm our future environment. The extent of the future warming is uncertain because of both the substantial uncertainty regarding climatological models and the uncertainty regarding factors such as population growth and our pollution control efforts in the coming decades. Even more problematic is the effect that global warming will have on society. Although temperatures will rise by several degrees, this trend may be a benefit for northern regions, while for southern regions it will generally be a disadvantage. The warming in winter may be beneficial and will occur to a greater extent than the warming in the summer, which will have an adverse effect. Russia and Canada may benefit from longer growing seasons. Some have even questioned whether a temperature change would be undesirable. Will global warming, for example, be tantamount to getting on a plane in Boston and arriving in Los Angeles? U.S. retirement patterns suggest that warmer weather may in fact be preferable. Change of any kind will necessarily lead to the imposition of some adjustment costs. Climatologists also predict an increase in damage from natural disasters, such as hurricanes. The average sea level will rise, and there may be droughts in interior lands. The Stern Review, The Economics of Climate Change, prepared for the British government, highlighted the substantial economic costs associated with greenhouse gas emissions.19 The report estimated an annual cost of global warming of 5 percent of global gross domestic product (GDP) now and forever. Under more adverse scenarios, the cost range could be 20 percent of global GDP or more. The policy costs to reduce greenhouse gas emissions to three-fourths of current levels by 2050 would be substantial as well, on the order of an average estimate of 1 percent of GDP. While economists have not questioned the seriousness of the climate change phenomenon, the economic assumptions in the Stern analysis have been challenged.
William Nordhaus focuses on two principal critiques, the assumed rate of discount and the characterization of preferences.20 The Stern Review’s estimate of the present value of the costs imposed by climate change is inflated to a loss of 20 percent of GDP now and forever by the use of an inordinately low discount rate of 0.1 percent per year, which magnifies the influence of future losses. To demonstrate the impact of a discount rate near zero, Nordhaus suggests the following “wrinkle experiment,” in which damages equal to 0.1 percent of net consumption are imposed annually beginning in 2200 and continue forever. If we could remove this wrinkle through a current policy, it would be worth $30,000 billion or 56 percent of one year’s world consumption to do so. Nordhaus also questions the assumed utility function in the Stern Review, since in economic models, the rate of time discount and the elasticity of the marginal utility of consumption cannot be chosen

independently. Nordhaus suggests that an interest rate of 1.5 percent is more in line with market interest rates and savings rates.
Policy Options for Addressing Global Warming
A variety of approaches could be used to address global warming, some of which involve market-based solutions of the type generally favored by economists and others that take more of the command and control approach often preferred by policymakers.21 Political feasibility concerns may lead to a portfolio of policies that in combination are effective and ideally not unduly burdensome for the benefits that are generated. The policy mix that the Stern Review suggested for limiting greenhouse gas concentrations to double their preindustrial level is similar to the policy options that other economists have envisioned, with the major differences being in terms of emphasis and stringency. The measures in the Stern Review include reduction of deforestation, reduction of demand for carbon-intensive goods and services, adoption of low carbon technologies, and carbon capture and storage. Carbon taxes establish a price on carbon dioxide emissions, such as a tax per ton of emissions from oil/petroleum, coal, and natural gas. Setting the efficient price requires that one estimate the marginal benefit of emission reductions, or what has been termed the "social cost of carbon." Policies could impose carbon taxes on upstream fossil fuel suppliers or on the downstream final emitters at the point of energy generation. These taxes in turn will have price effects, leading consumers to reduce their purchase of carbon-intensive products. Instead of setting a price on carbon, policymakers could focus on quantity through a cap and trade system that provides a fixed number of tradable permits that firms are permitted to buy and sell to address their carbon emissions. The price of carbon will emerge from this trading system, thus providing the same kinds of economic incentives as does a carbon tax. The principal challenge is that of selecting the number of permits to issue and which sources of pollution to cover. There also have been proposals to combine elements of both emission taxes and tradable permits.22 An alternative to these market-based systems is the use of clean energy standards to serve as technology standards for generating lower emissions. Although the primary focus of advocates of this approach has been on electricity generation by utilities, such standards also could be applicable to appliances and motor vehicles. Selecting the appropriate standard is often problematic, and inappropriate selection of the standard may generate inordinate social costs.
Social Cost of Carbon
A critical input to selecting the appropriate carbon tax, the number of carbon permits to issue, or the level of standards to be set is the social cost of carbon (SCC), or the present value of the costs imposed by the emission of another ton of carbon. It is not feasible to establish efficient policy levels under any of these policy options without some understanding of the marginal benefits of emission reductions. Economists estimate the SCC based on what are known as integrated assessment models. The models incorporate information at the different stages linking an economic activity, the greenhouse gas emissions associated with that activity, the change in the global climate due to these emissions, and the resulting economic damages.
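Because the damages from a ton of carbon dioxide emitted today stretch over centuries, the discount rate largely drives the SCC, which is at the heart of the Stern-Nordhaus dispute described above. The sketch below is a deliberately stylized calculation rather than an integrated assessment model: the assumed damage path (a constant $2 of damages per year for 300 years from one ton of emissions) is a made-up number used only to show how sensitive the present value is to the discount rate.

```python
# Stylized illustration of discount rate sensitivity; the $2/year damage
# stream attributed to one ton of CO2 is a hypothetical assumption.
def present_value(annual_damage, years, rate):
    return sum(annual_damage / (1 + rate) ** t for t in range(1, years + 1))

damage_per_year = 2.0   # dollars of damage per year from one ton (hypothetical)
horizon = 300           # years over which damages accrue

for rate in (0.001, 0.015, 0.03, 0.05):   # near-zero (Stern-like), 1.5% (Nordhaus), 3%, 5%
    scc = present_value(damage_per_year, horizon, rate)
    print(f"discount rate {rate:5.1%}:  present value per ton = ${scc:,.0f}")
```

Even with an identical damage stream, moving from a near-zero discount rate to 5 percent shrinks the present value by more than an order of magnitude, which is why the choice of discount rate dominates disputes over the appropriate SCC.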
The estimates of the SCC differ greatly, with a 2010 National Research Council report suggesting that the range could be from $1 to $100 per ton of carbon dioxide emissions.23 In an effort to provide more precise policy guidance, the Interagency Working Group on the Social Cost of Carbon established a consensus SCC figure to be used for agency policy analyses.24 However, even this consensus document did not pinpoint a specific SCC figure, since its level hinges on the assumptions adopted for the analysis. That study drew on three different integrated assessment models and used them to (1) draw from a set of population, economic, and energy trajectories used to forecast economic growth and associated greenhouse gas emissions; (2) draw from a distribution of the effect of global warming associated with different levels of atmospheric carbon concentrations; and (3) draw from three possible discount rates of 2.5 percent, 3.0 percent, and 5.0 percent. In its 2013 report, the Interagency Working Group suggested that agencies undertake sensitivity analysis for policies using four different SCC estimates: three based on discount rates of 5 percent, 3 percent, and 2.5 percent, and a fourth based on the 95th percentile of the SCC estimate across the three integrated assessment models at a 3 percent discount rate, which is a worst-case scenario. The resulting set of possible SCC values is $12, $43, $65, and $129 (in 2007 dollars). Adoption of the 3 percent discount rate assumption usually used in regulatory analyses leads to selection of the $43 figure. Such SCC estimates consequently provide the basis for establishing a carbon tax and for determining the marginal benefits associated with other climate change policies. Note that these SCC values pertain to the economic benefits to the entire world, not just to citizens of the United States. The domestic share of benefits is only 7–23 percent of these amounts. A pivotal benefit assessment issue is that of standing, in particular, whose valuations should be counted.25 The perspective of U.S. statutes and executive orders is that of a domestic orientation, emphasizing the benefits to U.S. citizens. An alternative approach to benefit assessment is to take a global perspective, recognizing that reducing global warming is a global public good that will require global cooperation. An intermediate position is to assign the dominant role to the benefits to U.S. citizenry but also to recognize that these benefits include altruism to other nations and the value of reciprocity, as U.S. climate change initiatives will encourage other countries to reduce their carbon emissions. Whether such a value will equal the global SCC estimate is unclear.
Assessing the Merits of Global Warming Policies
Although a precise assessment of the optimal policy relating to global warming is not possible, one can frame the issues and obtain a sense of the types of concerns that are being addressed in the context of what will prove to be an ongoing policy debate.26 Figure 21.11 sketches the marginal cost curve MC for addressing global warming by controlling the emission of greenhouse gases. This has been the approach taken by economists such as William D. Nordhaus.27 The first of the three policy options is reducing chlorofluorocarbons, such as bans on the use of freon in refrigerators. These are the low-cost options that the United States and other countries have already adopted. The second policy option shown is reforestation, which will reduce carbon dioxide and thus reduce global warming, since forests convert carbon dioxide into oxygen. The third policy option shown is the imposition of a global carbon tax, which will penalize usage of oil or coal to produce energy, thus recognizing the environmental externalities they impose.

Figure 21.11 Establishing the Optimal Global Warming Policy

Also shown in figure 21.11 are two marginal benefit curves, designated “MB (low)” and “MB (high).” The dependence of the efficient policy choice on the MB curves highlights the importance of the SCC in establishing the optimal policy level. The purpose of illustrating the two curves is to indicate how the policy might change depending on our uncertainty regarding the ultimate societal implications that global warming will have. What is clear from this figure is that even in the case of the low marginal benefit curve, some actions are clearly worthwhile. Elimination of chlorofluorocarbons, as has been done, and the imposition of some global carbon tax are clearly efficient, even when the low-benefit scenario prevails. If benefits are at a higher level, then policies of reforestation and a steeper global carbon tax are also worthwhile. Whereas in most environmental contexts, it is the marginal costs that are more uncertain than the marginal benefits, in this long-run environmental context, benefits also pose substantial uncertainty. This uncertainty is at a very fundamental level. There is even a debate over whether on balance, global warming will be beneficial or adverse to our economy. However, even at the very low level of losses from global warming that are assumed in figure 21.11, some policies to reduce greenhouse gases are desirable. How Should We React to Uncertainty? Although further study to resolve these uncertainties is clearly a desirable policy alternative, if we had to take action today, an economic issue arises as to whether the substantial uncertainties imply that we should err on the side of caution or on the side of reckless abandon. As we continue to study the climate change issue, there will also be calls for policy action. One approach that has gained widespread support is the “no regrets” option. We should clearly adopt policies, such as energy conservation, that would be desirable irrespective of what we ultimately learn about the implications

of climate change. Whether we should go beyond the “no regrets” policy is more controversial. Some insight into resolving this problem is provided by examining the classic irreversible development decision situation.28 Figure 21.12 illustrates the basic irreversible investment paradigm. A developer must choose the degree of current development, where the benefits and costs of this development at the present time are known. There is, however, uncertainty regarding the degree to which environmental preservation will be valued in the future. There is some probability p that the preservation will have a high value, and there is some probability 1 − p that the preservation will have the same value that it does at the present time. In this situation of uncertainty, how should one choose the extent to which one will develop the scarce resource, such as conversion of a national forest into a shopping center and suburbs?

Figure 21.12 Irreversible Environmental Decisions
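A minimal numerical sketch of the decision structure in figure 21.12 shows why uncertainty about future preservation values pushes toward underdevelopment. All of the numbers below (the development payoff, the two preservation values, and the probability p) are hypothetical and chosen only for illustration.

```python
# Hypothetical payoffs for the irreversible development decision in figure 21.12.
develop_payoff = 100            # net benefit of developing the site today
preserve_now = 80               # value of preservation at today's valuation
preserve_high = 200             # preservation value if society later values it highly
p = 0.4                         # probability that the high preservation value prevails

# Myopic comparison using only today's values:
myopic_choice = "develop" if develop_payoff > preserve_now else "preserve"

# Forward-looking comparison: development is irreversible, so developing today
# forfeits the chance that preservation turns out to be highly valued.
expected_preservation = p * preserve_high + (1 - p) * preserve_now
farsighted_choice = "develop" if develop_payoff > expected_preservation else "preserve"

print("Myopic choice:               ", myopic_choice)          # develop (100 > 80)
print("Expected preservation value: ", expected_preservation)  # 128
print("Farsighted choice:           ", farsighted_choice)      # preserve (100 < 128)
```

The larger the probability p or the high preservation value, the stronger the case for deferring development, exactly as the text argues; note that the result relies only on expected values, not on any risk aversion.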

In general, the answer is that one should err on the side of underdevelopment in such situations. Moreover, the greater the probability that preservation will have a high value and the greater that increase in value, the more one should alter one's current decision from what one would select based on a myopic assessment of the benefits and costs of the development policy. This principle favoring underdevelopment does not generalize to every situation in which irreversible decisions have to be made. For example, companies installing pollution control equipment might rationally choose to overinvest in such equipment if they expect the standard to be tightened in the future. Much depends on the character of the problem and the nature of the uncertainty. However, for problems like global warming, the main uncertainty is with respect to the potential for the benefits of pollution control to rise above the current benefits associated with pollution control. The general policy maxim with respect to the prospect of possible increases in benefit levels is that conservatism is the best policy. Moreover, it is noteworthy that this conservatism arises wholly apart from the presence of any risk aversion. Society does not choose to err on the side of caution because we are unwilling to engage in risks. Instead, the bias arises because the expected payoffs from development in the future may be much less than they are today, and we should take this possible change in values into account.
Multiperson Decisions and Group Externalities
Externality problems become particularly complex in the context of group decisions. In the case of an individual firm and the citizenry, one has to worry only about the actions of one economic actor. However,

in actual practice, many of the most important externalities arise from the decentralized decisions of a variety of actors or, in the case of climate change, the policies of a variety of countries. In these contexts, some coordination mechanism is often desirable to promote behavior that will be collectively beneficial to society. Environmental problems are becoming increasingly global in scope and require international cooperation. In 1987, more than 190 countries signed the Montreal Protocol on Substances that Deplete the Ozone Layer to phase out chemical products that were depleting the ozone layer. The agreement took effect in 1989. The thinning of the ozone layer boosts the risk of skin cancer. Because of this powerful linkage, this successfully implemented policy is estimated to have saved 6.3 million U.S. lives by 2165.29 Chlorofluorocarbons, which were widely used in air conditioners and refrigerators, were among the many substances that have been successfully phased out. The December 2015 Paris Agreement likewise was an international agreement in which 195 countries committed to keeping the global temperature increase below a 2 degrees Celsius increase from the preindustrial levels. On June 1, 2017, the Trump administration indicated a plan to withdraw from the Paris accord. The economic structure of these and similar international agreements follows that of a multiperson group externality problem because the actions by any one country have implications for all others as well. The Prisoner’s Dilemma The standard situation in bargaining theory where uncoordinated action gives rise to an inferior outcome is that of the Prisoner’s Dilemma. Suppose that there are two partners in crime, each of whom has been captured by the police. The prisoners are held separately, preventing cooperation. The police offer to lighten each prisoner’s sentence if he will incriminate the other. The prisoner must make a risky decision, based on what he believes the other is most likely to do. Following the standard scenario, the prisoners each choose his preferred strategy of talking to the police. Talking is a dominant strategy for each party, given any particular behavior on the part of the other prisoner. However, if both prisoners had agreed not to talk, they would have been better off than they will be after they both incriminate each other. The outcome is consequently Pareto inferior (that is, each gets a lower valued payoff) compared to the situation in which both of them remain silent. The N-Person Prisoner’s Dilemma Various social situations also arise in which there are incentives for individual behavior that do not lead to optimal group outcomes. Figure 21.13 illustrates a multiperson Prisoner’s Dilemma, using a methodology developed by Nobel laureate Thomas C. Schelling, where the particular context being considered is the purchase of a large or small car.30 We will suppose for concreteness that consumers prefer large cars to small cars. Thus, for any given number of small cars on the market along the horizontal axis, the consumer’s payoff received for using a large car exceeds that for a small car. The result is that because everybody has a dominant strategy to purchase a large car, we end up at the equilibrium, 0. This equilibrium is not a social optimum, however. In particular, if we could constrain everyone to purchase a small car, we could reach the outcome at point S, which has a higher value than 0. The reason some constraint is needed is that this equilibrium is not stable. 
Any individual driver has an incentive to break away and purchase a large car, leading to an unraveling until we reach the stable equilibrium at 0. Thus, some government regulation is required.

Figure 21.13 The Multiperson Prisoner’s Dilemma
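The Prisoner's Dilemma logic described above can be checked mechanically. The sketch below uses a hypothetical payoff matrix (the specific sentence lengths are illustrative, not taken from the text) and verifies that talking is a dominant strategy for each prisoner even though mutual silence is better for both.

```python
# Hypothetical Prisoner's Dilemma payoffs: entries are (years in prison for A, for B).
# Lower is better, so each prisoner prefers the smaller number.
payoffs = {
    ("silent", "silent"): (1, 1),
    ("silent", "talk"):   (10, 0),
    ("talk",   "silent"): (0, 10),
    ("talk",   "talk"):   (6, 6),
}
actions = ("silent", "talk")

def best_response(other_action, player):
    # Choose the action minimizing this player's sentence, given the other's action.
    def sentence(action):
        profile = (action, other_action) if player == 0 else (other_action, action)
        return payoffs[profile][player]
    return min(actions, key=sentence)

for other in actions:
    assert best_response(other, player=0) == "talk"
    assert best_response(other, player=1) == "talk"

print("Talking is a dominant strategy for both prisoners.")
print("Equilibrium (talk, talk) sentences:      ", payoffs[("talk", "talk")])
print("Pareto-superior outcome (silent, silent):", payoffs[("silent", "silent")])
```

The same structure underlies the car-size example in figure 21.13: with many players, each individual's dominant strategy leads to the inferior equilibrium at 0 unless some coordinating rule constrains the choices.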

Applications of the Prisoner's Dilemma
Group externalities such as this arise in a variety of contexts. In international whaling, exercising some restraint in terms of the number of whales that are caught in any year will maximize the value of the whaling population. Thus, even if one were simply concerned with the commercial value of the whales, some limitation on whaling is optimal. However, from the standpoint of the individual fisherman, it is always optimal to catch as many whales as you can. If all whaling vessels follow their dominant strategy, as most of them have, the result is that the whaling population will be overfished and that we will have a dwindling number of whales. In this instance, the optimal strategy is to provide for some restraint but not a complete abolition of whaling activities. Achieving this moderation in the degree of whaling has proven to be a long-term international regulatory problem. The whaling example has proven to be more than a hypothetical case. The Georges Bank fishing area off New England, which had formerly been one of the richest fishing grounds in the Atlantic Ocean, was the source of fish such as cod and haddock and also served as the principal source of livelihood for fishing villages in New England, such as Gloucester. Restraints on fishing proved to be ineffective, which led the federal government in 1994 to propose the more drastic step of closing these fishing grounds altogether so that the fishing stocks could revive. Unfortunately, the difficulty in monitoring and enforcing appropriate fishing restrictions has proven to be so great that the government was led to a much more costly and disruptive regulatory policy option that has led to the abandonment of a fishing fleet and the shutdown of a major industry throughout much of the New England area. To promote marine diversity, in 2016 President Obama declared a section of Georges Bank a national monument—the Northeast Canyons and Seamounts Marine National Monument. Mining and fishing are severely restricted, but commercial fishing

for American lobster and red crabs will be permitted for seven years. Similar classes of issues arise in the context of vaccinations. If a critical mass in society has received an inoculation, it is not optimal to get vaccinated because the risk of contracting the disease will generally be much less than the expected health loss due to an adverse reaction to the vaccine. We clearly need some coordinating mechanism to ensure that a sufficient portion of the population has been vaccinated, but given the fact that society has established such a vaccination requirement, each of us has an incentive to be exempted from the vaccination. Similarly, homeowners who are doing battle against Japanese beetles will be able to reduce their efforts if all their neighbors use insecticides. However, it is essential to establish a sufficiently broad insecticide use to control the beetle population. The initial insecticide user may obtain little benefit unless a sufficient number of her neighbors also use the insecticides. At low and high levels of communitywide insecticide use, the individual incentive to use insecticides will be lacking. There is no voluntary incentive for an unassisted market process to begin generating the decentralized decisions needed to reach the social optimum. The general result that pertains in situations with group externalities is that some form of coordination is often worthwhile. This coordination often takes the form of explicit regulations. Hockey players are required to wear helmets, traffic rules require that we drive on the right side of the road, and daylight saving requirements establish uniform changes in the time schedule for everyone. Individually, the payoff of shifting to daylight saving time is quite low if no one else in society shifts, but if we can all coordinate our actions, we will all be better off. Enforcement and Performance of Environmental Regulation Enforcement Options and Consequences The promulgation of regulations does not ensure that firms will comply with them. As a result, the EPA and other regulatory agencies couple the issuance of regulations with vigorous enforcement efforts. In the case of major sources of air and water pollution, the EPA attempts to inspect the emissions source at least once per year. Moreover, in the case of water pollution discharges, the EPA requires by law that the firms submit a record of the nature of the discharge to the EPA and that each firm report its compliance status with the pollution permit that it has been given. The enforcement task with respect to conventional pollutants is generally viewed as being the simplest. Next in terms of the degree of difficulty is enforcement with respect to toxic chemicals. These chemicals are often more difficult to monitor than are conventional pollutants because of specific chemical testing that must be undertaken. In the case of the EPA Toxics Release Inventory (TRI), the nature of the enforcement is through the public disclosure of information on the chemicals that a facility is discharging into the environment. Although the TRI effort emerged as part of the movement toward community right-to-know efforts, the main audience for the information has been journalists and investors rather than the general public. 
The release of information about toxic emissions generates the prospect of EPA enforcement and adverse stock market impacts, which often leads firms to reduce their emissions, particularly for very dangerous chemicals.31 The nature of the pollution source also affects the feasibility of effective enforcement. Hazards that arise on a decentralized basis (such as toxic wastes, radon in consumers’ homes, and asbestos in buildings) often impose substantial enforcement problems because of the large number of pollution sources involved and, in

the case of toxic chemical dumping, the difficulty of monitoring the party responsible. Enforcement of environmental regulations pertaining to chemicals and pesticides varies in effectiveness, depending on the nature of the regulation. The process of screening chemicals and regulating the chemicals that are being sold and used commercially is quite effective because of the ability to monitor mass-produced consumer goods. The EPA also can readily monitor the hazard warnings attached to these products. Much more difficult to monitor is the manner in which the products are used. The disposal of chemical containers and the dilution of insecticides are among the decentralized activities that pose almost insurmountable problems. The best that the EPA can achieve in these instances is to provide risk information to foster the appropriate safety-enhancing action on the part of the product users. In these various inspection contexts, the EPA has several enforcement tools that it can use. Not all of these involve fines, but they do impose costs of various kinds on the affected firms. The EPA can inspect a firm. It can request that the firm provide data to it. It can send the firm letters, or it can meet with the firm's managers to discuss pollution control problems. Most of the EPA's contacts with firms are of this character. In terms of sanctions, two classes of financial penalties can be levied. The first consists of administrative penalties that are usually modest in size and of limited applicability. The main sanction available to the EPA is not the penalties that it can assess itself but instead the penalties that can be assessed through prosecution of the polluter by the U.S. Department of Justice. In severe, flagrant, or persistent cases of violations of EPA standards, the EPA frequently refers the case to the U.S. Department of Justice for civil or criminal prosecution. The costs associated with the prospective litigation, as well as the possibility that substantial fines may be imposed, often provide a compelling enforcement sanction.
Hazardous Wastes
Public opinion polls typically rank the cleanup of hazardous wastes as one of the most important environmental problems. Beginning in the 1980s, the EPA became much more concerned with toxic substances and hazardous wastes. This cleanup effort, known as the Superfund Program, has sought to eliminate the risks posed by these chemical waste sites, which chiefly consisted of cancer hazards to the surrounding population. What is perhaps most striking about this environmental policy area is the substantial mismatch between the public's concern with the environmental risks and the level of the hazards that are addressed by this environmental cleanup effort. The source of the difficulty can be traced in part to the legislative mandates under which the EPA operates. There is no stipulation that the EPA balance the benefits to surrounding populations against the costs of cleanup. Instead the focus is on risk alone. Moreover, since the cleanup costs will be borne largely by the potentially responsible parties, which are private firms rather than the citizens affected by the hazard, there will be considerable political pressure for uncompromising cleanup remedies, such as removing the contaminated waste from the site and incinerating it. The policy trigger for cleanup is that a site must be cleaned up if it poses a potential lifetime cancer risk of at least 1 in 10,000, and cleanup is at the EPA's discretion, provided the lifetime risk is at least 1 in 1,000,000.
Recall from table 19.3 that many routine daily activities pose a risk from a single event of one chance in a million. Eating forty tablespoons of peanut butter, traveling ten miles by bicycle, and smoking 1.4 cigarettes all pose a one-in-a-million fatality risk. If one were to undertake such activities over one’s lifetime rather than in a single episode, then the overall risk would be even greater and would dwarf the risk posed by many hazardous waste sites that have been targeted for cleanup. Before deciding on the level of the hazard, the EPA must first ascertain who lives near the site and will be

exposed to the risk. In addition to examining current populations, the EPA assumes that a risk exists if some future population might be exposed to the risk, even if such a chance is unlikely. U.S. Supreme Court Justice Stephen Breyer, for example, noted that at one Superfund site involving a case in which he ruled, a modest cleanup effort could make the dirt at the site clean enough so that children could eat the dirt for seventy days per year without increasing their risk.32 However, the EPA spent an additional $9.3 million to clean up the site so that children would be able to eat the dirt without risk for up to 245 days per year. What was noteworthy about the site is that no children lived near the site, which was a swamp. Similar unrealistic assumptions may affect the risk estimates at other sites, such as the North Carolina Superfund site that was a vacant lot at the time, but the analysis assumed that a factory would be built there in the future and that during their lunch break, workers would swim in a nearby creek, exposing them to the contaminated water. For the EPA to find a risk, there need not be a population actually exposed to the hazardous waste site. If a person could potentially move to that area and have some potential for exposure in the future, then the EPA will treat the risk as being just as consequential as would be the case if the site had a large exposed population. The net result is that whether there are in fact exposed populations plays no role in triggering an EPA cleanup, a fact that will have important consequences for the efficacy of cleanups. Note that the EPA policy trigger for cleanup is whether a real or hypothetically exposed future individual has reached a critical lifetime risk threshold. This focus on individual risks consequently ignores the size of the total population at risk. Densely populated areas in close proximity to a Superfund site receive the same policy weight as a single hypothetically exposed future individual. Because minority populations tend to be disproportionately concentrated near hazardous waste sites, the practical effect of this approach is to give inadequate weight to such sites in the priority setting process. In contrast, consideration of economic benefits in terms of the expected number of cancer cases prevented would give greater weight to these highly populated sites. When calculating the individual cancer risk at a site, the EPA uses conservative risk estimates. To see how conservative biases enter the analysis, it is useful to consider the components of the calculation. Lifetime excess cancer risk is given by
Lifetime excess cancer risk = (Concentration × Intake rate × Exposure frequency × Exposure duration × Toxicity) / (Body weight × Averaging time).
The denominator terms are not controversial, as the EPA uses an average body weight assumption, while the averaging time component simply controls for the proper units in the calculation. The key components are the five elements in the numerator of the calculation. For each of these individual variables, the EPA uses a conservative assumption, typically the 95th percentile of the distribution. Thus, there is only one chance in twenty that the exposure duration could be as great as the assumed value. By using such upper-bound values for each of the five parameters in the numerator, the result is a cascading of conservatism bias, so that the resulting estimate is well beyond the 99.99th percentile of the true risk distribution. Economists instead would generally recommend calculating the expected number of cancer cases based on the mean values of the risk. If there is political support for being very protective in the cleanup actions, that concern can be expressed through a high unit benefit value on the cancer cases prevented. The current EPA practice distorts regulatory priorities by shifting the policy emphasis toward dimly understood risks that may pose no threat to existing populations.
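The cascading effect of stacking conservative parameter choices can be illustrated with a small Monte Carlo sketch. The distributions below are hypothetical stand-ins for the five numerator parameters, not EPA data; the point is only that taking the 95th percentile of every input produces a risk estimate far out in the tail of the distribution of risks implied by the inputs' actual variability.

```python
import random
random.seed(1)

N, SIGMA = 100_000, 0.5
# Hypothetical stand-ins for the five numerator parameters (not EPA data):
# each parameter is lognormally distributed across exposed individuals.
draws = [[random.lognormvariate(0, SIGMA) for _ in range(5)] for _ in range(N)]

def product(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

def p95(col):
    return sorted(col)[int(0.95 * (N - 1))]

risks = sorted(product(d) for d in draws)                      # simulated individual risks
conservative = product(p95([d[i] for d in draws]) for i in range(5))
share_below = sum(r <= conservative for r in risks) / N

print(f"Mean simulated risk:          {sum(risks) / N:.2f}")
print(f"All-95th-percentile estimate: {conservative:.2f}")
print(f"That estimate sits at the {share_below:.4%} percentile of simulated risks")
```

With these stand-in distributions, the product of five 95th-percentile values lands around the 99.99th percentile of the simulated risk distribution and is many times the mean risk, which is why economists recommend basing expected cancer cases on mean values and expressing any desired extra protectiveness through the benefit value placed on each case prevented.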

What will be the consequence of ignoring cleanup costs, mismeasuring the magnitude of the risk, and failing to account for the size of exposed populations? One would expect hazardous waste cleanup efforts to be very ineffective. Table 21.4 summarizes the cost effectiveness of various Superfund cleanup efforts measured in terms of the cost per expected case of cancer prevented. After ranking the sites from the most to the least cost effective, James T. Hamilton and W. Kip Viscusi calculated the cost effectiveness at these different levels.33 U.S. Supreme Court Justice Stephen Breyer and others have hypothesized that a 90–10 principle is in effect, whereby agencies expend 90 percent of their resources to clean up the last 10 percent of the risk. In the case of the Superfund Program, the drop-off in efficacy is much more stark. The 5 percent of Superfund cleanup efforts that are the most effective address the hazards that will eliminate 99 percent of the human health risks. As indicated by the statistics in table 21.4, less than 1 percent of the cancer cases are eliminated for the least effective 95 percent of the expenditures.

Table 21.4 Summary of Superfund Cost Effectiveness

Percentage of Remediation Expenditures,   Cumulative Percentage of Total   Marginal Cost per Cancer
Ranked by Cancer Cost Effectiveness       Expected Cancer Cases Averted    Case Averted ($ million)
 5                                         99.47                                238
25                                         99.86                              1,815
50                                         99.96                             10,565
75                                         99.97                             46,341
95                                         99.98                            395,335

Source: James T. Hamilton and W. Kip Viscusi, Calculating Risks?: The Spatial and Political Dimensions of Hazardous Waste Policy (Cambridge, MA: MIT Press, 1999), table 5.6. Note: Cost figures have been converted to 2015 dollars. The following assumptions are used: average exposure concentrations and intake parameters, 3 percent discount rate and no growth factors for cost, 3 percent discount rate for cancers, and a ten-year latency period for the development of cancer.
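The construction of such a table, ranking cleanups by cost per expected cancer case averted and then accumulating expenditures and cases, can be sketched as follows. The site-level costs and cases below are hypothetical, invented solely to illustrate the kind of calculation Hamilton and Viscusi performed on actual Superfund data.

```python
# Hypothetical (cost in $ million, expected cancer cases averted) for ten cleanups.
sites = [(5, 4.0), (12, 2.5), (30, 1.0), (40, 0.5), (60, 0.20),
         (80, 0.05), (90, 0.02), (120, 0.01), (150, 0.005), (200, 0.002)]

# Rank from most to least cost effective (lowest cost per case first).
sites.sort(key=lambda s: s[0] / s[1])

total_cost = sum(cost for cost, _ in sites)
total_cases = sum(cases for _, cases in sites)

spent = averted = 0.0
for cost, cases in sites:
    spent += cost
    averted += cases
    print(f"spent {spent / total_cost:5.1%} of funds | "
          f"averted {averted / total_cases:6.2%} of cases | "
          f"this cleanup: ${cost / cases:,.0f}M per case")
```

Even with these made-up numbers, a small share of spending accounts for nearly all of the cases averted, the same pattern that table 21.4 documents for actual Superfund expenditures.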

The drop-off in cost effectiveness is enormous. By the fifth percentile, the cost per case of cancer prevented is $238 million, and the median Superfund cleanup expenditure prevents cases of cancer at a cost of $10.6 billion per case.34 Even these estimates, high as they are, understate the actual cost per case of cancer prevented because they are based on conservative EPA health risk assumptions and conservative assumptions about the degree to which populations will in fact be exposed to the risk. If EPA policy decisions are not responsive to economic efficiency, what is it that drives them? The factors that appear to be most influential are political. The voting rate for the county, for example, is particularly influential in determining whether a site is cleaned up and the stringency of cleanup. The cleanup of hazardous waste has also been the focal point of the environmental equity movement, many of whose members have suggested that minorities are disproportionately exposed to hazardous waste. Much of this problem can be traced to the fact that minorities have less political leverage than do more affluent white populations. It is also noteworthy that targeting cleanups based on the economic efficiency concerns would do more to advance environmental equity than the current politically based process. Minority sites have higher benefit-cost ratios, or lower cost levels per case of cancer prevented. Economic efficiency concerns are in fact supportive of environmental equity by making cleanup of hazardous wastes equally meritorious, irrespective of the political clout of the affected population. Contingent Valuation for the Exxon Valdez Oil Spill One of the most controversial areas on the frontier of environmental economics is the use of contingent

valuation techniques to value environmental damages. Using this approach, researchers design survey questions to elicit the values that people attach to scarce environmental resources for which no good market values exist. The debate over the soundness of the technique reached its peak with respect to the Exxon Valdez oil spill, for which damage levels in the billions of dollars considerably raised the stakes of the economic debate over this methodology. Calculating the losses to fishermen, the costs of cleaning up the shoreline, and related financial allocations is fairly straightforward. However, how should one calculate the environmental damages suffered by the entire U.S. citizenry because of the Exxon Valdez oil spill? Controversies over the use of contingent valuation continue to rage and have led to a government report that sought to provide some guidance; contributors to the report included two Nobel laureates in economics, Kenneth Arrow and Robert Solow.35 To get some sense of the nature of this enterprise, consider the contingent valuation study undertaken by the state of Alaska as part of the litigation over the environmental damage. The implications of this survey were not simply of academic interest. They served as critical inputs to the litigation process and would have played a substantial role in the court deliberations had the case not been settled out of court. The U.S. Department of Justice also undertook a series of contingent valuation studies, and the Exxon Corporation solicited numerous economists as well to assess the damages and to comment on the validity of the other parties’ assessments. The survey developed for the state of Alaska reflects the general character of the contingent valuation approach.36 The objective was to determine how much the public should be compensated to offset the loss they suffered because of the spill. After asking respondents about their general views on a variety of policy issues, the survey asked respondents if they were acquainted with the Exxon Valdez oil spill, which occurred in Prince William Sound, Alaska, in March 1989. As a result of this incident, 11 million gallons of crude oil spilled into the water. At that point, the respondents indicate whether they recalled hearing of the Exxon Valdez oil spill. Although the survey proceeds regardless of whether they had heard of this spill, some debate remains as to whether the valuations should matter if people have not heard of the spill and suffered a welfare loss. The alternative perspective is that the value assigned should be based on what it would be if people had full information regarding it. The survey then undertook a substantial educational effort regarding the character of the spill. Respondents considered maps and photos indicating the area on Prince William Sound that was contaminated by the spill. In addition, they were shown photos of wildlife in the area, including sea ducks, murres, seagulls, and sea otters. After viewing a picture of the tanker sailing through the sound, respondents then considered a variety of maps indicating the extent of the spill, which affected about 1,000 miles of shoreline. They also viewed a series of photos showing the oiled shore and the cleanup activity. Despite these efforts, the effect on wildlife was significant. 
The survey informed the respondents that “22,600 dead birds were found” and that scientists estimate that “the total number of birds killed by the spill is between 75,000 and 150,000.” This death total included 5,000 bald eagles. Respondents also learned that 580 otters and 100 seals were killed by the spill. They received information about how long it would take for these populations to return to normal. One of the critiques of contingent valuation studies is that respondents may not be sensitive to whether they have learned that 100 birds or 10,000 birds have been killed, as they may give the same willingness-to-pay answer to prevent either incident. The unresponsiveness of the willingness-to-pay values to the extent of the environmental damage has been designated the “embedding” problem. It may be that respondents are not in fact expressing their preference for the particular environmental good specified in the survey but rather are simply voicing support for the environment more generally. Incorporating a detailed series of rationality tests in a survey

can help test for whether this potential problem is in fact pertinent. After learning of the damage caused by the Exxon Valdez spill, respondents are asked how much they would be willing to pay for an escort ship policy that would prevent such spills from occurring over the next ten years. Without such a program, one spill would be expected on average, according to the survey. The price mechanism would be a one-time tax on both oil companies and on households, where the household tax would be levied through higher federal income taxes. Respondents then considered a variety of possible costs for the program, such as $60 per household in higher taxes, and were asked whether they would vote for such an effort. The median response for the households was a willingness to pay for the escort program on the order of $49 per household, or a total value for the United States of $2.8 billion. Some critics might think that this willingness to pay is inordinately large for a comparatively modest escort program. Although the approach taken in the Alaskan survey is one possible survey methodology, there are others as well. Surveys can differ considerably with respect to the level of detail that is presented to the respondents about the spill. Moreover, how the effects of the spill are presented can be influential. If the respondents were to consider the percentage of birds in the local population that were affected as opposed to the absolute number, their view might be different. Moreover, a variety of policy contexts could be used. One might, for example, ask how much one would be willing to pay to reverse the effects of the spill through an ambitious cleanup operation. For prospective scenarios, a wide range of policy options could be suggested to respondents as being potentially effective. Moreover, payment mechanisms other than higher federal taxes could come into play, such as higher gasoline prices. What is essential, however, is that the payment mechanism be credible and that respondents indicate their true willingness to pay for prevention efforts, rather than simply naming some hypothetical dollar figure to impress an interviewer who has spent half an hour showing them pictures of dead birds and sullied shorelines. Because no reliable market prices exist for many natural resources and because these valuations are critical both in court cases and for policy decisions, the controversies over how outcomes should be valued will continue to rage for many years to come. Kling, Phaneuf, and Zhao provide an instructive categorization of the pertinent criteria to use when assessing the validity of stated preference methods: criterion validity, convergent validity, construct validity, and content validity.37 Criterion validity pertains to whether the stated preference for willingness-to-pay amounts are the same that would prevail if actual payments were elicited. Convergent validity examines how stated preference values correlate with other measures, such as whether the stated preference method would yield values comparable to those generated by a revealed preference approach. Construct validity considers the rationality tests based on economic theory. These rationality and consistency tests have been the principal focus of tests of stated preference approaches. As the payment amount falls, does the proportion of those willing to purchase the environmental good increase? Similarly, if the quantity of the good is greater, people should be willing to pay more, thus responding to scope. 
Since environmental goods are normal goods, the income elasticity should be positive, with some economists claiming that it should also be above 1.0 if the environment is a luxury good. And willingness-to-pay and willingness-to-accept values should be the same for small changes in either direction. Finally, content validity requires that the survey accord with best economic practices, including the description of the scenario, the survey valuation structure, and the econometric analysis. Although surveys need not include explicit examination of all these criteria, demonstrating the validity of the stated preference estimates from a variety of different perspectives will bolster our confidence in the results.
Senior Discount for the Value of a Statistical Life

In 2003, the EPA generated a national controversy with respect to the value it applied to the reduced risks for senior citizens. The context of this controversy was a proposed air pollution policy called the Clear Skies Initiative. For many air pollution reduction efforts, the benefits are concentrated at the tails of the age distribution, as children and the elderly are most at risk. Should the same benefit value be applied to each age group? The elderly have a much shorter life expectancy at risk, so some age adjustment might seem reasonable. Based on willingness-to-pay survey results, the EPA applied a 37 percent senior discount to their benefits, leading to an outcry from senior citizen groups, such as AARP: "Seniors on sale, 37% off." The EPA administrator resigned shortly after this controversy emerged. Table 21.5 summarizes the reduced fatalities and the associated benefits. Two different benefit estimates were made, one based on long-term exposures and the other based on short-term exposures. If decreased expected fatalities are valued using the EPA's uniform value that at the time was $6.1 million, then one obtains the benefit estimate in the "Constant Value of Life" column. The EPA also showed alternative benefit estimates adopting a 37 percent senior discount, and this change decreases benefits by $13.5 billion based on long-term exposures and by $7.2 billion based on short-term exposures. The stakes involved in terms of the level of benefits were quite substantial.

Table 21.5 Age Group Effects on the Benefits from the Clear Skies Initiative

                                               Benefits of Reduced Mortality ($ billion, undiscounted)
Age Group               Reduced Annual         Constant Value   Value with Senior   Consumption-Adjusted
                        Fatalities in 2010     of Life          Adjustment          Value of Life
Base estimates—long-term exposure
Adults, 18–64                 1,900                11.6              11.6                 11.6
Adults, 65 and older          6,000                36.6              23.1                 37.1
Alternative estimate—short-term exposure
Children, 0–17                   30                 0.2               0.2                  0.1
Adults, 18–64                 1,100                 6.7               6.7                  6.7
Adults, 65 and older          3,600                21.9              14.7                 22.3

Source: The reduced annual fatalities figures are from the EPA’s Technical Addendum: Methodologies for the Benefit Analysis of the Clear Skies Initiative (Washington, DC: U.S. Environmental Protection Agency, 2003), Table 16. The 37 percent senior discount is from the EPA’s Technical Addendum: Methodologies for the Benefit Analysis of the Clear Skies Initiative (Washington, DC: U.S. Environmental Protection Agency, 2002), p. 35. The $6.1 million figure per life is from the EPA’s Technical Addendum: Methodologies for the Benefit Analysis of the Clear Skies Initiative (Washington, DC: U.S. Environmental Protection Agency, 2003), p. 26. Estimates in the final column are drawn from Thomas J. Kniesner, W. Kip Viscusi, and James P. Ziliak, “Life-Cycle Consumption and the Age-Adjusted Value of Life,” The B.E. Journal of Economic Analysis & Policy 5 (January 2006): article 4.
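The long-term exposure rows of table 21.5 can be reproduced from the fatality counts, the $6.1 million value of a statistical life, and the 37 percent senior discount; the short sketch below does that arithmetic. The short-term rows and the consumption-adjusted column rest on additional assumptions reported in the cited sources and are not recomputed here.

```python
VSL = 6.1e6             # EPA's uniform value of a statistical life at the time ($)
SENIOR_DISCOUNT = 0.37  # proportional reduction applied to the benefits for seniors

# Reduced annual fatalities in 2010, base (long-term exposure) estimates from table 21.5.
fatalities = {"Adults, 18-64": 1_900, "Adults, 65 and older": 6_000}

constant_total = adjusted_total = 0.0
for group, deaths in fatalities.items():
    constant = deaths * VSL
    adjusted = constant * (1 - SENIOR_DISCOUNT) if "65" in group else constant
    constant_total += constant
    adjusted_total += adjusted
    print(f"{group:22s} constant ${constant / 1e9:5.1f}B   senior-adjusted ${adjusted / 1e9:5.1f}B")

print(f"Senior discount lowers estimated benefits by "
      f"${(constant_total - adjusted_total) / 1e9:.1f} billion")   # about $13.5 billion
```

The calculation makes clear how mechanically the controversy arose: applying a single percentage discount to the largest group of avoided fatalities removes $13.5 billion of estimated benefits in one step.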

To address this sensitive issue, it is useful to go back to first principles. The appropriate benefit value is the willingness to pay for the risk reduction. This amount could decline for those with a shorter life expectancy, but it also might remain high because of increases in wealth with age. What matters is this willingness-to-pay value, not the quantity of life per se. Theoretically, the value of a statistical life should rise and then eventually fall over the life cycle. The main open question is empirical. How much does this value decline for those with short life expectancies? The empirical evidence is still emerging. One survey of respondents in the United Kingdom indicated that the willingness to pay dropped by 37 percent for senior citizens, while a survey in Canada showed a comparatively flat relationship with age. Labor market evidence on workers' value of a statistical life is consistent with the inverted U-shaped relationship, but estimates differ in terms of the steepness of this decline at the upper end of the age distribution. One set of estimates that accounts for changes in the level of consumption over time indicates that the value of a statistical life for those in their sixties is below their peak lifetime value, but it is still above comparable values for people in their twenties and thirties. Using these values, shown in the final column of table 21.5, boosts the overall benefit values, even if there is a "senior discount" relative to one's peak value of a statistical life. While a definitive set of estimates of the appropriate senior discount has not emerged, economic estimates of the role of such heterogeneity in benefit values are likely to reach a consensus in much the same way as agreement has been reached on the general range of the average value of a statistical life.

Table 21.6
National Pollution Emission Trends

But what should we do about equity concerns? If we value the lives of seniors less, is that fair? Alternatively, is it fair to value young people's lives at the same amount as seniors because doing so places a higher value on each year of life for seniors than on each year of life for those at the young end of the age distribution? Viewed in this manner, the seniors' claim to be treated "fairly" may seem less compelling. However, if senior citizens do have a high willingness to pay for risk reductions, it would be inefficient and unfair not to count their benefit values.

Evaluating Performance

The objective of regulatory policy is not simply to promulgate and enforce regulations but also to improve environmental outcomes. Assessing the impact of regulations is complicated by the fact that we observe trends in environmental quality, but we do not know what these trends would have been in the absence of regulation. Nevertheless, examination of pollution trends reveals the kinds of progress reflected in more formal statistical analyses. Table 21.6 summarizes the pollution trends from 1970 to 2014 for seven principal categories of air pollution emissions. One category not shown is that of lead pollution, which has been all but eliminated by EPA regulation. Because of some early data gaps with respect to monitoring emissions, let us focus on the period beginning in 1990. Since 1990, all pollutant categories other than ammonia have exhibited steady progress. Particulate matter PM10 (particulate matter less than 10 micrometers in diameter, usually arising from fuel combustion, industrial processes, and motor vehicles) has exhibited improvement since 1990. Modest gains are also evident for the more dangerous PM2.5 pollutants (particulate matter less than 2.5 micrometers in diameter, such as small particles of dust, soot, and smoke). The other pollution categories displayed more consistent improvement, including sulfur dioxide emissions (chiefly arising from stationary fuel combustion and industrial processes), nitrogen oxide emissions (arising primarily from highway motor vehicles and coal-fired electric utility boilers), carbon monoxide emissions (primarily arising from highway motor vehicles), and volatile organic compounds (primarily from fossil fuel consumption and chemical manufacturing). Although a precise test of the EPA's impact on these various pollution measures has not yet been undertaken, it is clear that some progress has been made. Because one would have expected an increase in pollution levels with an expanding economy and a growing population, the fact that any decrease in pollution took place, much less the dramatic declines that have occurred, is evidence of some payoff to society from the costs that have been incurred.

Summary

Environmental problems represent a classic situation in which an externality is being imposed involuntarily. What is most noteworthy about this situation is that the optimal level of pollution is not zero. The fact that an externality is being imposed without a voluntary contract does not mean that the activity should be prohibited. Whether we are talking about second-hand smoke or toxic waste disposal, the efficient level of pollution is generally not zero. However, the efficient level of pollution is also generally not going to be what arises in a market context because the party generating the pollution has inadequate incentives to reflect the social cost imposed by its decisions. Our review of the Coase theorem indicates that the main focal point should be the efficient pollution level, which is the level that would arise under a voluntary contractual situation if parties could contract costlessly. Examining pollution problems in the context of the bargaining problems used to illuminate the Coase theorem also sheds light on the distributional impacts involved. Assignment of property rights not only has distributional implications but also affects the long-run efficiency aspects of the system. Similar concerns arise with respect to the choice of standards versus fines.
Each of these approaches can provide for the same degree of short-run efficiency that can be achieved through a Coasean contractual outcome. However, standards differ from fines in terms of the total costs that will be borne by firms and in terms of their long-run efficiency. Moreover, some other features distinguish the relative attractiveness of these options. Further exploration of the potential role of market trading options is long overdue, but in some contexts, standards may be preferable, so that a particular class of policy options may not always be dominant. The same kinds of methodologies that we apply to analyzing conventional pollutants, such as air pollution, can also be applied to analyzing global warming, as well as to more complex externalities, such as the group decisions that lead to overfishing. Examination of these various contexts as well as the policies that have been developed to address them suggests that considerable insight can be obtained by assessing how efficient markets would deal with externalities, if such markets existed.

Questions and Problems

1. Consider the following basic problem regarding a driver and a pedestrian in an accident situation. The driver makes a decision regarding his degree of care, but the pedestrian has no such decision to make. Payoffs to each party are as follows:

Driving Speed        Total Benefit to Driver        Expected Cost to Pedestrian
Rapid                          170                              160
Moderate                       100                               40
Slow                            90                               10
Suppose that instead of an anonymous driver-pedestrian relationship, we had a two-person society, one driver and one pedestrian.
a. If the driver could undertake voluntary bargains that would be enforceable, what driving speed would result?
b. If both parties have equal bargaining power, what is the predicted settlement amount (that is, amount of transfer from pedestrian to driver)?

2. Suppose that a pulp and paper mill discharges water pollutants that impede the value of the stream for swimming. It would cost the mill $5,000 to install pollution abatement equipment to eliminate pollution, and doing so would result in an additional $10,000 in swimming benefits to the residents downstream.
a. If the residents are assigned the property rights, and if each party has equal bargaining power, what will be the predicted outcome and the dollar transfer between the two parties?
b. If the firm is assigned the property right to pollute, what will be the predicted outcome and the income transfer between the two parties?

3. The U.S. Department of Transportation has just rerouted the interstate highway through your yard, so that you now have to sell your house. The government proposes that it compensate you an amount equal to the market value of your home. Is this fair? Is it efficient? Answer the same questions supposing that instead of the government wishing to purchase your house, some stranger decided that she wants to live in your house. Would it be possible for her to evict you and to pay the market value? Would your answer change if your reservation price for selling the house could be accurately determined, so that you would experience no utility loss from such an eviction? How do you believe the functioning of society would change if such a compensation mechanism were instituted?

4. The discussion in the chapter regarding the desirability of taxes and regulatory standards focused primarily on the short-run issues. However, these different policies also have important dynamic implications, particularly regarding the incentives for innovation. Which type of governmental approach will foster greater incentives to innovate in a beneficial way from the standpoint of decreased environmental and health risks?

5. Suppose that the government must undertake an irreversible policy decision regarding the extent of air pollution regulation. The government is making this decision in a situation of uncertainty, however. In particular, there is some probability p that the benefits will remain the same as they are this year for all future years, but there is some probability 1 − p that benefits will be less in all future years. If we take into consideration the multiperiod aspects, should we err on the side of overregulation or underregulation, compared to what we would do in a single-period choice?

6. Figure 21.13 illustrates a multiperson Prisoner's Dilemma for a situation in which the payoff curves for the two kinds of cars do not intersect. However, externalities may cause the payoffs to intersect because the desirability of different activities may change differentially for the two different decisions. If these payoff curves intersected, with the bottom payoff curve intersecting the top from below, what would be the nature of the market equilibrium that would prevail? Would this equilibrium be efficient?

7. Environmentalists argue that because the actions we take today will have an irreversible effect on climate change, we should take action now and err on the side of excessive restrictions. Some economists, however, have argued that because of the opportunity to acquire additional information, we should postpone a decision until we learn more about the merits of taking a regulatory action. Which strategy do you find more compelling and why?

8. Class Exercise: A useful class exercise is to develop a stated preference survey to determine society's willingness to pay to preserve some local environmental amenity. Examples include rare species of birds or plants in the area, or freedom from the noise of Jet Skis at a local lake. What information would you provide respondents about the environmental amenity? What is the payment mechanism that you would establish for people's willingness to pay? Are there any tests that you can incorporate in the context of your survey to ensure its validity, such as transitivity tests or tests for whether people are willing to pay more for broader environmental commodities that should have a greater value?

9. Climate change policies have impacts that affect the entire world and require international collaboration to ensure that policies will be effective. However, the funding of the U.S. policies is derived from payments by U.S. citizens. When assessing the benefits of climate change initiatives, should the government use the social cost of carbon estimate for the world or only the domestic share of the social cost of carbon? What will be the implications for the stringency of climate change regulations in each instance?

Notes

1. Environmental policies have been at the forefront of economic policies more generally. Many recent annual editions of the Council of Economic Advisers, Economic Report of the President, include extensive discussions relating especially to policies pertaining to climate change. For example, chapter 6 of the 2015 report is "The Energy Revolution: Economic Benefits and the Foundation for a Low-Carbon Energy Future," and chapter 6 of the 2013 report is "Climate Change and the Path toward Sustainable Energy Sources."
2. For a comprehensive analysis of these policy options, see Ted Gayer and John K. Horowitz, "Market-Based Approaches to Environmental Regulation," Foundations and Trends in Microeconomics 1, no. 4 (2005).
3. See Ronald H. Coase, "The Problem of Social Cost," Journal of Law and Economics 3 (October 1960): 1–44. The Coase theorem has given rise to a large body of work in the field of law and economics. See, in particular, Richard A. Posner, Economic Analysis of Law, 9th ed. (New York: Wolters Kluwer Law & Business, 2014).
4. One observant student noted that the manure left by the stray cows on farm B may be a positive externality, as manure serves as a fertilizer that enhances hay production. For concreteness, we will assume that the net externality is negative.
5. The role of the Coase theorem in regulatory contexts is also elucidated in A. Mitchell Polinsky, An Introduction to Law and Economics, 4th ed. (Boston: Aspen, 2011).
6. For an exploration of citizen preferences with respect to smoking restrictions, see Joni Hersch, Alison Del Rossi, and W. Kip Viscusi, "Voter Preferences and State Regulation of Smoking," Economic Inquiry 42 (July 2004): 455–468.
7. Figure 21.1 is a bit of a simplification of our current understanding of water-quality levels. Although the EPA formerly used a water-quality ladder similar to that shown in this diagram, it is now believed that such a rigorous step function does not in fact hold. Rather, the EPA considers the following four different dimensions of water quality: drinking, swimming, fishing, and aquatic uses. The scores on these various dimensions are correlated in the same direction as would be the case if a water-quality ladder existed, and it remains the case that water that is not safe for any of these uses will exhibit the kind of nonconvexity that we will discuss.
8. For more detailed exploration of new source bias, see Robert W. Crandall, Controlling Industrial Pollution: The Economics and Politics of Clean Air (Washington, DC: Brookings Institution Press, 1983).
9. See B. Peter Pashigian, "Environmental Regulation: Whose Self-Interests Are Being Protected?" Economic Inquiry 23 (October 1985): 551–584.
10. A calculation of the optimal pollution tax coupled with a fixed cost levy to promote long-run efficiency to recognize these long-run incentive issues is a nontrivial economic problem. For an analysis of it, see Dennis W. Carlton and Glenn C. Loury, "The Limitations of Pigouvian Taxes as a Long-Run Remedy for Externalities," Quarterly Journal of Economics 95 (November 1980): 559–566.
11. Martin L. Weitzman, "Prices vs. Quantities," Review of Economic Studies 41 (October 1974): 477–491.
12. It should be emphasized, however, that the economists in these administrations have long advocated this approach. For early advocacy of this approach, see the Council of Economic Advisors, Economic Report of the President (Washington, DC: U.S. Government Printing Office, 1990), chapter 6.
13. Robert W. Hahn and Gordon L. Hester, "Where Did All the Markets Go? An Analysis of EPA's Emissions Trading Programs," Yale Journal on Regulation 6 (Winter 1989): 109–153.
14. For a discussion of the position of the Bush administration, see the 1990 Economic Report of the President, chapter 6.
15. For a more detailed discussion of this policy effort, see Richard Schmalensee and Robert N. Stavins, "The SO2 Allowance Trading System: The Ironic History of a Grand Policy Experiment," Journal of Economic Perspectives 27 (Winter 2013): 103–122.
16. These data reported in Schmalensee and Stavins are drawn from several different studies and are in 2000 U.S. dollars.
17. Intergovernmental Panel on Climate Change, Susan Solomon et al., eds., Summary for Policymakers, Climate Change 2007: The Physical Science Basis (Cambridge: Cambridge University Press, 2007).
18. Intergovernmental Panel on Climate Change, Rajendra K. Pachauri and Leo Meyer, eds., Climate Change 2014: Synthesis Report (Geneva: IPCC, 2014).
19. Nicholas Stern, The Economics of Climate Change: The Stern Review (Cambridge: Cambridge University Press, 2007).
20. William D. Nordhaus, "A Review of the Stern Review on the Economics of Climate Change," Journal of Economic Literature 45 (September 2007): 686–702.
21. For fuller exploration of these options, see William D. Nordhaus, "To Tax or Not to Tax: Alternative Approaches to Slowing Global Warming," Review of Environmental Economics and Policy 1 (Winter 2007): 26–44; and Joseph E. Aldy and Robert N. Stavins, "The Promise and Problems of Pricing Carbon: Theory and Experience," Journal of Environment and Development 21 (April 2012): 152–180.
22. Warwick J. McKibbin and Peter J. Wilcoxen, "The Role of Economics in Climate Change Policy," Journal of Economic Perspectives 16 (Spring 2002): 107–129.
23. National Research Council, The Hidden Costs of Energy (Washington, DC: National Academy Press, 2010).
24. Interagency Working Group on the Social Cost of Carbon, Technical Support Document: Social Cost of Carbon for Regulatory Impact Analysis, under Executive Order 12866 (Washington, DC: U.S. Government Printing Office, February 2010; May 2013); and Michael Greenstone, Elizabeth Kopits, and Ann Wolverton, "Developing a Social Cost of Carbon for US Regulatory Analysis: A Methodology and Interpretation," Review of Environmental Economics and Policy 7 (Winter 2013): 23–46.
25. Ted Gayer and W. Kip Viscusi, "Determining the Proper Scope of Climate Change Benefits in US Regulatory Analyses: Domestic versus Global Approaches," Review of Environmental Economics and Policy 10 (Summer 2016): 245–263.
26. This discussion of the greenhouse effect, particularly the graphical exposition, is based most directly on the article by William D. Nordhaus, "Global Warming: Slowing the Greenhouse Express," in Henry Aaron, ed., Setting National Priorities (Washington, DC: Brookings Institution Press, 1990), pp. 185–211.
27. Ibid.
28. An early exposition of the irreversible environmental choice problem appears in Kenneth J. Arrow and Anthony C. Fisher, "Environmental Preservation, Uncertainty, and Irreversibility," Quarterly Journal of Economics 88 (May 1974): 312–319.
29. Achievements in Stratospheric Ozone Protection, April 2017, https://www.epa.gov/sites/production/files/2015-07/documents/achievements_in_stratospheric_ozone_protection.pdf. The number of lives lost is not discounted.
30. The diagrammatic exposition that follows is based on the innovative work of Thomas C. Schelling, Micromotives and Macrobehavior (New York: W. W. Norton, 1978).
31. James T. Hamilton, "Pollution as News: Media and Stock Market Reactions to the Toxics Release Inventory Data," Journal of Environmental Economics and Management 28 (January 1995): 98–113; and James T. Hamilton, Regulation through Revelation: The Origin, Politics, and Impacts of the Toxics Release Inventory Program (New York: Cambridge University Press, 2005).
32. See Stephen Breyer, Breaking the Vicious Circle: Toward Effective Risk Regulation (Cambridge, MA: Harvard University Press, 1993).
33. For an overview of these issues, see W. Kip Viscusi and James T. Hamilton, "Are Risk Regulators Rational? Evidence from Hazardous Waste Cleanup Decisions," American Economic Review 89 (September 1999): 1010–1027; for a fuller description, see James T. Hamilton and W. Kip Viscusi, Calculating Risks?: The Spatial and Political Dimensions of Hazardous Waste Policy (Cambridge, MA: MIT Press, 1999).
34. These estimates have been converted to 2015 dollars based on cost per case estimates reported in Hamilton and Viscusi, Calculating Risks?
35. For discussion of the National Oceanic and Atmospheric Administration report on natural resource damages, see "Natural Resource Damage Assessments under the Oil Pollution Act of 1990," Federal Register 58, no. 10 (January 15, 1993): 4601–4614.
36. The following discussion is based on the "Exxon Valdez C.V. Survey Questionnaire, National Opinion Survey Main Interview Questionnaire," administered for the state of Alaska by Westat. The principal researchers included Richard Carson, Robert Mitchell, and other economists.
37. Catherine L. Kling, Daniel J. Phaneuf, and Jinhua Zhao, "From Exxon to BP: Has Some Number Become Better Than No Number?" Journal of Economic Perspectives 26 (Fall 2012): 3–26.

22 Product Safety

Although product safety concerns are not entirely new, they did not become a prominent part of the regulatory agenda until after the establishment of the social risk regulation agencies in the early 1970s. A pivotal event that led to the increase in public attention to product safety issues was the publication of Ralph Nader's Unsafe at Any Speed.1 Nader charged that the automobile industry devoted insufficient resources to product safety, as was evidenced, for example, in the rollover risks posed by the Chevrolet Corvair. This rear-engine car was marketed by Chevrolet in the early 1960s as a moderately priced compact that had some of the driving feel of a sports car. Its main disadvantage was that the car was highly unstable during cornering maneuvers, leading to a rash of deaths of Corvair drivers. Among the victims was Ernie Kovacs, who at the time was the host of a popular television comedy show.

Emergence of Product Safety Regulations

The product safety environment of the early 1960s was quite different from today's. There were no requirements that automobiles include safety belts, and in general, they did not. Debates over passive restraint systems and air bags had yet to surface, as the primary concern was whether there ought to be any safety belt requirements at all. Auto safety was not the only area where new regulations were emerging. In the mid-1960s, Congress instituted requirements pertaining to the hazard warnings that had to be included on cigarette packages. The initial requirements for protective packaging were also instituted in that period so as to make aspirin products and prescription drug packaging child resistant. Even the subsequent establishment of the social regulation agencies in the 1970s and the emergence of these new regulations did not lead to the same degree of sensitivity to safety concerns as at the present time.2 In the 1970s, for example, the Ford Motor Company marketed a subcompact that it called the Ford Pinto. This car was the brainchild of Lee Iacocca, then Ford's executive vice-president and later its president, who wished to develop a budget-priced car that would compete with cheap imports. The design of the Pinto was a hurried affair, with catastrophic results. The main safety defect of the car was the placement of the gas tank too near the rear of the car. As a result, the car was highly vulnerable to rear-end collisions. Ford was conscious of the potential risks and the extra $11 per car that would have had to be spent to eliminate the hazard, but it chose to stick with the cheaper design. The result was a series of fatal accidents involving the Ford Pinto, which caught fire on rear impact, causing severe burn injuries and deaths. The substantial damages that were ultimately awarded by the courts became part of the increased product liability price tag being imposed on the nation's businesses.

Current Safety Decisions

Firms currently contemplating product safety decisions no longer look solely to the market. Instead their efforts are governed by a complex set of regulations and judicial precedents. In some cases, these regulations are quite specific. The U.S. Department of Transportation regulations for municipal buses are almost tantamount to a comprehensive bus design. The focus of this chapter is on how such product safety regulations affect the various market participants. We also address how society should approach product safety regulation to achieve an appropriate balance between competing objectives.

Figure 22.1 summarizes the main mechanisms that influence product safety. Let us begin with the decisions by the producer. The producer's environment is governed by three sets of influences: the market through consumer decisions, government regulation, and tort liability. If the market were fully efficient, then there would be no need for social regulation or product liability litigation. In a perfect market, safer products will command a higher price. If consumers are unaware of the risks, however, the market outcome will not be ideal. As indicated in the discussion of the rationale for social regulation in chapter 19, assessing the degree and character of imperfect information is an area in which one should exercise caution. Even if consumers are not fully knowledgeable of all the implications of the product, that fact does not always mean that the market supplies too little safety. Indeed, the opposite result may pertain if consumers systematically overestimate the risk, as many consumers do for low-probability events called to their attention. Consequently, one must assess the particular context and nature of the risk involved before concluding that market incentives for safety will be adequate.

Figure 22.1 Product Safety Decision Process

Consumer Complaints

One form of information that is frequently used as an index of informational failure is the presence of consumer complaints.3 If consumers file complaints concerning the performance of a product, can we necessarily assume that a market failure exists and warrants some form of regulation? If consumers are filing complaints for products that fail to meet their reasonable expectations, it is then likely to be useful to examine the prevalence of complaints. However, in general it is difficult to distinguish the extent to which consumer complaints reflect a market failure or simply consumers who are unlucky. For example, suppose that consumers believe that there is a 75 percent chance that new Kia cars will run well, but they believe that there is a 25 percent chance that these cars will turn out to be lemons. Because of the car's low price, the consumers are willing to take this gamble. However, after the fact, the 25 percent of the consumers who get stuck with a lemon are no longer facing an uncertain prospect. Instead they must confront the certainty of definitely owning a bad car. Conditional on knowing that they have purchased a lemon, this 25 percent of the consumers may voice regret with their purchase, but on an ex ante basis, before they knew the outcome of the product quality lottery, they may have been making a sound decision from an expected utility standpoint.

One aspect of information provision that makes it more likely to be a clear-cut market failure is if the information has a public good nature. Firms, for example, may have little incentive to investigate the safety properties of electronic stability control because disclosure of this information will benefit all manufacturers of cars with electronic stability control, not simply the firm undertaking the research.

Factors Affecting Producer and Consumer Actions

Based on the impacts of the market, regulation, and tort liability, the producer will choose the products it will make and their characteristics, leading to the product attribute outcome indicated in figure 22.1. Consumers are also making decisions. Based on the information they have received from the media, their experiences, firms, and government regulation, they will make a purchase decision. Moreover, they will also make a product use decision that may be influenced by the incentives created by regulation and tort liability. Government regulations, for example, mandate the use of safety belts in many states and prohibit alcohol consumption by minors. Similarly, tort liability creates incentives for consumer behavior because people who drive while intoxicated, operate an all-terrain vehicle negligently, or otherwise do not exercise appropriate care can be found guilty of contributory negligence, thus reducing or possibly eliminating their prospective court award after an accident.

Product Performance and Consumer Actions

The combined impact of product attributes and the consumer actions results in the risk outcome and financial loss component in figure 22.1. The main issue that has been stressed by economists and generally ignored by the product safety professionals is that product safety is not simply an engineering issue. Individual behavior is relevant both to market decisions and to safety-enhancing actions that individuals may take. The performance of these products affects the experience base of information that consumers have when making subsequent purchases, which in turn will alter the market environment of the firm. From the standpoint of regulatory policy, two aspects of these relationships are particularly noteworthy.
First, government regulation is not the only economic influence affecting safety incentives. The market and tort liability also are of consequence. Regulation generally affects firms either through design standards that influence the technology or by addressing observed product defects, as in the case of safety recalls. In contrast, tort liability operates ex post facto. The courts do not address products that are potentially risky but for which there have been no adverse outcomes. The focus instead is on observed defects. People must be injured to collect for bodily injury losses. Consequently, the timing of the incentives generated by different institutions in terms of how they can potentially influence product safety is different. Regulations have a greater opportunity to operate in a more anticipatory manner.

Second, safety is the outcome of the joint influence of producers' safety decisions and user actions. The task of regulatory policy is to ascertain how best to influence both these determinants of safety outcomes rather than restricting our focus to technological solutions to safety.

Changing Emphasis of Product Regulation

The emphasis among the determinants of the product safety environment has shifted over time. The regulatory period of the 1970s concentrated on technological solutions to safety, such as mandated changes in automobile design. The decade of the 1980s marked a shift toward regulating consumer behavior, both through a wave of right-to-know policies and through requirements such as more stringent drunken driving rules and mandatory safety belt use. In addition, the role of tort liability has changed, as there had been an escalation in the role of tort liability through the 1970s, culminating in the explosion in liability insurance premiums in the mid-1980s.

Product safety issues had previously been an afterthought. These matters were in the domain of corporate public affairs offices, which dealt with product safety as part of their general public relations efforts. By the 1990s, product safety had become a central corporate concern. Safety and environmental regulations affecting the automobile industry were blamed by some critics for that industry's collapse. Tort liability awards for workers in the asbestos industry led to the bankruptcy of a major American firm and the elimination of the asbestos industry. Other entire industries have also disappeared because of product safety concerns. The rising liability costs for private planes, which averaged over $100,000 per plane in the 1980s and 1990s, have led aircraft companies such as Beech, Cessna, and Piper to all but eliminate their production of private airplanes. The manufacturing of diving boards for motel swimming pools is also a vanishing industry.

Some previous regulatory concerns have emerged in new guises. Consumer behavior is pertinent not only for matters such as drunk driving but also for distracted driving, as text messaging while driving increases the risk of accidents. The focus on technological determinants of safety now includes possible errors in automobile computer systems and the challenges posed by self-driving vehicles. The focus of the remainder of this chapter is on developing the economic tools needed to approach such regulatory issues in a sensible manner.

Premanufacturing Screening: The Case of Pharmaceuticals

Regulations that hit products at a particularly early stage are those that pertain to the premanufacturing screening of products. Firms selling medical devices cannot market these products without prior government approval. Pharmaceuticals, insecticides, and chemical products are all subject to extensive testing and labeling requirements before they can be marketed. Food products are also subject to premarket testing, although these inspection procedures are generally viewed as more lax than the other premanufacturing screening regulations already noted.
Notwithstanding food safety regulation, for example, imported produce drenched in pesticides and meat from animals treated with large doses of hormones and antibiotics are staples in the typical American diet. The most extensively analyzed premanufacturing screening effort is that of pharmaceutical regulation by

the Food and Drug Administration (FDA). Before a firm is permitted to market a pharmaceutical product, it must establish the safety and efficacy of that good. Although the regulatory requirements have evolved throughout the past 100 years, a pivotal event was the 1962 Kefauver-Harris amendments to the Food, Drug, and Cosmetic Act. The major stimulus for this regulatory regime was the effect of the morning sickness drug thalidomide on pregnant women in England, many of whom had babies with serious deformities caused by the drug. This drug had not been approved for use in the United States, but now, four decades later, it has been revived in the United States as an anticancer medication marketed under the brand name Thalomid. New drugs have potentially beneficial effects as well. New drugs account for over half of the 0.6 year gain in life expectancy of elderly Americans from 1996 to 2003, an 8 percent decrease in overall cancer mortality from 2000 to 2009, and substantial improvements in the quality of life.4

Figure 22.2 illustrates the FDA's general approach to safety and efficacy. Consider a drug directed at mortality risks. The increased probability of survival i indicated on the horizontal axis is the measure of efficacy. The reduced probability of survival r is indicated on the vertical axis. The concern with respect to safety and efficacy leads to a floor i* on the acceptable level of efficacy and a ceiling r* on the permissible reduced probability of survival. The acceptable combinations of safety and efficacy are indicated by the shaded region in figure 22.2. However, if the net effect of the drug on mortality were the matter of concern, all drugs on the line for which r = i would break even from that standpoint, and all points below that line represent drugs for which the efficacy of the drug outweighs the safety concerns. Excessive concern with safety on balance makes us less safe.

Figure 22.2 FDA Risk Balancing

Although restrictions on drugs with severe side effects that create an overall net health risk to society are clearly desirable, establishing an appropriate balance in the premanufacturing screening decision is a complicated task. The principal benefits of more stringent screening pertain to the decreased risk of approving a drug that might have adverse effects. This more stringent screening process also imposes costs. The first class of costs consists of the testing costs and the forgone opportunity to market a potentially profitable drug. The second class of costs is that to society, which may be deprived of potentially beneficial drugs with life-extending properties because the drugs are tied up in the testing process. These costs have been of substantial concern to those with terminal diseases, such as AIDS, for which a potentially effective drug that is possibly risky appears to be a good gamble. The regulator's task is to attempt to balance the competing concerns.

Weighing the Significance of Side Effects

Particularly in the case of pharmaceuticals, the simplistic policy objective of eliminating all risks associated with pharmaceutical products is clearly inappropriate. Perhaps the main distinguishing feature of prescription drugs is that they pose potential hazards and, as a result, their use must be closely monitored by a physician. The FDA requires the information pertaining to the drug to be summarized in a label containing hazard warnings. These warnings are reprinted in an annual volume, the Physicians' Desk Reference, distributed to all doctors throughout the United States. Inspection of almost any entry in the Physicians' Desk Reference will indicate the presence of potentially severe adverse effects or complications that may result from prescription drugs. As viewers of televised ads for prescription drugs can attest, these health impacts sometimes range from nausea to anaphylactic shock and death. The presence of such potential hazards with prescription drugs does not imply that FDA regulation has been remiss, only that the product has inherently risky attributes and that ultimately society must strike a balance between the benefits these drugs provide and the potential hazards they pose. Recognition of the need for balance does not completely resolve all policy issues at stake. The FDA must also decide on the stringency of the testing criteria.

Drug Approval Strategies

Table 22.1 summarizes the nature of the tradeoff.5 Suppose that the FDA is analyzing a new drug that is both safe and effective, but its properties are not yet known to the FDA. Ideally the FDA would like to approve such beneficial drugs, but there is the potential that it may reject a beneficial drug because of misleading test results. In addition, firms seeking approval for beneficial drugs may be discouraged by the costs associated with the lengthy approval process and may abandon a drug. Situations in which the FDA review process leads to the rejection of potentially beneficial drugs are designated Type I errors.

Table 22.1
Type I and Type II Errors for New Drug Applications

If the FDA were to adopt a more lenient drug approval strategy, then it would incur the competing danger of approving dangerous drugs that should not be marketed. Errors of this type are designated Type II errors. Ideally, the FDA wants to approve all beneficial drugs and reject drugs that are not safe and effective. Achieving both objectives simultaneously is generally not feasible, in large part because information about the drugs obtained through premarket testing is never fully informative. Moreover, because this information is costly to acquire, additional information imposes burdens on firms and on consumers who are deprived of potentially beneficial drugs. As a result, there is always a need to strike a balance between the Type I and Type II errors.

Although a few critics have charged that the FDA has been too lax, the consensus in the economics literature is that the FDA has placed too great an emphasis on Type II errors, as illustrated in figure 22.2. The FDA primarily seeks to avoid approving drugs with potentially adverse consequences, and it places insufficient weight on the Type I error of failing to approve beneficial new drugs. A political factor generating the motivation for this emphasis is that the victims of Type II errors are more readily identifiable than those of Type I errors. In the case of Type II errors, specific people will suffer adverse consequences that can be linked to the drug. In contrast, Type I errors generally have a more diffuse probabilistic effect. One percent more of the 3,000 patients suffering from a variant of heart disease may die if a new drug does not appear on the market, leading to a total of thirty expected deaths. However, the particular people who will die because the drug is not available may not be identifiable ex ante. Instead there is simply a treatment group population that will suffer a probabilistic loss in terms of expected health if the drug is not available. The identifiability of the parties suffering the adverse consequences becomes quite different when the lobbying group for the new drugs consists of a well-defined group with a clear-cut stake in the accelerated approval of such drugs. One such constituency that emerged in 1988 was that of AIDS patients, who sought more rapid approval of AIDS-related drugs. Two weeks after their protest event, "Seize Control of the FDA," the FDA expedited the approval process for AIDS drugs. Unfortunately, no comparable constituency exists for many drugs, such as those that prevent risks of heart attacks for the population at large.

The FDA not only must set the criteria for whether it accepts or rejects a drug, it also must determine the degree of premarket testing that it will require. Because full information is unattainable and the cost of information acquisition may be substantial, the extent of premarket testing must necessarily be bounded. Figure 22.3 summarizes the shape of the health and nonhealth costs of premarket testing. As the downward-sloping curve in the diagram indicates, the expected health costs from unsafe or ineffective drugs decline as the extent of premarket testing increases because the FDA is better able to avoid approving drugs that will

turn out to have adverse consequences. However, minimization of health costs is not our sole objective, since testing also imposes costs. The cost of the research and development and of the lost market opportunities stemming from delay in the drug's approval is indicated by the upward-sloping curve in the diagram. The sum of the cost to the firms and the health costs leads to the total cost function at the top of the diagram. Total costs are minimized at the amount of premarket testing indicated by t*, which establishes that amount of testing as the optimal level.

Figure 22.3 Cost Tradeoffs in Premarket Testing Source: Henry G. Grabowski and John M. Vernon, The Regulation of Pharmaceuticals: Balancing the Benefits and Risks (Washington, DC: American Enterprise Institute, 1983). Reprinted with the permission of the American Enterprise Institute for Public Policy Research, Washington, DC.
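The choice of t* in figure 22.3 is just a cost-minimization problem, and a small numerical sketch may make the tradeoff concrete. The cost curves below are purely hypothetical (an exponentially declining expected health cost and a linear testing cost invented for illustration; they are not estimates from the chapter or from Grabowski and Vernon):

```python
import numpy as np

# Hypothetical curves (illustrative only): expected health costs fall with more premarket
# testing, while firms' testing and delay costs rise roughly in proportion to it.
t = np.linspace(0.0, 10.0, 1001)          # extent of premarket testing (arbitrary units)
health_cost = 50.0 * np.exp(-0.5 * t)     # expected health costs of unsafe or ineffective drugs
firm_cost = 4.0 * t                       # R&D, testing, and forgone-sales costs of delay
total_cost = health_cost + firm_cost

t_star = t[np.argmin(total_cost)]         # the t* of figure 22.3: the cost-minimizing testing level
print(f"Optimal testing level t* ~= {t_star:.2f}, "
      f"minimum total cost ~= {total_cost.min():.1f}")
# Requiring testing beyond t* saves less in expected health costs than it adds in firm costs,
# so additional stringency past that point makes society worse off on net.
```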

Accelerated Drug Approval Process

Although economists have long urged that drug approval time be accelerated, the FDA has done little to change its overall policy. There are two principal tracks for drug approval that were established by the 1992 Prescription Drug User Fee Act. If the drug is going to provide a minor improvement over existing marketed therapies, it will receive the standard review, which is slated to target about ten months. If the drug is categorized as offering major advances in treatment or therapy where no comparable drug existed previously, the FDA can accord this drug a priority review, for which the review time is about six months. In very unusual situations of drugs targeted at serious diseases and medical needs that are not currently being met, the FDA can offer either accelerated approval or a fast-track process.

Despite the changes in the approval process, the slow pace of approving beneficial new drugs is still a continuing concern. Has excessive concern about safety led to a problem of drug lag in which drugs are available in Europe and other countries before they are in the United States? For example, beta blockers, which are used to treat arrhythmias, angina, and hypertension, were approved in the United Kingdom before they were approved in the United States. The advanced colorectal, head, and neck cancer drug Eloxatin (oxaliplatin) was approved by twenty-nine other countries before being approved in the United States in 2002. These case studies may identify outliers, so economists have attempted to develop more comprehensive perspectives on the possible drug lag problem. The median approval time for standard drugs dropped from twenty-two months in 1993 to twelve months in 2002. Similarly, for priority drugs, the median approval time dropped from fourteen months in 1993 to six months in 2002. Mary Olson found that before the Prescription Drug User Fee Act, the percentage of new drugs first introduced in the United States was only 28 percent, but that figure rose to 40 percent in 1992–1997 and then increased to 50 percent after the passage of the 1997 Food and Drug Administration Modernization Act.6 There no longer appears to be as great a lag before drugs approved in other countries become available in the United States.7 One of the main factors leading to this apparent shift in FDA policy is that accelerating the approval of drugs that address life-threatening diseases not only has a well-defined constituency, but such drugs also pose fewer potential adverse health risks because of the high risk of mortality for this group of patients.

Although the widespread consensus is that this shift in policy was attractive, most FDA decisions are not as clear-cut. Even if officials are willing to commit to a particular tradeoff, it can be very difficult to ascertain what these tradeoffs are. In terms of figure 22.3, the cost to the firms can be estimated reasonably reliably, but there is often substantial uncertainty regarding the expected health costs to the population. In situations in which we do not know the health cost curve, FDA officials may be held responsible for their judgments regarding the entire shape of this curve if they adopt an aggressive drug approval policy. The incentives for bureaucratic risk aversion are clear.

Behavioral Response to Product Safety Regulation

Seat belt usage has played a prominent role in the regulation literature. Use of automobile safety belts reduces the fatality risk to the passenger who buckles up, but it entails costs of time and discomfort. Based on these tradeoffs, Glenn Blomquist estimated that the implicit value of a statistical life implied by the decision to use a seat belt was $1.4 million.8 More recent studies have indicated similar relationships between seat belt usage and the value of a statistical life and risk-taking behavior more generally. Analysis of stated preference estimates of the value of a statistical life found that whereas the average sample had a value of a statistical life of $7.7 million, those who use seat belts only sometimes or never had a value of a statistical life of $5.7 million.9 Similarly, those who engage in the high-risk personal activity of smoking cigarettes also tend to be more likely to engage in risky driving behavior by not using seat belts.10 Individual risk preferences tend to be consistent across different domains. These various estimates, however, only reflect risks to the passengers. Safety belts also raise potential externality concerns pertaining to the impact on insurers, other vehicles, and pedestrians.

The usual approach to product safety regulation has been to alter the technology of the product to enhance safety.
If the behavior of the users of the product remains unchanged after the mandated safety device is instituted, then we will reap the benefits of the engineering controls. In the case of automobile safety, for example, engineering experts generated a variety of predictions of substantial gains in safety that would result from the wave of initial safety regulations. These experts predicted a 0.5 percent reduction in occupant death rates from dual braking systems, a 0–2.5 percent reduction in occupant death rates from improved

windshields, a 4–6.5 percent reduction in occupant death rates from an energy-absorbing steering column, a 7–16 percent reduction in occupant death rates from lap seat belts, and a 0.25–1 percent reduction in occupant death rates from shoulder belts. Similar projections accompanied the introduction of air bags and antilock brakes. The benefits derived from these improvements all hinge on a key assumption—the behavior of the driver will remain unchanged. In an influential economic analysis, Sam Peltzman hypothesized that driver behavior would not remain unaffected.11 This theoretical hypothesis in turn has led to extensive empirical work by Peltzman and others, including Glenn Blomquist. In the case of safety belts, for example, the safety improvement from the new technology would reduce the potential hazards to the driver of driving fast. As a result, the relative benefits of taking the safety precaution of driving slowly would decline once an individual had buckled up. Faster speeds would become more desirable. Figure 22.4 illustrates Peltzman’s reasoning diagrammatically. Suppose that initially the driver is at point A, where the line 0A gives the relationship between driving intensity and the driver’s risk of death before regulation. With the use of safety belts, the risk curve drops to 0C. After the introduction of the safety devices, if the driver does not change his degree of precautions, he will be at a death risk at point B. However, because the marginal benefits to the driver of taking the precaution have been reduced, he will increase his driving intensity to a point such as C, thus muting some of the effect of the safety belts.

Figure 22.4 Relationship of Driving Intensity to the Regulatory Regime

The factors influencing the movement from point B to point C are based on elementary economic principles. Figure 22.5 indicates the marginal benefits and marginal costs to the driver of driving slowly. For simplicity, suppose that the marginal cost of driving slowly has a constant value per unit time, leading to the flat marginal cost curve MC. Originally the driver faced a marginal benefit curve of MB0 for driving slowly. However, after the introduction of the safety device—in this case, auto safety belts—the marginal benefit of driving slowly has been reduced to MB1, assuming that the belt is worn. As a result, the optimal degree of driving slowness falls, which is to say that the driver now finds it desirable to drive faster once she is using devices that will decrease her risk of injury or property damage from an accident.

Figure 22.5 Choice of Driving Speed
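The marginal benefit and marginal cost reasoning in figure 22.5 can be traced with a small numerical sketch. All of the numbers below are hypothetical, chosen only to illustrate the mechanism rather than to reproduce Peltzman's or Blomquist's estimates: when belts lower the risk attached to any given driving intensity, the chosen intensity rises, and the realized risk falls by less than the engineering prediction.

```python
# Hypothetical numbers chosen only to illustrate the mechanism in figures 22.4 and 22.5;
# none of them come from the chapter.
VSL = 9.0e6            # driver's value of a statistical life ($)
risk_slope_0 = 1.0e-7  # annual fatality risk per unit of driving intensity, unbelted
belt_effect = 0.40     # assume belts cut the risk at any given intensity by 40 percent

# Marginal benefit of driving intensity is assumed to decline linearly: MB(s) = a - b*s.
a, b = 2.0, 0.02

def chosen_intensity(risk_slope):
    """Intensity at which the marginal benefit of speed equals its marginal expected fatality cost."""
    return (a - VSL * risk_slope) / b

s0 = chosen_intensity(risk_slope_0)                       # unbelted choice
s1 = chosen_intensity(risk_slope_0 * (1 - belt_effect))   # belted choice: faster driving

risk0 = risk_slope_0 * s0
risk1 = risk_slope_0 * (1 - belt_effect) * s1
print(f"Intensity rises from {s0:.0f} to {s1:.0f}")
print(f"Fatality risk falls by {1 - risk1 / risk0:.0%}, "
      f"short of the {belt_effect:.0%} drop predicted if behavior were unchanged")
```

With a different assumed benefit curve, for example one whose marginal benefit is proportional to 1/s, the offset would be complete, which is why the size of the offset is ultimately an empirical question.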

Consumer’s Potential for Muting Safety Device Benefits The overall economic story is consequently that once individuals buckle up, they will have an incentive to drive faster, thus muting and possibly offsetting the beneficial effects of the safety device. This line of argument has long aroused considerable controversy because of the surprising nature of the results, as well as the extent of the empirical effect that has been claimed for it. The underlying theory that some offsetting behavioral response occurs is quite sound and is based on the same kinds of marginal benefit and marginal cost reasoning that is fundamental to economics. If one recast

the safety belt issue in a somewhat different context, then most individuals would accept the economic mechanisms at work. Suppose that instead of making the car safer by introducing safety belts, we make driving riskier by making the streets icy. Few would question that it is optimal for people to drive with greater care and slower speeds when the roads are icy and slick than when they are dry. Once the ice melts and the streets return to their dry condition, one would expect people to drive faster. In essence, what safety belts do is take us from a risky regime (such as that of icy streets) to a safer regime (such as dry streets), and the benefits from exercising driver care will decline. How much of a decline will occur has long been a matter of dispute. In Peltzman’s original paper, he did not claim on theoretical grounds that the effect of safety belts would necessarily be counterproductive, although it could be. However, in his empirical analysis of both time-series and cross-sectional data pertaining to aggregate motor vehicle death rates, he was unable to find any statistically significant effect of the introduction of safety belts. He concluded that the behavioral response effect offset the technological gains. This issue has become an ongoing controversy in the automobile safety literature. The current consensus appears to be the following.12 Automobile safety regulations have reduced the risks to drivers and motor vehicle occupants. Whatever offsetting behavior occurs only diminishes the safety-enhancing benefits of seat belts but does not eliminate these benefits. However, evidence suggests that drivers wearing safety belts do drive faster, inasmuch as there has been an increase in the fatalities of motorcyclists and pedestrians with increased safety belt utilization. Thus the overall mechanism described here has support, even though the magnitude of the offset effect is not as great as the early studies claimed. There is less general agreement on the extent of the offset from the deaths of motorcyclists and pedestrians. Although some observers adhere to the view that these effects completely offset the beneficial impacts of automobile safety regulation, the mainstream view is that on balance safety regulation has a riskreducing effect, with a muting of the impact of safety regulations by the decrease in care exercised by drivers. This role of individual responses to regulations is not restricted to safety belt issues. In particular, other product safety regulations that rely on changes in the technology similarly will interact with individual usage of the product to govern the ultimate product safety that will be experienced by the consumer. Similar offsetting concerns have also been raised with respect to other regulatory interventions, such as energy efficiency standards. Will, for example, people who purchase energy efficient cars now engage in offsetting behavior in their other energy usage decisions? As in the case of seat belts, it is unlikely that a rational balancing of the benefits and costs of these decisions will lead to net counterproductive effects. The Lulling Effect A case that is particularly intriguing is that of safety caps. For more than four decades, the government has imposed child-resistant safety cap requirements for products such as aspirin, prescription drugs, and selected other hazardous products (such as antifreeze). 
Safety caps not only reduce the benefits to parents of putting medicines in a location that is difficult for children to reach, but they also may give parents a false sense of security. In a phenomenon that Viscusi terms the lulling effect, there may be a misperception on the part of consumers of the efficacy of the safety device, leading to an additional decline in safety precautions.13 Consumer product safety commissioners and the public at large routinely refer to these caps as being childproof, whereas in fact they are not. Figure 22.6 indicates the impact of these various influences. Suppose that before the advent of safety caps, the expected loss suffered by the consumer from any given level of safety effort is given by the curve

EL0. Suppose that the consumer originally was at point A, which reflects the optimal amount of expected loss that the consumer is willing to incur, given the costs associated with undertaking safety efforts. After the advent of the safety caps, the expected loss associated with any degree of safety precautions declines to EL1. If the consumer does not change his precautions, the postregulation safety effort will be at point B. For the usual economic preferences, one will necessarily decrease the level of safety-related effort, so that the individual will be to the left of point B. If this decrease in safety precautions is sufficient, there will be no effect of the safety device on the safety outcome, and we will be at point C. For the safety device to be counterproductive, leading to an outcome such as point F, one must impose very severe restrictions on the shape of individual preferences. Such a counterproductive effect of regulations is conceivable, but it requires that very special and unusual assumptions be met. It is for that reason that one should not expect the behavioral responses to regulatory policies to be fully offsetting when consumers have accurate perceptions of the efficacy of the regulatory intervention.

Figure 22.6 Safety Mechanisms and the Choice of Precautions Source: W. Kip Viscusi, “The Lulling Effect: The Impact of Child-Resistant Packaging on Aspirin and Analgesic Ingestions,” AEA Papers and Proceedings 74 (May 1984): 324–327.
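A small numerical sketch of the precaution choice behind figure 22.6 may help fix ideas. The loss function, effort cost, and the 50 and 90 percent effectiveness figures below are all hypothetical and are not drawn from the poisoning data discussed in this section; the sketch simply traces the mechanism, in which accurate beliefs about the caps reduce precautions but still improve outcomes, whereas "childproof" beliefs can leave families worse off.

```python
import math

# Hypothetical expected-loss model, invented only to trace out points A, C, D, and F
# in figure 22.6; none of these numbers come from the text.
EFFORT_COST = 10.0     # cost per unit of parental precaution effort
BASE_LOSS = 1000.0     # expected poisoning loss with zero precautions and no safety cap
TRUE_CUT = 0.50        # caps truly cut the expected loss at any effort level by 50 percent
PERCEIVED_CUT = 0.90   # a "childproof" belief: parents think the cut is 90 percent

def expected_loss(base, effort):
    """Expected loss declines with precaution effort, with diminishing returns."""
    return base / (1.0 + effort)

def chosen_effort(perceived_base):
    """Effort minimizing perceived expected loss plus effort cost (interior optimum)."""
    return max(math.sqrt(perceived_base / EFFORT_COST) - 1.0, 0.0)

scenarios = [
    # label,                        perceived base loss,              actual base loss
    ("no caps (point A)",           BASE_LOSS,                        BASE_LOSS),
    ("caps, accurate beliefs",      BASE_LOSS * (1 - TRUE_CUT),       BASE_LOSS * (1 - TRUE_CUT)),
    ("caps, 'childproof' beliefs",  BASE_LOSS * (1 - PERCEIVED_CUT),  BASE_LOSS * (1 - TRUE_CUT)),
]
for label, perceived, actual in scenarios:
    effort = chosen_effort(perceived)
    print(f"{label:30s} effort = {effort:4.1f}   actual expected loss = {expected_loss(actual, effort):6.1f}")
# Accurate beliefs: effort falls but losses still fall.  Overestimated efficacy: effort falls
# so far that the actual expected loss ends up above its no-cap level, the point F outcome.
```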

Effect of Consumer’s Perception of Safety Device Efficacy However, matters are quite different if consumers do not accurately perceive the efficacy of the safety

device. If they believe the safety mechanism reduces the perceived risk from the curve EL1 to EL2, then we may end up at a counterproductive outcome such as point D much more easily. Consumers believe they are at point D, whereas they are actually at point F. The danger of safety mechanisms is not simply that of a falloff in the optimal level of consumer behavior but also an inducement to misperceptions regarding the risk, leading to a further drop-off in safety precautions. In the safety cap case, there is detailed evidence regarding the character of consumer precautions before and after the caps went into effect.14 Poisonings from the contents of safety cap bottles are the main source of poisonings from aspirin and analgesic products. In most of these instances, the bottles have been left open by consumers. The rash of open-bottle poisonings is not surprising, inasmuch as there have been widespread complaints regarding the difficulty of grappling with the caps. In an effort to deal with these caps, many consumers have responded by simply leaving the caps off altogether. The poisoning context also provides some intriguing evidence regarding possible spillover effects for other products. If consumers undertake a common safety precaution for their medicines, then the introduction of safety caps may lead to a decrease in safety precautions overall. In the case of products with safety caps, the net effect of the decrease in precautions may be simply to have no observable effect on safety, which has in fact been the case for aspirin. For products not covered by safety caps, the decrease in the overall level of precaution taking will increase the risk from these products. Such an increase in risk has in fact been observed, as there has been an observed adverse spillover effect on other products. After the introduction of safety caps for aspirin bottles, the poisoning rates for analgesic products such as Tylenol escalated from 1.1 per 1,000 in 1971 to 1.5 per 1,000 in 1980. Taking into account the rise in the sales of Tylenol and related products accounts for only half of this increase. The overall implication of this analysis is that there have been 3,500 additional poisonings annually of children under five that resulted from the decreased safety precautions after the advent of safety caps. The presence of such behavioral responses does not imply that all government regulations are bad or that these particular regulations are necessarily ill conceived. What the responses do suggest, however, is that one cannot view safety as simply being a matter of engineering controls. Individual behavior plays a key role. These responses are not restricted to safety caps and seat belts. Empirical evidence suggests that the newly introduced safety mechanism for butane cigarette lighters also reduces parental care. However, the safety mechanisms for the leading lighter brands are sufficiently difficult for young children to operate that on balance the devices are safety enhancing.15 Regulations that attempt to influence safety behavior through hazard warning programs, safety training efforts, and other efforts should be regarded as a central component of any product safety regulatory strategy. Costs of Product Safety Regulation: The Automobile Industry Case A principal target of product safety regulation has been automobiles. Dozens of regulations affect the safety of autos, as well as their environmental impact from emissions. 
Much of the impact of these regulations began in the 1970s—the decade in which the major wave of auto regulations emerged. Let us begin with a roster of the initial wave of regulations affecting auto safety: occupant protection requirements, steering column protection, seat belt assemblies, side door strength, bumper requirements, fuel system integrity standards, and a variety of other specific safety standards. These safety devices add weight to the car, which in turn imposes a fuel economy penalty; the standard requiring bumpers to withstand low-speed crashes is a chief contributor to this greater weight. The total costs exceeded $1,000 per car, and these costs are in addition to the costs imposed by emissions standards.

What developed was a situation in which both safety and environmental regulations were imposing substantial and increasing costs on the automobile industry in the 1970s. This decade was also a period of dramatic change because of the inroads being made by foreign imports of small cars in the wake of the dramatically higher gasoline prices of the late 1970s. U.S. sales of Toyota, Honda, and Nissan (then known as Datsun) soared. Some critics charged that government regulation was undermining the previously superior position of the U.S. automobile industry by requiring a shift in automobile technology, thus making much of the U.S. production system and U.S. automobile designs obsolete. With the sunk costs in the earlier designs no longer useful, U.S. producers sacrificed much of their previous advantage over foreign competitors, making it easier for foreign firms to compete in American markets.

Even political observers who did not blame government regulation for playing a central role in the demise of the U.S. automobile industry regarded the relationship between the health of the industry and government regulation as an important concern. The Carter administration, for example, undertook a substantial financial bailout of the Chrysler Corporation in an effort to avoid its bankruptcy. It also initiated some modest efforts to target regulatory relief at the industry. On taking office, the Reagan administration instituted a sweeping program of regulatory relief for the automobile industry. Table 22.2 summarizes the components of this effort. While this overhaul of regulatory initiatives took place several decades ago, the breadth of the regulatory interventions at that time was quite impressive. Government regulations pertained to almost every aspect of the design of automobiles, ranging from speedometer standards to emissions requirements. Moreover, the price tags associated with many of these regulations are on the order of hundreds of millions of dollars. What is perhaps most impressive is the broad range of the twenty-nine regulations listed in table 22.2 and the fact that a single industry could account for so much regulatory activity.

Table 22.2 The Reagan Administration's Auto Reform Package

Rules Acted on

Issue: Action (Date of Completion)
Gas-tank vapors: Declined to order new controls on cars (April 1981)
Emissions tests: Streamlined certification of industry tests on vehicles (October 1981, November 1982); raised allowable "failure rate" for tests of light trucks and heavy-duty engines from 10 to 40 percent (January 1983); reduced spot checks of emissions of vehicles on assembly lines by 42 percent and delayed assembly-line tests of heavy-duty trucks until 1986 (January 1983)
High-altitude autos: Ended assembly-line tests at high altitude, relying instead on industry data (April 1981); allowed industry to self-certify vehicles as meeting high-altitude emission standards (April 1981)
Pollution waivers: Consolidated industry applications for temporary exemptions from tougher emissions standards for nitrogen oxide and carbon monoxide (September 1981)
Paint shops: Delayed until 1983 tougher hydrocarbon pollution standards for auto paint shops (October 1981)
Test vehicles: Cut paperwork required to exempt prototype vehicles from environmental standards (July 1982)
Driver vision: Scrapped existing 1981 rule and second proposed rule setting standards for driver's field of view (June 1982)
Fuel economy: Decided not to set stiffer fuel economy standards to replace those expiring in 1985 (April 1981)
Speedometers: Revoked rule setting standards for speedometers and tamper-resistant odometers (February 1982)
Tire rims: Scrapped proposal to set safety standards for explosive multipiece tire rims (February 1982)
Brake tests: Eased from 30 to 20 percent the steepness of grades on which post-1984 truck and bus brakes must hold (December 1981)
Tire pressure: Scrapped proposal to equip vehicles with low-tire-pressure indicators (August 1981)
Battery safety: Scrapped proposal to set standards to prevent auto battery explosions (August 1981)
Tire safety: Revoked requirement that consumers be told of reserve load capacity of tires; eased tire makers' reporting requirements (June 1982)
Antitheft protection: Eased antitheft and locking steering wheel standards for open-body vehicles (June 1981)
Fuel economy: Streamlined semiannual reports of automakers on their progress in meeting fuel economy goals (August 1982)
Tire ratings: Suspended rule requiring industry to rate tires according to tread wear, traction, and heat resistance (February 1983)
Vehicle IDs: Downgraded from standard to administrative rule the requirement that all vehicles have ID numbers as an aid to police (May 1983)
Seat belt comfort: Scrapped proposal to set standards for seat belt comfort and convenience (June 1983)

Rules with Uncertain Futures

High-altitude emissions: Failed to revise Clean Air Act order ending weaker high-altitude emissions standards in 1984; eased through regulatory changes
Emissions reductions: Failed to revise Clean Air Act order to cut large trucks' hydrocarbon and carbon monoxide emissions by 90 percent by 1984 (standard was delayed until 1985); failed to ease Clean Air Act order reducing nitrogen oxide emissions from light trucks and heavy-duty engines by 75 percent by 1984 (regulatory changes under study)
Particulate pollution: Delayed a proposal to scrap specific particulate standards for some diesels in favor of an average standard for all diesels; stiffer standards delayed from 1985 to 1987
Methane standards: Shelved, because of "serious" cost questions, a plan to drop methane as a regulated hydrocarbon
Passive restraints: Delayed and then revoked requirement that post-1982 autos be equipped with passive restraints; revocation overturned by Supreme Court in June 1983
Bumper damage: Cut from 5 to 2.5 mph the speed at which bumpers must resist damage; change is on appeal

Source: Michael Wines, "Reagan Plan to Relieve Auto Industry of Regulatory Burden Gets Mixed Grades," National Journal, July 23, 1983, pp. 1534–1535. Reprinted by permission of the National Journal.
Note: The source table also reports estimated five-year savings to industry and to the public (in millions of dollars) for each action; dashes indicate regulations for which there are no cost estimates.

Although there has been no comparable wave of motor vehicle regulations since that time, the U.S. Department of Transportation has remained an active regulator of vehicle characteristics, ranging from roof crush resistance and rearview mirror standards to fuel economy requirements. Over the period from October 1, 2005, to September 30, 2014, new National Highway Traffic Safety Administration regulations imposed costs of $6.6 billion to $13.0 billion. In addition, the fuel economy regulations jointly proposed by the EPA Air Office and the Department of Transportation had estimated costs of $33.0 billion to $60.0 billion. These regulations also had estimated benefits in excess of the costs.

The potential benefits of regulations are illustrated by the air bag regulations. The addition of such safety devices is a valued attribute for consumers. Once air bags were introduced, consumers were willing to pay a higher price for the cars. Estimates by Rohlfs, Sullivan, and Kniesner based on the quasi-experiment associated with the gradual introduction of air bags into the U.S. market found that consumers' valuation of the risk reduction generated by air bags was consistent with a median value of a statistical life ranging from $9 million to $11 million.16 Some consumers, however, did not value the technology and placed a negative value on air bags. The frequency of these negative values is likely to increase after the fatal incidents involving the Takata air bags, which sometimes sprayed deadly shrapnel on the driver when the air bag activated.

The automobile industry will remain a principal target of product safety regulation. The purpose of these efforts is not to drive a leading American industry out of business but rather to address the main source of product safety and environmental problems. Motor vehicles accounted for more than one-fourth of all accidental deaths in the United States in 2014, and it is inevitable that continued regulation of automobiles and other motorized vehicles will remain a prominent policy concern.

In addition to being highly regulated, automobiles are the target of substantial litigation. Nevertheless, one should not lose sight of the fact that market forces also play a substantial role in giving automobile companies incentives to provide cars with characteristics in line with consumer preferences. For example, the prices of used cars are higher for cars that are more powerful, retain their resale value better, have a better maintenance rating, have automatic transmissions, and have similar valued characteristics. Moreover, in their used car purchases, consumers pay a higher price for safer cars, as reflected in the accident history for these model lines, with the implicit value per expected life saved ranging from $5.2 million to $7.5 million. These implicit value of a statistical life estimates are somewhat lower than the values implied by the purchases of cars with air bags as well as the labor market estimates discussed in chapter 20.17

Trends in Motor Vehicle and Home Accident Deaths

The purpose of regulating home and automobile safety is to reduce the accident rate. Focusing on these two classes of injuries can yield substantial dividends. In 2013, motor vehicles accounted for 26 percent of all accidental deaths, and home accidents, a category that excludes motor vehicle accidents, accounted for an additional 51 percent. The remaining accidents occur at work or in public places, such as falls in public places, deaths from firearms, and crashes involving planes and trains. The administrators of the regulatory agencies responsible for auto safety and home accidents generally cite the improvements in the trends for these accidents as evidence of the efficacy of the agency. Annual press releases announcing decreases in accident trends portray these declines as evidence of the agency's success. As will be noted in chapter 23, this approach has also been used by Occupational Safety and Health Administration officials when defending the accomplishments of their agency.

Accident Rate Influences

Such evidence does little to show that regulation has had a demonstrable effect on safety. More important, the greater affluence of society has led consumers to demand greater safety from their products. This wealth effect alone should continue to lead to safety improvements. These safety improvements are in evidence in auto accident rate trends, provided that one defines the risk measure properly. This definitional issue arises most particularly with respect to motor vehicles, where two types of variation matter most. The first pertains to the intensity of usage of motor vehicles. People drive cars more than they did ninety years ago, so we cannot simply compare death rates across the population but must take into account the intensity of the product's use. One mechanism for doing so is to look at the automobile accident rate on a mileage basis rather than a population basis. Doing so yields a quite different picture of the trend in automobile safety. Automobile death rates declined from 21.65 per 100 million vehicle miles in 1923 to 1.16 per 100 million vehicle miles in 2014, whereas the accident rate on a population basis has been more stable, declining from 16.5 deaths per 100,000 population in 1923 to 11.1 deaths per 100,000 population in 2014.
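To make the rate-definition issue concrete, the following sketch is a hypothetical illustration: the fatality, population, and mileage figures are round numbers assumed purely for the example, not the historical data cited above. It computes the same fatality counts on a population basis and on a vehicle-miles basis.

```python
# Illustrative only: round, assumed figures (not the historical data cited in the text).
# The point is that the choice of denominator (population vs. vehicle miles traveled)
# changes the picture of safety trends when driving intensity rises over time.

def deaths_per_100k_population(deaths, population):
    return deaths / population * 100_000

def deaths_per_100m_vehicle_miles(deaths, vehicle_miles):
    return deaths / vehicle_miles * 100_000_000

# Hypothetical early year: fewer miles driven per person.
early = dict(deaths=20_000, population=120_000_000, vehicle_miles=100e9)
# Hypothetical later year: population and especially driving have grown.
late = dict(deaths=35_000, population=320_000_000, vehicle_miles=3_000e9)

for label, yr in [("early", early), ("late", late)]:
    per_pop = deaths_per_100k_population(yr["deaths"], yr["population"])
    per_vmt = deaths_per_100m_vehicle_miles(yr["deaths"], yr["vehicle_miles"])
    print(f"{label}: {per_pop:.1f} deaths per 100,000 population, "
          f"{per_vmt:.2f} per 100 million vehicle miles")

# The population-based rate changes far less than the mileage-based rate, which
# falls sharply once the growth in driving is taken into account.
```

With these assumed figures, the population-based rate falls only modestly (from about 16.7 to 10.9 per 100,000) while the mileage-based rate falls sharply (from 20 to about 1.2 per 100 million vehicle miles), which is the same qualitative pattern as the historical figures in the text.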
The second type of variation involves temporary shifts in the age structure of the population, which also may influence the accident rate. The rise in the proportion of teenage drivers in particular eras, for example, has contributed to temporary swings in the motor vehicle accident rate. Changes in the character of highways and in driving speeds also affect motor vehicle accident rates, even though the safety of the car itself may not have changed. Decreases in vehicle miles driven during economic recessions influence both the total number of fatalities and the death rate. In general, one should take all these factors into account when analyzing safety trends.

The Decline of Accident Rates

Since the 1930s, motor vehicle accident death rates per 100 million miles driven have declined quite steadily. The decline preceded the advent of government regulation, which came roughly five decades ago. While there was some flattening of the drop-off in motor vehicle accident rates in the 1960s, that decade also marked an increase in the total motor vehicle death risk arising from the changing age structure of the population and increased driving due to the growth of the interstate highway system.

The decline in home accident rates was steady from the 1940s through the early 1990s, after which home accident rates have risen. Once again, the decline in the accident rate preceded the establishment of the Consumer Product Safety Commission. Statistical studies of the Consumer Product Safety Commission have failed to find any statistically significant impact of its efforts on product safety.18 These studies do not imply that no regulations of that agency have ever been effective, only that their impact has been sufficiently small that their influence is not evident in national accident statistics. The rise in home accident rates since 1991 is not because regulations have become less effective or products more dangerous. Instead, the changing age structure of the population is a key driver of these trends: individuals aged 75 and older are particularly vulnerable to fatalities associated with falls, choking, fires, and motor vehicles.

The econometric studies of motor vehicle accidents have yielded more optimistic results. Almost all these studies have found an acceleration in the rate of decline in motor vehicle accident death rates beginning with the advent of regulation in the 1970s and 1980s. Controlling for other factors indicates that much of this decline is due to the impact of safety regulations. The magnitude of any offsetting behavioral impacts has been more controversial. From an economic standpoint, there is no reason to expect government regulations to be ineffective, provided that the offset from the decrease in consumer precautions is not too great; the regulations are not so limited in their ability to promote safety that no beneficial effect should be observed. The main surprise from the earliest studies of these regulations, which indicated the absence of a demonstrable effect, stems primarily from the imbalance between the initial projected impacts of these agencies and their observed impacts. This disparity suggests that other economic behavior, such as users' precautions, should also be taken into account.

The existence of beneficial safety effects also does not imply that the benefits of the regulation exceed the costs. However, in the realm of safety regulations, the benefit amounts calculated by the agency exceed the estimated costs, and there have been fewer economic critiques of benefit-cost analyses of the safety regulations than of the fuel economy regulations. The critical task from the standpoint of long-run regulatory policy is to ensure that on balance, these regulations are in society's best interests. By the early 1980s, the price tag for safety and emissions regulations had reached over $2,000 per car, but there were observable benefits as well.
Econometric estimates indicated that the automobile death risk would be as much as 40 percent greater in the absence of such safety regulations. By some calculations, these regulations also produced benefits in excess of their costs, in large part because the safety regulations did not impose stringent deadlines for the adoption of specific technologies. Instead, they proceeded on an incremental basis, so that the gradual development of the cluster of safety standards reflected evolving knowledge of the changing safety technologies. Fuel economy standards may be more problematic from a benefit-cost standpoint, as the discussion in chapter 24 indicates.

The Rise of Product Liability

Direct government regulation of product safety is not the only influence on firms' safety decisions. Increasingly, product liability awards by the courts have played an important role in establishing safety incentives.19 Firms purchase liability insurance to address the liability risks associated with their products as well as legal liability for property damage or personal injury to others. The rise in companies' liability costs is reflected in the trends in table 22.3. In the 1960s, the total premiums paid for general liability insurance, such as product liability coverage, were only $746 million. The past fifty years, however, have seen a dramatic expansion in the role of liability. Changes have included shifts in the criteria used to assign liability to corporations. The earlier negligence doctrine has been replaced by a strict liability doctrine that requires companies to bear the costs of product injuries in a greater share of situations. In addition, there has been a tremendous expansion in hazard warnings cases and in the concept of what constitutes a product design defect.

Table 22.3 Trends in General Liability Insurance Premiums

Year    Premiums ($ million)
1960    746
1970    1,658
1980    6,612
1985    11,544
1988    19,077
1990    18,123
1995    18,582
2000    19,917
2005    42,812
2010    37,853
2013    44,712

Source: A. M. Best and Co., Best Aggregate and Averages, various years; Insurance Information Institute, The Fact Book 2003: Property/Casualty Insurance Facts, 2003; and Insurance Information Institute, The Insurance Fact Book, 2015.
Note: The commercial line statistics starting in 2005 represent the sum of other liability insurance and products liability insurance.

The net effect of these changes was to increase general liability premiums to $1.7 billion in 1970, and they rose even further to $6.6 billion in 1980. A substantial expansion occurred in the mid-1980s, as liability premiums jumped to $11.5 billion in 1985, reaching $19.1 billion in 1988. Since that time, costs have risen to $44.7 billion annually.20 Even these impressive costs do not capture the full liability cost to firms. Corporations also must pay for the cost of extensive legal staffs. Moreover, liability judgments may have costs that are not covered by insurance, such as punitive damage awards and awards in excess of the policy limits. Many industries, such as the pharmaceutical industry, are unable to obtain any liability insurance coverage at reasonable rates from conventional insurers. As a result, they have established separate insurance mechanisms outside the standard industry channels. These insurance costs are in addition to the amounts shown in table 22.3.

The impact on businesses of this product liability revolution has been substantial. Pharmaceutical companies have responded by withdrawing vaccines that have been hard hit by liability costs. A National Academy of Sciences panel blamed the lagging research and development of contraceptive devices by U.S. firms on the rise in liability costs.21 The domestic industry manufacturing aircraft for private use has all but disappeared. Between 15 and 25 percent of the purchase price of ladders goes to pay for liability costs, and 17 cents of every fare dollar on the Philadelphia mass transit system goes to pay for liability insurance expenses.

One of the seminal legal events that led to this product liability revolution was the emergence of the strict liability doctrine, which replaced the earlier negligence criteria. Under a negligence test, a firm is required to provide the efficient level of product safety. Consider the statistics in table 22.4, which give the benefits and costs of different levels of product safety. The objective is to set the level of product safety so as to minimize total social costs. This optimum is achieved at the medium degree of safety, which yields the lowest total social cost, $100,000. Higher and lower levels of safety each fare worse.

Table 22.4 Summary of Social Costs for Product Safety

Degree of Safety    Consumer Accident Costs ($)    Safety Costs to Company ($)    Total Social Costs ($)
High                0                              140,000                        140,000
Medium              50,000                         50,000                         100,000
Low                 150,000                        25,000                         175,000

Negligence Standard

Under a negligence regime, a firm is liable if it does not meet the efficient medium degree-of-safety standard. Thus, a firm providing a low level of safety becomes liable for the $150,000 in accident costs incurred by consumers, costs that could have been largely prevented had it provided the efficient medium degree of safety (see table 22.4). A firm facing this penalty consequently must choose between providing a low level of safety (which costs it $25,000 in manufacturing terms plus $150,000 in legal liability) and providing a higher level of safety, such as the medium level, which imposes $50,000 in manufacturing costs and, because the standard of care is met, no liability costs. A negligence standard such as this creates incentives for firms to choose the medium level of safety, which in this case is the socially efficient level. Once they have done so, firms are free of any possible liability burden because they have met the efficient standard of care.

Strict Liability Standard

In contrast, under a strict liability standard, a firm is liable for all accident costs incurred by the consumer, irrespective of whether the firm has met the appropriate level of safety. (Strictly speaking, this is a variant of strict liability known as absolute liability, but consideration of this extreme case facilitates the exposition.) In the case of the high degree of safety, the cost to the company is the same as the social cost, $140,000. At the medium level of product safety, the social cost that must be borne by the company under strict liability is $100,000, and at the low level of product safety, the social cost that must now be internalized by the company is $175,000. The company will choose the level of product safety that minimizes these costs—the medium level. Strict liability achieves this outcome almost by definition. Since strict liability requires that the company bear all product-related accident costs, in effect the doctrine forces the company to internalize all accident costs, so the social objective function and the company's profit function become one and the same.
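The logic of the two liability rules can be illustrated with a short calculation based on table 22.4. The sketch below is an illustrative Python fragment; the dollar figures and the simple cost-minimization come from the table, while the function and variable names are ours rather than anything in the text.

```python
# Costs from table 22.4 (dollars): consumer accident costs and the firm's own
# safety (manufacturing) costs at each level of product safety.
ACCIDENT_COSTS = {"high": 0, "medium": 50_000, "low": 150_000}
SAFETY_COSTS = {"high": 140_000, "medium": 50_000, "low": 25_000}
ORDER = {"low": 0, "medium": 1, "high": 2}
EFFICIENT_LEVEL = "medium"  # the level that minimizes total social costs

def firm_cost_negligence(level):
    """Firm's cost under a negligence rule: it pays accident damages only if it
    provides less safety than the efficient standard of care."""
    negligent = ORDER[level] < ORDER[EFFICIENT_LEVEL]
    liability = ACCIDENT_COSTS[level] if negligent else 0
    return SAFETY_COSTS[level] + liability

def firm_cost_strict_liability(level):
    """Firm's cost under (absolute) strict liability: it pays all accident costs
    regardless of the level of safety provided."""
    return SAFETY_COSTS[level] + ACCIDENT_COSTS[level]

for rule_name, rule in [("negligence", firm_cost_negligence),
                        ("strict liability", firm_cost_strict_liability)]:
    costs = {level: rule(level) for level in ACCIDENT_COSTS}
    best = min(costs, key=costs.get)
    print(rule_name, costs, "-> firm chooses:", best)

# Both rules point the firm to the medium level of safety:
#   negligence:       {'high': 140000, 'medium': 50000, 'low': 175000} -> medium
#   strict liability: {'high': 140000, 'medium': 100000, 'low': 175000} -> medium
```

The calculation matches the text: under negligence the firm's outlay at the medium level is only its $50,000 manufacturing cost, whereas under strict liability it also bears the remaining $50,000 in accident costs, for a total of $100,000; in both cases the medium level minimizes the firm's costs.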

Tracing Accident Costs and Causes

A problem arises when we are unable to distinguish which accident costs are traceable to the product. Problems of moral hazard also arise, and the result may be that entire product markets disappear. If all ski manufacturers were required to pay the hospital bills of those injured while using skis, the price of skis would become exorbitant. Similarly, the prices of automobiles would escalate if companies had to pay for all accident costs resulting from automobiles, irrespective of which parties were at fault and of the behavior of the drivers involved.

If we abstract from such complications, then from the standpoint of achieving an economically efficient outcome, the negligence rule and the strict liability rule are equivalent: each leads to the medium level of safety. The difference is that under strict liability, companies pay a share of the accident costs in a much broader set of instances. As a consequence, the legal system's movement toward a strict liability doctrine shifted the balance of power in the courts toward consumers, who are able to collect from companies in a larger share of cases than before.

The Ford Pinto Case

An instructive example of how the negligence standard could be applied involves the calculations for the Ford Pinto's gas tank safety. Recall that Ford's decision to place the gas tank at the rear of the vehicle saved costs but led to burn injuries and deaths on rear impact. Moving the gas tank would save lives on an expected basis but would raise the price of the vehicle. Table 22.5 summarizes the calculations done by Ford in 1973 with respect to this safety risk. The magazine Mother Jones received national journalism awards for its exposé of these calculations, though apparently the estimates were prepared not just for the Pinto but for proposed auto safety regulations more generally. In Ford's defense, it should also be noted that General Motors has undertaken similar analyses with respect to side-impact fires.

Table 22.5 Ford Estimates of Benefits and Costs for Gas Tank Design

Ford’s Unit Value ($)

Ford’s Total Value ($ million)

180 burn deaths 180 serious burn injuries 2,100 burned vehicles

200,000 67,000 700

36.0 12.1 1.5

Total

49.6

Costs Number of Units

Unit Cost ($)

Total Cost ($ million)

1.5 million cars 1.5 million light trucks

11 11

121.0 16.5

Total

137.5

The cost estimates at the bottom of table 22.5 indicate that moving the gas tank to reduce the risk of fires is not inordinately costly—only $11 per vehicle. However, when this cost is imposed on an entire vehicle fleet, the total is nontrivial: $137.5 million. The appropriate economic question to ask is whether the benefits exceed the costs. As indicated by the calculations in the top panel, the benefit levels fall short of what is needed to make the safety improvement desirable. In preparing these estimates, Ford valued burn deaths at $200,000 each (about $1.0 million in 2015 dollars), which was in fact the average court award for burn deaths in product liability cases at that time. Ford's conclusion that the benefits of the safety improvement did not exceed the costs was not persuasive to jurors, who levied both compensatory and punitive damages in Ford Pinto cases.

Did the company err in its analysis, or was it the victim of a runaway jury that failed to understand economic reasoning? The key flaw in Ford's analysis was using court awards to value the prevention of burn injuries and deaths. These values primarily capture income losses and medical costs, and they understate the value of preventing such serious outcomes. To value the prevention of burn deaths, one should use the value of a statistical life, not court awards. For that era, estimates of the value of a statistical life, in the price levels of the time, were in the $3 million range rather than the $9.4 million value now used by the Department of Transportation. Using an appropriate benefit measure, such as the $3 million figure for that era, boosts the burn death value to $540 million in 1973 dollars, which alone is sufficient to justify moving the gas tank to a safer location. Whether the context is government risk policies, corporate decisions, or personal actions, the appropriate method for attaching a benefit value to reduced risks to life is the value of a statistical life. Focusing on the incentives provided by court awards alone will lead to undervaluation of the risk.
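The contrast between Ford's court-award-based valuation and a value-of-statistical-life valuation is easy to reproduce from table 22.5. The following Python sketch recomputes the benefit and cost totals under each valuation of a burn death; the figures are those reported in the table and the text, while the variable names are ours.

```python
# Figures from table 22.5 and the accompanying text (1973 dollars).
BURN_DEATHS = 180
BURN_INJURIES = 180
BURNED_VEHICLES = 2_100
INJURY_VALUE = 67_000            # Ford's unit value per serious burn injury
VEHICLE_VALUE = 700              # Ford's unit value per burned vehicle
UNIT_COST = 11                   # cost of moving the gas tank, per vehicle
UNITS = 11_000_000 + 1_500_000   # 11 million cars plus 1.5 million light trucks

def total_benefits(value_per_death):
    """Expected benefits of the design change for a given value per burn death."""
    return (BURN_DEATHS * value_per_death
            + BURN_INJURIES * INJURY_VALUE
            + BURNED_VEHICLES * VEHICLE_VALUE)

total_costs = UNIT_COST * UNITS   # $137.5 million

# Ford's approach: value a burn death at the average court award ($200,000).
ford_benefits = total_benefits(200_000)    # about $49.5M (table 22.5 rounds to $49.6M)
# VSL approach: value a burn death at roughly $3 million (that era's estimates).
vsl_benefits = total_benefits(3_000_000)   # deaths alone contribute $540 million

print(f"costs: ${total_costs/1e6:.1f}M")
print(f"benefits with $200,000 per death: ${ford_benefits/1e6:.1f}M "
      f"-> fails benefit-cost test: {ford_benefits < total_costs}")
print(f"benefits with $3 million per death: ${vsl_benefits/1e6:.1f}M "
      f"-> passes benefit-cost test: {vsl_benefits > total_costs}")
```

Under Ford's valuation the design change fails the benefit-cost test (roughly $50 million in benefits against $137.5 million in costs), while valuing each death at $3 million makes the burn-death benefits alone ($540 million) far exceed the costs—the point made in the text.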
Escalation of Damages

In addition, the role of damages has rapidly escalated. The penalties levied by regulatory agencies are often quite modest—on the order of $1,000 or less, and only a few million dollars even in the most severe cases. In contrast, million-dollar awards in product liability cases are routine. Newspapers throughout the country gave prominent coverage to the woman who spilled a cup of hot McDonald's coffee on her lap and suffered burns, for which she received a several-million-dollar award (later reduced by a higher court). In addition to the phenomenon of runaway juries, there may also be a reasonable basis for some large liability awards. For a consumer who loses twenty-five years of his or her work life at a rate of pay of $40,000 per year, the lifetime earnings loss is $1 million. Because the size of the award roughly doubles when one also takes into account the pain and suffering associated with the accident, as well as the loss to the family associated with such injuries, one can see how severe-injury awards on the order of $1 million or more can become routine rather than exceptional.

While significant awards of $1 million or more may have merit, the blockbuster awards have generated increasing controversy. The greatest stakes involve punitive damages. Since the 1981 Ford Pinto case, Grimshaw v. Ford Motor Company, there have been more than one hundred punitive damages awards of at least $100 million. All but a few of these awards have resulted from jury trials rather than bench trials, leading some observers to propose that judges be given greater control over the level of punitive awards. Even though many punitive damages awards are reduced on appeal, the high stakes involved mean that the entire company's survival is often on the line in these cases.

The automobile industry has been the target of many of these punitive damages awards. Ford was levied a $125 million punitive award for the placement of the fuel tank in the Ford Pinto. GM incurred a punitive award in 1993 for its placement of the fuel tank outside the vehicle frame, leading to a fire-related death in a GMC Sierra truck. GM also incurred a punitive award of $100 million for substandard door latches, $4.8 billion for fuel tank risks in the Chevrolet Malibu, and $100 million for an accident resulting from inadequate passenger compartment protections. Ford incurred punitive damages awards of $120 million for rollover risks of the Ford Ranger, $290 million for rollover risks of the Ford Bronco, and $153 million for a parking brake failure. There was also a punitive award of $250 million against Chrysler for a defective rear liftgate latch. While these awards are generally reduced on appeal or settled for a lower amount, they nevertheless impose substantial costs on the affected firms.

This escalation in awards has led to a variety of product liability reform efforts designed to limit the role of product liability damages. A chief target of these efforts has been awards for pain and suffering, because of the absence of well-defined legal criteria for determining such damages. Some lawyers, such as Melvin Belli, have suggested that jurors ascertain the value of pain and suffering for a small time interval (such as a second) and then scale it up by the length of time the pain and suffering was endured. In the case of very long-term injuries, this procedure could produce astronomical pain-and-suffering awards. A useful exercise is to consider whether, from the standpoint of good economics, one should simply multiply the value of pain and suffering for a small time interval by the amount of time the pain and suffering is experienced in order to generate the total welfare loss.

The lobbying over the economic stakes involved in product liability reflects the patterns of political influence one would expect from rent-seeking behavior. Table 22.6 summarizes the groups in favor of a pain-and-suffering damages cap and the groups opposed to such a cap. The pattern reflects the economic stakes involved. Parties who bear the cost of pain-and-suffering awards, ranging from business representatives at the U.S. Chamber of Commerce to groups representing the construction industry, favor caps. In contrast, labor and consumer groups generally oppose such caps because they limit the awards that accident victims can potentially receive. The debate over the pain-and-suffering cap proposals and the institution of these caps by various states has been almost devoid of compelling economic reasoning. For each party, it has been economic self-interest and the stakes involved that have driven the debate rather than any underlying rationale concerning the appropriateness of particular concepts of pain and suffering.

Table 22.6 Group Standings on Proposal to Impose Limits on Damages Awards for Pain and Suffering

For Proposal:
Alliance of American Insurers
American Consulting Engineers Council
American Medical Association
National Association of Home Builders
National Association of Manufacturers
National Association of Realtors
National Association of Towns and Townships
National Federation of Independent Business
National School Boards Association
U.S. Chamber of Commerce

Against Proposal:
Association of Trial Lawyers of America
Brown Lung Association
Consumer Federation of America
Consumers Union
Environmental Action
National Council of Senior Citizens
Public Citizen
United Auto Workers
United Steelworkers Union
Women's Legal Defense Fund

Source: Wall Street Journal, April 9, 1986, p. A64. Reprinted by permission of The Wall Street Journal, © 1986, Dow Jones & Company, Inc. All rights reserved worldwide.

Risk Information and Hazard Warnings

One of the rationales for market failure is that consumers do not have perfect information regarding the safety of the products they purchase. In some cases, consumers may be able to monitor the overall riskiness of products as a group but not the riskiness of products manufactured by particular companies. Consumers know that chain saws are hazardous, but they may have less ability to discriminate between the differing degrees of riskiness of Echo chain saws and Stihl chain saws. In situations where consumers know the average product risk but not the risk posed by the individual product, there will be a phenomenon akin to the classic lemons problem.22

Table 22.7 presents an example for the automobile safety case. Suppose that there are three classes of cars ranging in safety from low to high. If consumers had perfect information regarding the properties of the cars, they would be willing to pay up to $30,000 for the high-safety car and as little as $20,000 for the low-safety car. Because they cannot distinguish the differing degrees of safety, they will make their judgments based on the average safety across the entire group of cars, which produces an average value to consumers of $23,500. The losers from this group-based valuation are the producers of the high-safety cars, and the winners are the producers of the low-safety cars. This kind of redistribution from high-quality to low-quality market participants is a standard property of lemons markets, whether we are dealing with used cars or with the salaries given to graduates of a college when an individual's performance cannot be distinguished from the group average. The presence of such group-based pricing provides a disincentive for firms at the high end of the market because their safety efforts will not be rewarded. Instead, they will be sharing the benefits of safety with all other firms in the market.

Table 22.7 Markets with Imperfect Information: Lemon Markets for Risky Cars

Fraction of Cars    Safety Level    Consumer Value with Perfect Information ($)    Group-Based Value ($)    Gain or Loss from Full Disclosure ($)
0.2                 High            30,000                                          23,500                   +6,500
0.3                 Medium          25,000                                          23,500                   +1,500
0.5                 Low             20,000                                          23,500                   −3,500
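The group-based value and the gains and losses in table 22.7 follow from a simple expected-value calculation. The short Python sketch below, with variable names of our own choosing, reproduces the $23,500 pooled value and the last column of the table.

```python
# Shares of each car type and consumers' willingness to pay with full information,
# from table 22.7.
cars = {"High": (0.2, 30_000), "Medium": (0.3, 25_000), "Low": (0.5, 20_000)}

# With no way to tell the types apart, consumers pay the probability-weighted
# average value across all cars.
group_value = sum(share * value for share, value in cars.values())
print(f"group-based value: ${group_value:,.0f}")   # $23,500

# Full disclosure lets each type be priced at its true value; the gain or loss
# relative to the pooled price shows who benefits from revealing quality.
for level, (share, value) in cars.items():
    print(f"{level}: gain/loss from full disclosure = {value - group_value:+,.0f}")
```

As the last column shows, producers at the high end have the strongest incentive to disclose, which is the basis for the self-certification discussion that follows.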

Self-Certification of Safe Products

Firms can potentially avoid this pooling problem by identifying themselves as producers of safer products. The gains and losses that would result from full disclosure of this type appear in the final column of table 22.7. The incentives for such revelation are greatest at the high end of the quality spectrum and decline as one moves toward the lower end. Firms that produce the low-quality cars in this example would be willing to pay to suppress information on the differing riskiness of the cars. For the companies producing the high-quality cars, the practical issue is how to convey credibly to consumers the lower riskiness of their cars. This is the classic economic signaling problem: firms cannot simply claim that they produce high-quality products, because such claims have no credibility when all firms could make them at no cost. The problem is to establish an economic mechanism by which a firm can credibly convey the safety of its products to consumers. Such mechanisms include warranties and guarantees, since the costs of offering these product attributes are higher if one is marketing a risky product. Purely informational efforts, such as ratings provided by government agencies and consumer groups, also may be of value in enabling consumers to make quality judgments.

Government Determination of Safety

Increasingly, the government has become active in trying to meet the informational needs of consumers. Beginning in 1965, Congress mandated hazard warning labels for cigarettes, which were required on packs and in advertising starting in 1966. This system evolved over time, and in 1984 cigarettes began to carry a system of rotating warnings that alerted consumers to a diverse array of potential hazards. Public information campaigns, antismoking efforts by physicians, and excise taxes have also contributed to the decline in smoking rates from 42.4 percent of adults in 1965 to 15.1 percent in 2015.

For risk communication efforts to be successful, they should lead consumers to adequately assess the risks of smoking. These risks are substantial. Based on Surgeon General reports, one can calculate a lung cancer fatality risk from smoking of 0.08 and a total smoking mortality risk of 0.26. These risks dwarf the risk cutoffs for many government risk regulation efforts, which are often in the range of lifetime risks of 1/10,000 or less. However, consumers are generally cognizant of the hazards of smoking, as shown by the data in table 22.8, which pertain to the perceived risk, out of one hundred smokers, of dying either from lung cancer or from any smoking-related disease. Both smokers and nonsmokers overestimate the risks of smoking and, as one might expect, the overestimation is greater for nonsmokers.

The bottom panel of table 22.8 presents comparable perceptions for e-cigarettes, which are starkly safer than conventional cigarettes, since they do not involve the burning of tobacco. The failure of consumers to perceive the comparative safety of e-cigarettes may discourage switching from more dangerous conventional cigarettes. A continuing challenge for public health officials is how to communicate the safety of new products in an otherwise risky product group. There is a major difference between the regulatory approach taken in the United Kingdom, which has encouraged e-cigarettes as a smoking cessation mechanism, and that taken in the United States, which has tended to treat e-cigarettes much like conventional cigarettes.

Table 22.8 Perceived Risks for Smokers of Cigarettes and of E-Cigarettes

                              Full Sample    Smokers    Nonsmokers
Cigarette risk beliefs
  Lung cancer                 41.0           35.7       41.8
  Total mortality             50.3           42.6       51.4
E-cigarette risk beliefs
  Lung cancer                 27.3           20.9       28.2
  Total mortality             33.3           23.2       34.7

Source: W. Kip Viscusi, "Risk Beliefs and Preferences for E-Cigarettes," American Journal of Health Economics 2 (Spring 2016): 213–240, table 2.
Note: Table entries give the respondents' perceived percentage of smokers who will contract lung cancer or die of all smoking-related causes.
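One way to read table 22.8 against the risk figures cited in the text is to compare the perceived probabilities with the calculated actual risks. The sketch below is a simple illustrative calculation; the 0.08 and 0.26 figures are the Surgeon General–based risks cited above, and the perceived values are the full-sample entries of the table.

```python
# Actual risks cited in the text (per smoker): lung cancer death 0.08, total
# smoking-related mortality 0.26. Perceived risks are the full-sample entries
# of table 22.8, stated per 100 smokers.
actual = {"lung cancer": 0.08, "total mortality": 0.26}
perceived_per_100 = {"lung cancer": 41.0, "total mortality": 50.3}

for outcome, true_risk in actual.items():
    perceived = perceived_per_100[outcome] / 100
    print(f"{outcome}: perceived {perceived:.2f} vs actual {true_risk:.2f} "
          f"(overestimated by a factor of {perceived / true_risk:.1f})")

# Perceptions exceed the calculated risks for both outcomes—by roughly a factor
# of five for lung cancer and about two for total mortality—consistent with the
# text's statement that both smokers and nonsmokers overestimate smoking risks.
```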

Other product labeling efforts have also played a role in cigarette regulation throughout the world. Countries such as the United Kingdom, Canada, and Australia have required that cigarette packaging include graphic health warnings to discourage smoking. Such warnings have been proposed but not adopted in the United States. Based on its assessment of the impact of the graphic warnings in Canada, the FDA concluded that there was no evidence that graphic health warnings decreased smoking prevalence rates. The results of the FDA study and a concern with infringement on corporate speech led the D.C. Circuit to overturn the U.S. graphic warnings initiative. A related packaging approach is "plain packaging," whereby all product marketing information is deleted from the pack. The Australian plain packaging regulation permits the name of the brand to appear in black lettering on a drab olive background, accompanied by large graphic health warnings covering at least 75 percent of the front and at least 90 percent of the back of the pack. No evidence thus far indicates that plain packaging diminishes smoking prevalence rates or fosters more accurate risk beliefs. For these and related labeling policies, the pivotal economic issue is how and to what extent they address failures of individual rationality, such as systematic underestimation of the risk.

Congress also instituted on-product warnings for products containing saccharin. This artificial sweetener was viewed as potentially carcinogenic, but the scientific evidence and the extent of the risk have long been a matter of dispute. Indeed, in 1998, a panel of scientists convened by the government concluded that there was not sufficient evidence to designate saccharin a human carcinogen. Moreover, the benefits of the product in reducing obesity and its associated risks have greatly complicated the debate over the appropriate regulation of this product. In 2001, warnings for saccharin were repealed both at the federal level and for the state of California. Similarly, in 1989, Congress mandated warnings on all alcoholic beverages. The first warning alerts consumers to the risk of birth defects from consumption of alcohol by pregnant women. The second warning notes the presence of health problems linked to alcohol and the effect of alcohol on one's ability to drive and operate machinery.

These measures do not exhaust all initiatives of this type; many states have also joined these efforts. Chief among the state initiatives is California Proposition 65, which has mandated the labeling of all significant carcinogens in food products. However, the threshold for determining whether a risk is sufficiently great to merit a warning is very low. In the case of cancer risks, a lifetime risk of 1/100,000 for a 70-year daily exposure is sufficient to trigger a warning, and for reproductive toxicants, any chemical that affects the growth of the fetus merits a warning. Many companies have avoided the stigma of labeling by reformulating their products, such as Liquid Paper, which formerly contained carcinogens. Although the implementation of this regulation remains a matter of debate, the proliferation of right-to-know measures has represented a major shift in the emphasis of regulation over the past quarter century.

Alternatives to Direct Command and Control Regulation

The rationale for employing informational regulation rather than direct command and control regulation is twofold. First, in many situations we do not wish to ban an activity altogether. The regulatory agency may not have sufficient information to proceed with a ban but would nevertheless like to alert consumers to a potential hazard in the interim, so that they can at least exercise caution until the information for taking more stringent action becomes firmer. Second, regulation through information may sometimes be the most appropriate and most effective response, even when the agency is in doubt as to the appropriate course of action. If individuals differ in their tastes and their willingness to bear risks, then information gives consumers the ability to make these market judgments and to choose the level of risk that is most efficient given their own preferences. In addition, many decisions must necessarily be made on a decentralized basis.
The care consumers take when using household chemicals cannot be monitored directly by a regulatory agency, so the most effective way to promote safety is to give consumers the motivation to undertake the appropriate level of precautions. More generally, the task of promoting product safety is not simply one of designing an appropriate technology for the product. As we saw in the case of safety belts and aspirin caps, consumer behavior often plays a central role in determining the safety level that results from a particular product design. The government can influence two classes of choices that affect safety—producer actions and consumer actions. In general, greater gains in safety can be achieved by taking advantage of both mechanisms rather than relying on a technology-based approach alone.

The potential impact of such regulations is reflected in the statistics in table 22.9, which provide information on the efficacy of the Drano label. Table 22.9 reports the responses of consumers to drain opener products for which the warning information had been purged, as opposed to their responses to a label patterned after the current Drano label. The addition of risk information increases the frequency with which consumers wear rubber gloves or store the product in a childproof location.

Table 22.9 Effects of Drain Opener Labels on Precaution Taking (%)

Precaution                      No Warning (n = 59)    Drano (n = 59)    Incremental Effect
Wear rubber gloves              63                     82                19
Store in childproof location    54                     68                14

Source: W. Kip Viscusi and Wesley A. Magat, Learning about Risk: Consumer and Worker Responses to Hazard Information (Cambridge, MA: Harvard University Press, 1987).

Not all consumers would choose to wear rubber gloves, even though the label urges them to take this precaution, because the precaution is onerous. Consumers might rationally forgo it if they believe that the benefits of taking it do not outweigh the nuisance costs of wearing rubber gloves. The warning label also has an effect on storage in a childproof location, as 14 percent more of the respondents will store the product in a childproof location after receiving the label. Moreover, for the subsample of the population with children under age five—the group at high risk of poisoning—almost all respondents who received the hazard warning would take the appropriate childproofing precaution. Finally, it is noteworthy that even when the hazard warning information is purged, over half of all consumers would undertake the precaution. Consumers are not operating in a vacuum: even for a drain opener purged of warnings, over half of consumers know enough about the potential hazards to take the precaution.

Moreover, studies of hazard warnings indicate that these warnings are effective only to the extent that they provide new knowledge to consumers. Programs of education that are intended to browbeat consumers into changing their behavior, or that merely remind consumers about desirable courses of action, generally are not successful. The programs that have been shown to be effective are those that provide new information to consumers in a convincing manner, rather than those that fail to recognize that consumers are rational decision makers who are sometimes in need of important risk information. An attraction of informational regulations from the standpoint of economists is that such regulations do not interfere with market operations to the same extent as technological standards, nor do they impose substantial costs on firms. By providing information to consumers, they let the market work more effectively to generate incentives for efficient levels of safety.

Regulation through Litigation

A phenomenon that emerged with particular force in the late 1990s was the use of litigation to force regulatory changes.

The most prominent example was the settlement of the state tobacco suits in a form that was not a conventional damages payment. Instead, the agreement led to payments to the states funded by a per-pack levy on cigarettes of about 40 cents, which in effect functions as an excise tax. The companies also agreed to a variety of restrictions that were regulatory in character, such as limitations on sponsorship of sporting events. The tobacco lawsuit model has led to similar efforts against other risky products: guns, fast food, health maintenance organizations, and lead paint have also been litigation targets. In some instances, the apparent objective is a financial payoff to the attorneys, while in others, the driving force appears to be to compel "voluntary" regulatory changes.

Breast Implant Litigation and Regulation

The often complex interaction of regulation and litigation is exemplified by the legal and regulatory debate over breast implants.23 Women had long attempted to enlarge their breasts, but these efforts increased substantially after 1962, when manufacturers introduced silicone-gel-filled breast implants. Silicone gel in a silicone envelope was less likely to cause health problems and adverse physical consequences than direct injections of silicone. Table 22.10 summarizes the timeline of the regulatory and litigation events that followed the introduction of this medical device. At the time of their introduction, breast implants were not regulated by the FDA. Unlike prescription drugs, medical devices did not face tests of safety and efficacy. In 1976, medical devices such as breast implants were placed under the FDA's jurisdiction, but this product was grandfathered in. Thus, the agency did not explicitly examine the safety of breast implants.

Table 22.10 Timeline of Critical Breast Implant Events

Year: Event
1962: Silicone-gel-filled breast implants are first used.
1965: Silicone injections are classified as a drug and not approved for human use.
1976: Medical Devices Amendments give the FDA authority to regulate breast implants. Implants are grandfathered in.
1977: Mueller v. Corley; plaintiff is awarded $170,000 due to rupture.
1978: FDA General and Plastic Surgery Devices Panel recommends Class II status. FDA concerns in 1978 include gel leakage in intact implants.
1982: FDA proposes Class III status.
1984: Stern v. Dow Corning; plaintiff is awarded over $1.7 million for claim that ruptured implants caused connective tissue disease. Internal Dow Corning documents showed Dow had suppressed risk information. These documents were then sealed by court order.
1988: Silicone implants are classified as Class III, requiring manufacturers to submit safety information. FDA concerns in 1988 include capsular contracture, breakage, bleeding outside the shell, migration of silicone to organs, interference with the accuracy of mammograms, calcification of the fibrous capsule, immune disorders, and cancer.
November 1991: Manufacturers' safety information deemed inadequate by the FDA.
December 1991: Hopkins v. Dow Corning; plaintiff is awarded $840,000 in compensatory damages and $6.5 million in punitive damages for claim that ruptured implants caused her connective tissue disease.
January 1992: The FDA imposes a moratorium on silicone implants.
February 1992: First class action filed in wake of FDA moratorium. Eventually 440,000 women join.
April 1992: Silicone implants withdrawn from market except in limited cases.
1994: Mayo Clinic study shows systemic health risks not likely.
1995: Federal settlement approved with Dow dropping out.
1995: Dow Corning files for Chapter 11 bankruptcy reorganization, citing 19,000 individual implant lawsuits and at least 45 putative class actions.
1996–1998: Courts appoint science panels. All panels conclude implants do not cause systemic diseases. Various courts do not allow plaintiffs' experts to testify under Daubert.
1999: Institute of Medicine concludes only localized risks of silicone implants, including "overall reoperations, ruptures or deflations, contractures, infections, hematomas, and pain."*
2006: FDA approves Allergan's and Mentor's premarket approval applications for silicone-gel-filled breast implants for augmentation, reconstruction, and revision. Other approvals follow in subsequent years.

Source: Joni Hersch, "Breast Implants: Regulation, Litigation, and Science," in W. Kip Viscusi, ed., Regulation through Litigation (Washington, DC: AEI–Brookings Joint Center for Regulatory Studies, 2002), p. 146.
* Quote is from Stuart Bondurant, Virginia Ernster, and Roger Herdman, eds., Safety of Silicone Breast Implants (Washington, DC: National Academy Press, Institute of Medicine, 2000), p. 5.

While breast implants had cosmetic appeal to many women, especially in the South and the West, the product was not risk free. Breast implants often leaked, ruptured, and led to a hardening of the surrounding tissue known as capsular contracture. Beginning in 1977, women began to win lawsuits against breast implant manufacturers and, as table 22.10 indicates, the degree of scrutiny by the FDA increased following these successful lawsuits. Perhaps the watershed regulatory event was a successful lawsuit in December 1991, in which a plaintiff received over $7.3 million in total damages for her claims that ruptured implants caused connective tissue disease. FDA head Dr. David Kessler called for a moratorium on breast implant usage the following month. He viewed the situation as one of scientific uncertainty, but the effect of his action was to create widespread panic that led to the largest class action lawsuit, which eventually included more than 400,000 women. The leading manufacturer of breast implants, Dow Corning, subsequently filed for bankruptcy.

Preventing the initial marketing of a product posing uncertain risks has quite different consequences than declaring a moratorium on a product already implanted in millions of women. Fearing dire health consequences, many women underwent costly and painful operations to have their breast implants removed. Subsequent scientific evidence indicated that the extreme fears concerning serious consequences, such as systemic diseases, lacked a sound scientific basis, though there are many adverse morbidity effects. Today, saline breast implants remain on the market and are becoming increasingly popular despite their adverse morbidity effects and leakage problems. In the George W. Bush administration, an FDA advisory panel recommended that silicone breast implants be approved for use, and in 2006 the FDA began approving new silicone-gel-filled breast implants for augmentation, reconstruction, and revision.

The breast implant chronology reflects the interactions and failures of both regulatory action and the courts. For years the FDA failed to scrutinize the product risks adequately, and it was only when breast implant litigation put the product on the agency's agenda that the agency acted and, in the view of some critics, overreacted. Because the adverse health consequences of breast implants will not be apparent in advance to women getting the implants, this product represents an ideal situation in which meaningful hazard warnings could play a constructive role. The role of the courts was in some respects constructive: the litigation highlighted some actual product defects and brought the risks to the FDA's attention. However, there were also court awards for ailments not apparently caused by breast implants, and these awards continued after early scientific studies rebutting the claims began to appear in the literature. This case study also illustrates how the informational environment plays a critical role in determining regulatory and judicial policies. These institutions were operating on the basis of highly imprecise information in a situation where the stakes from not acting were considerable. In such situations, it will often be imperative to make decisions before all the evidence is in. Some of these decisions may appear wrong in hindsight once the informational environment has changed, but that does not always mean that the decisions were incorrect at the time.

Summary

The set of available regulatory strategies in the product safety area is even more diverse than figure 22.1 suggests. As a society, we have a broad set of choices involving the appropriate roles of government regulation and tort liability. However, within these classes of institutional mechanisms, important decisions also need to be made with respect to the particular mechanism for intervention.

Consider first the case of regulation. The alternative mechanisms by which we can intervene through government regulation are quite diverse. The government can specify in advance the technological standards that must be met by a product. A second possibility is to have some premarket approval mechanism, whereby the company submits the product to the government for review and approval before the product can be marketed. A third possibility is to have government regulations that operate after the fact, as in the case of the product recall strategies for defective automobiles and consumer products. One mechanism that has not been used in the regulatory arena is an injury tax approach, whereby the government imposes financial penalties on risky products rather than specifying their technical characteristics. Cigarette excise taxes come close to being such a tax mechanism, but their level is driven more by revenue considerations than by an attempt to align consumer preferences with efficient decisions. One reason this final strategy has not met with widespread adoption is that in the product case, it is particularly difficult to ascertain the contributory role of the product in a particular accident. The Consumer Product Safety Commission, for example, keeps a tally of all accidents involving the use of particular consumer products, such as ladders. However, what we do not know is whether the ladder itself buckled under the consumer or whether the consumer simply fell from the ladder. We know that the use of ladders is potentially risky, but ideally we want a tax approach that penalizes unsafe ladders rather than simply raising the price of all ladders irrespective of their structural safety. The latter approach is similar in its impact to a strict liability standard for ladders. In addition to imposing potentially substantial costs, this option may also create substantial moral hazard problems.

Since the mid-1980s, the main policy debate has focused not on regulatory reform but on product liability reform. The reason for this emphasis is that the stakes imposed by product liability have escalated considerably and frequently dwarf the stakes involved with direct government regulation. Pharmaceutical companies are routinely faced with liability costs that may exceed the value of a product’s sales, particularly for small-market products, such as vaccines. The result is that such penalties have had a chilling effect on product innovation. Some effect on product innovation is not necessarily bad. Ideally, we do want tort liability to discourage the introduction of unduly risky new products. However, we do not want to deter beneficial innovations because firms are excessively cautious owing to the prospect of potentially enormous liability burdens. A fundamental task for the coming years will be to restructure the tort liability system to strike a balance between the competing objectives of promoting safety and protecting the legitimate interests of the affected businesses.
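To make the injury tax idea discussed above more concrete, the following minimal sketch (in Python, with all numbers purely hypothetical) contrasts a tax tied to defect-caused harm with a uniform charge levied on all ladder-related injuries, which is closer in spirit to strict liability:

    # Hypothetical illustration of an injury tax versus a uniform (strict-liability-like) charge.
    # All numbers are assumed for illustration only.

    injury_cost = 50_000          # social cost per ladder-related injury (assumed)
    injuries_per_10k_units = 4    # observed injuries per 10,000 ladders sold (assumed)
    share_defect_caused = 0.25    # fraction of injuries attributable to a product defect (assumed)

    # An efficient injury tax penalizes only the expected defect-caused harm per unit sold.
    expected_defect_harm_per_unit = (injuries_per_10k_units / 10_000) * share_defect_caused * injury_cost

    # A uniform charge based on all ladder-related injuries taxes safe and unsafe ladders alike.
    uniform_charge_per_unit = (injuries_per_10k_units / 10_000) * injury_cost

    print(f"Injury tax per ladder (defect-caused harm only): ${expected_defect_harm_per_unit:.2f}")
    print(f"Uniform charge per ladder (all ladder injuries):  ${uniform_charge_per_unit:.2f}")

The point of the comparison is simply that unless the contributory role of the product can be identified, the charge cannot distinguish structurally unsafe ladders from safe ones.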
A final issue on the policy agenda is the overall coordination of regulatory and liability efforts. These are two different institutional mechanisms that affect similar classes of economic concerns. In some cases, companies are hit twice by these institutions: they may adopt particular technological devices to come into compliance with formal government regulations, but such compliance provides no guarantee against additional liability costs being imposed on the firm. The phenomenon of regulation through litigation also raises concerns over whether the courts are usurping legitimate functions of the legislature and regulatory agencies. High-stakes litigation may be used to force regulatory changes and damages awards that are in effect disguised excise taxes. Is the role of litigation a constructive one that fills the gaps left by other government policies, or is there an inappropriate overlap of these efforts?

While governmental regulatory bodies appear to be better suited to the regulatory tasks, what if these efforts fall short? Whether the courts are overstepping their bounds will be a matter of continuing debate, and the answer surely will vary with the particular regulatory context and the degree of vigilance of the regulatory agency. In addition to this potentially duplicative impact, there is also the issue of the appropriate division of labor between these social institutions. Which classes of hazards are best suited to being addressed by government regulations, and which are better suited to the ex post approach of tort liability, which addresses specific accident cases identified in the courts? Differences in the temporal structure and character of the accidents no doubt will be important considerations when making these institutional allocations, but another issue may be differences in expertise. Many observers have begun to question the ability of jurors to make the society-wide product safety judgments that are often required to determine whether a particular product design is defective. Resolving such issues, which only recently have begun to be raised, will remain a central component of the future regulatory agenda.

Questions and Problems

1. Ideally, we would like to determine whether there is a product market failure before we intervene. Suppose that we do not have information on consumer risk perceptions, but we do observe the price reductions consumers are willing to accept in return for greater objective levels of safety for their product. For example, economists have estimated the implicit values of a statistical life associated with purchases of different automobiles. How would you use these implicit estimates to determine whether there is a market failure?
2. The FDA uses a premanufacturing screening program for determining the marketability of new pharmaceutical products, whereas the U.S. Consumer Product Safety Commission relies on recall actions for products found to be defective, such as electric coffeemakers that short out. Can you think of any rationale other than historical accident for the difference in regulatory approaches between the two agencies?
3. The AIDS lobbyists have forced the FDA to accelerate the approval of AIDS-related drugs. In some cases, therefore, drugs that have not undergone the full FDA testing process to determine safety and efficacy will be used on patients. Is there a legitimate rationale for the expedited approval of AIDS-related drugs, or is this result simply due to the political power wielded by the AIDS lobbyists? Which other classes of drugs do you believe merit accelerated approval? More generally, if the FDA were to set up two different approval schedules, one being a thorough approval process and the second being a rapid approval process, what factors would you use to determine whether a particular drug merited the thorough or the rapid approval approach?
4. Although Coase theorem types of influences can potentially lead to market provision of nonsmoking areas and similar restrictions, current smoking policies do not reflect market influences alone. Municipalities throughout the country have enacted various smoking ordinances, and the federal government is considering measures as well. What do you believe has been the impetus for such policies?
Are there any efficiency-oriented reasons why it might be desirable for the government to set uniform standards in this area? More specifically, is there any likely source of market failure? In addition, are there any equity concerns that may be motivating these measures? Which parties will gain from an equity standpoint, and which will lose? Is there any reason to believe that the political outcomes will be efficient, and how would we judge efficiency?
5. Much of the debate over the efficacy of seat belt requirements and the influence of the counterproductive effect of decreased driver precautions noted by Peltzman stems from the crudeness of the empirical information that is available. If we can look only at accident rate totals by year or by state, then much key information will be lost. If you had unlimited resources and could commission your state police to develop an empirical database for you, what factors would you ask them to assess so that you could test the Peltzman effect conclusively?
6. After the advent of an increased role for tort liability, some products have disappeared. Let us take the case of diving boards at motels. What factors would you want to examine to determine whether this change in product availability is efficient?

7. It has often been noted that Melvin Belli’s procedure for determining pain-and-suffering damages is biased. Recall that his technique is to ask jurors to assess the pain-and-suffering value for a small unit of time, and then to extrapolate the value to the total time period over which the victim experienced the pain. Defense lawyers argue that the approach overstates the value of pain and suffering. What is the structure of the utility function for pain that must hold for the Belli approach to lead to an overstatement? Under what circumstances will it lead to an understatement?
8. What policies do you believe are appropriate for e-cigarettes? Should they receive the same warnings and be subject to the same taxes as regular cigarettes? Should there be age restrictions on purchase of e-cigarette devices?

Notes

1. See Ralph Nader, Unsafe at Any Speed (New York: Grossman, 1965).
2. For a review of the development of health, safety, and environmental regulations, see Paul MacAvoy, The Regulated Industries and the Economy (New York: W. W. Norton, 1979).
3. For an excellent economic analysis of the role of consumer complaints, see Sharon Oster, “The Determinants of Consumer Complaints,” Review of Economics and Statistics 62 (November 1980): 603–609.
4. See Frank R. Lichtenberg, “The Effect of Pharmaceutical Innovation on Longevity: Patient Level Evidence from the 1996–2002 Medical Expenditure Panel Survey and Linked Mortality Public-Use Files,” Forum for Health Economics and Policy 16 (January 2013): 1–33; and Frank R. Lichtenberg, “Has Medical Innovation Reduced Cancer Mortality?” CESifo Economic Studies 60 (March 2014): 133–177.
5. The role of pharmaceutical regulatory policy with respect to such tradeoffs is a principal theme of the book by Henry G. Grabowski and John M. Vernon, The Regulation of Pharmaceuticals: Balancing the Benefits and Risks (Washington, DC: American Enterprise Institute, 1983).
6. Mary K. Olson, “Eliminating the U.S. Drug Lag: Implications for Drug Safety,” Journal of Risk and Uncertainty 47 (August 2013): 1–30.
7. Nicholas S. Downing, Jenerius A. Aminawung, Nilay D. Shah, Joel B. Braunstein, Harlan M. Krumholz, and Joseph S. Ross, “Regulatory Review of Novel Therapeutics: Comparison of Three Regulatory Agencies,” New England Journal of Medicine 366 (June 2012): 2284–2293.
8. This value is in year 2015 dollars and is derived from estimates in Glenn C. Blomquist, “Value of Life Saving: Implications of Consumption Activity,” Journal of Political Economy 87 (June 1979): 540–558. A large literature has followed, yielding values of statistical life from $2 million to $7 million.
9. These estimates are converted to 2015 dollars based on the findings in Jahn K. Hakes and W. Kip Viscusi, “Automobile Seatbelt Usage and the Value of Statistical Life,” Southern Economic Journal 73 (January 2007): 659–676.
10. Ibid. Also see Joni Hersch and W. Kip Viscusi, “Cigarette Smoking, Seatbelt Use, and Differences in Wage-Risk Tradeoffs,” Journal of Human Resources 25 (Spring 1990): 202–227.
11. The original presentation of the Peltzman results appears in Sam Peltzman, “The Effects of Automobile Safety Regulation,” Journal of Political Economy 83 (August 1975): 677–725. The role of driver precautions has been the focus of a number of other studies as well. See, among others, Glenn C. Blomquist, The Regulation of Motor Vehicle and Traffic Safety (Boston: Kluwer Academic, 1988); Robert W. Crandall, Howard K. Gruenspecht, Theodore E. Keeler, and Lester B. Lave, Regulating the Automobile (Washington, DC: Brookings Institution Press, 1986); Glenn C. Blomquist, “Motorist Use of Safety Equipment: Expected Benefits or Risk Incompetence?” Journal of Risk and Uncertainty 4 (April 1991): 135–152; and William N. Evans and John Graham, “Risk Reduction or Risk Compensation? The Case of Mandatory Safety-Belt Use Laws,” Journal of Risk and Uncertainty 4 (January 1991): 61–74.
12. See, for example, the analysis by Alma Cohen and Liran Einav, “The Effects of Mandatory Seat Belt Laws on Driving Behavior and Traffic Fatalities,” Review of Economics and Statistics 85 (November 2003): 828–843.
13. Viscusi’s work on the lulling effect and consumer precautions more generally is synthesized in W. Kip Viscusi, Fatal Tradeoffs (New York: Oxford University Press, 1992).
14. See Viscusi, Fatal Tradeoffs, for empirical documentation of the effects discussed here.
15. Gerald Cavallo and W. Kip Viscusi, “Safety Behavior and Consumer Responses to Cigarette Lighter Safety Mechanisms,” Managerial and Decision Economics 17 (September–October 1996): 441–457.
16. Chris Rohlfs, Ryan Sullivan, and Thomas J. Kniesner, “New Estimates of the Value of a Statistical Life Using Air Bag Regulations as a Quasi-Experiment,” American Economic Journal: Economic Policy 7 (February 2015): 331–359.
17. These various effects of product characteristics on the prices of used cars are in year 2015 dollars and are based on the empirical analysis of automobile purchases by 3,000 households undertaken by Mark K. Dreyfus and W. Kip Viscusi, “Rates of Time Preference and Consumer Valuations of Automobile Safety and Fuel Efficiency,” Journal of Law and Economics 38 (April 1995): 79–105.
18. In addition to Viscusi, Fatal Tradeoffs, for a historical perspective see Nina Cornell, Roger Noll, and Barry Weingast, “Safety Regulation,” in Henry Owen and Charles L. Schultze, eds., Setting National Priorities: The Next Ten Years (Washington, DC: Brookings Institution Press, 1976), pp. 457–504.
19. The following discussion is based most directly on W. Kip Viscusi, Reforming Products Liability (Cambridge, MA: Harvard University Press, 1991). Other treatments of these issues include Steven Shavell, Economic Analysis of Accident Law (Cambridge, MA: Harvard University Press, 1987); William M. Landes and Richard A. Posner, The Economic Structure of Tort Law (Cambridge, MA: Harvard University Press, 1987); and Robert E. Litan and Clifford Winston, Liability: Perspectives and Policy (Washington, DC: Brookings Institution Press, 1988).
20. The statistics in table 22.3 through 2000 represent the general liability insurance category, but the statistics beginning in 2005 represent the sum of the principal components of what was formerly termed general liability insurance in commercial lines, which comprises products liability insurance plus other liability insurance.
21. National Academy of Sciences, Developing New Contraceptives: Obstacles and Opportunities (Washington, DC: National Academy Press, 1990).
22. The following discussion is based most directly on W. Kip Viscusi, “A Note on ‘Lemons’ Markets with Quality Certification,” Bell Journal of Economics 9 (Spring 1978): 277–279. The interested reader also should examine two of the seminal works on this topic: George A. Akerlof, “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism,” Quarterly Journal of Economics 84 (August 1970): 488–500; and A. Michael Spence, “Competitive and Optimal Responses to Signals: An Analysis of Efficiency and Distribution,” Journal of Economic Theory 7 (March 1974): 296–332.
23. For further discussion of the breast implant policy issues, see Joni Hersch, “Breast Implants: Regulation, Litigation, and Science,” in W. Kip Viscusi, ed., Regulation through Litigation (Washington, DC: AEI–Brookings Joint Center for Regulatory Studies, 2002), pp. 142–177.

23 Regulation of Workplace Health and Safety

Workplace health and safety levels are governed largely by three sets of influences: the market, direct regulation of risk levels by the Occupational Safety and Health Administration (OSHA), and the safety incentives created through workers’ compensation. In each case, safety is promoted by creating financial payoffs for firms to invest in workplace characteristics that will improve worker safety. These incentives arise because improved safety leads to reduced wage premiums for risk, lower regulatory penalties for noncompliance, and reduced workers’ compensation premiums. The labor market incentives provide the backdrop for the analysis of job safety. As discussed in chapter 20, these wage incentives are often quite substantial.

The principal direct regulatory mechanism consists of OSHA’s health and safety standards. OSHA has long been a controversial regulatory agency. Critics have not questioned the agency’s fundamental objective, since promoting worker health and safety is a laudable and widely shared goal. But OSHA is generally regarded as not fulfilling its mission of promoting that objective. Some observers claim that the agency imposes needless costs and restrictions on American business, whereas others claim that the agency’s efforts are not vigorous enough. For our purposes, OSHA is of economic interest not only because of its policy role but also because it provides a valuable case study of the regulatory compliance decision and the role of financial incentives in promoting effective regulatory policies.

This branch of the U.S. Department of Labor began operation in 1971, after the Occupational Safety and Health Act of 1970 created it so as “to assure so far as possible every working man and woman in the nation safe and healthful working conditions.”1 Because ensuring a no-risk society is clearly an unattainable goal, the initial OSHA mandate established the infeasible as the agency’s mission. Nevertheless, a regulatory agency focusing on worker safety issues could serve a constructive function. The early operations of OSHA did not, however, even begin to fulfill the agency’s initial promise. OSHA was the object of widespread ridicule for standards that prescribed acceptable toilet seat shapes, the placement of exit signs, the width of handrails, and the proper dimensions of OSHA-approved ladders. Many of the more frivolous standards were never among the most prominent concerns in the agency’s enforcement effort. Nevertheless, they did epitomize the degree to which the federal government was attempting to influence the design and operation of the workplace—matters that previously had been left to managerial discretion.

In recent years, the stories of OSHA’s misguided regulatory efforts have been less prominent. One no longer reads amusing anecdotes like that concerning the OSHA inspector who penalized a firm for allowing its employees to work on a bridge without the required orange life vests, even though the riverbed was dry. The tone has shifted. Strident criticism in the early years of agency operation gave way to comparative inattention. This inattention did not necessarily imply that the agency had been given a clean bill of health.

There has been no widely publicized reform of the agency. Moreover, unlike transportation, natural gas, oil, and airlines, there have been no legislative changes or major administrative reforms. The decrease in coverage of controversial OSHA policies occurred because a continuation of past policies, however ill conceived, was simply no longer newsworthy. Moreover, firms had complied in many instances, so that the decisions regarding the desirability of the standards were behind them. In the 1990s, the tone of public debate concerning OSHA shifted. After decreasing its enforcement effort in the early 1980s, OSHA became increasingly criticized for not doing enough. Instead of media coverage of apparently frivolous regulations, attention shifted to the continuing death toll in the American workplace. The status of job safety regulation began to be epitomized by the meatpacking worker who had become disabled because of lax regulatory enforcement and the thousands of asbestos workers who will die from job-related cancers. The late 1980s and the 1990s marked a substantial increase in activity in the job safety regulation area. For the past two decades, OSHA has been less active in issuing new regulations than agencies such as the Environmental Protection Agency and the Department of Transportation. While increased emphasis has been placed on developing regulations for toxic substances that cause cancer, the pace of rulemaking has remained slow. The standard for respirable crystalline silica that was promulgated in 2016 took about twenty years to develop. Job health and safety remains an area where society is still striving to devise workable and effective regulatory mechanisms. This chapter focuses on a general assessment of the effort to promote worker health: why we have such policies, how the initial effort failed, whether there has been any improvement in this effort, and how these policies can be reformed. Throughout its history, OSHA has been the subject of a variety of proposed reform efforts.2 That OSHA has remained a chief target of proposed regulatory reforms suggests that fundamental changes may enhance the effectiveness of the agency. Several presidential administrations have promised an overhaul of OSHA policies. The Carter administration sought to provide this risk regulation effort with greater legitimacy by eliminating some of the more frivolous standards and by enforcing the sounder portions of OSHA regulations more vigorously. Under the Reagan and Bush administrations, the attention shifted to decreasing OSHA’s confrontational character so as to foster a cooperative business-government approach to promoting workplace safety. The Clinton administration initiated a long-overdue increase in the scale of regulatory penalties, coupled with a substantial expansion in the scale of regulation. In 1994, for example, OSHA proposed indoor air-quality standards that would affect matters such as workplace smoking and, by OSHA’s estimates, would cost $8 billion per year. In part because unions objected to these smoking restrictions, the regulation was never issued. Under the George W. Bush administration, the focus was on a less confrontational approach, emphasizing outreach and cooperative programs. The Obama administration improved recordkeeping for injuries, bolstered enforcement of the standards, and increased the emphasis on regulating health hazards. Although these efforts rectified many of the more extreme deficiencies of OSHA’s initial strategy, calls for reform continue. 
Regulation of workplace conditions is a legitimate role for the government, but, as with other regulatory policies, the need continues for balance among competing objectives. In this case, the principal tradeoff is between the costs imposed by regulations and the health and safety benefits they provide. Recognizing such tradeoffs need not paralyze new regulatory efforts, but it should promote the development of regulations that take their competing effects into account. There is also a need to enforce the regulations that are promulgated in a manner that will foster effective incentives for compliance.

Potential for Inefficiencies

There also may be more fundamental shortcomings whereby government policy fails to achieve as much safety improvement as is possible for the costs imposed. In more technical terms, the difficulty may be that we are not on the frontier of efficient policies (that is, those policies that provide the greatest safety for any given cost), as opposed to simply making the wrong tradeoff along such a frontier. Consider the set of feasible risk-cost combinations in figure 23.1. All points on the frontier ABC or to the right of it are potentially achievable. Ideally, the policy debate should be about where along the policy frontier ABC our policy choice should lie. Some finite rate of tradeoff is required, as complete safety is prohibitively costly in the case shown. Unfortunately, the danger with ill-conceived policies is not that we are setting the tradeoff rate incorrectly, but rather that we are wasting resources. If OSHA policies are now at point D, we could have greater safety at the same cost at point C or the same safety at less cost at point B.

Figure 23.1 The Policy Frontier for Job Safety
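A minimal sketch of the dominance logic behind figure 23.1, using invented (cost, risk) pairs for policies A through D, illustrates why a point such as D is undesirable regardless of where one wants to sit on the frontier:

    # Hypothetical (cost, risk) combinations; lower is better on both dimensions.
    policies = {"A": (10, 9.0), "B": (30, 4.0), "C": (60, 1.0), "D": (60, 4.0)}

    def dominated(name):
        """A policy is dominated if some other policy is at least as good on both
        dimensions and strictly better on at least one."""
        c, r = policies[name]
        return any(
            (c2 <= c and r2 <= r) and (c2 < c or r2 < r)
            for other, (c2, r2) in policies.items()
            if other != name
        )

    for name in policies:
        status = "dominated" if dominated(name) else "on the efficient frontier"
        print(f"Policy {name}: {status}")

In this invented menu, policy D is dominated by B (same risk at lower cost) and by C (same cost at lower risk), which is exactly the sense in which point D in figure 23.1 wastes resources.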

Often the debate over ill-conceived regulations is miscast as one of values—where along the frontier should we be?—whereas the more fundamental problem is that a better policy could be designed regardless of one’s disposition toward regulation. Many of the most widely publicized standards initially promulgated by OSHA fall in the category of policies dominated by less costly and more effective alternatives. As in the case of other regulatory reforms, proper application of fundamental economic principles will illuminate the nature of the policy changes required.

How Markets Can Promote Safety

Before instituting a government regulation, it is instructive to assess how the market functions. Basically, one should inquire whether the market forces operate with any inadequacy. Although individual life and health are clearly valuable attributes, many other market outcomes are also valued by consumers and workers but are not regulated by government. Because markets that operate well will allocate resources efficiently, there should be some perceived inadequacy in the way these forces function before one interferes with their operation. To ensure that market outcomes will be efficient, some stringent conditions must be met. For example, the outcome of any employment decision must affect the worker and employer only, not society at large, because these broader concerns will not be reflected in the job choice. A particularly pertinent requirement is that the job choice must be consistent with basic aspects of economic rationality. Individuals must be cognizant of the risks they face and be able to make sound decisions under uncertainty. As we will discuss, these assumptions are especially likely to be violated for many important classes of risks.

Even if the consensus is that market outcomes are not optimal, it is essential to ascertain the extent of the market failure. It is important to understand whether the operation of the market is fundamentally flawed or whether there is a narrower market failure, such as an informational shortcoming, that can be remedied through the transfer of information rather than direct control of workplace conditions. Finally, the market mechanisms will be pertinent insofar as they establish the context in which the government regulation operates. Regulations do not dictate health and safety outcomes, because regulators cannot monitor and influence the health and safety attributes of all firms. Instead, these policies simply create incentives for firms and workers to take particular actions, such as installing new ventilation equipment. Whether regulations have any impact on safety will hinge on the strength of the incentives created by the policy and the safety incentives that the market generates for firms.

Compensating Wage Differential Theory

The fundamental economic approach to worker safety was sketched by Adam Smith more than two centuries ago.3 Smith observed that workers will demand a compensating wage differential for jobs that are perceived as being risky or otherwise unpleasant. This theory was the basis for the labor market estimates of the value of a statistical life discussed in chapter 20. The two critical assumptions are that workers must be aware of the risk (which may not always be the case) and that they would rather be healthy than not (which is not a controversial assumption). Attitudes toward health will, however, differ. Smokers and those who do not wear seat belts are much more willing to work on hazardous jobs for less pay per unit risk than their safety-loving contemporaries, such as nonsmoking seat belt users.4 These differentials in turn will establish an incentive for firms to promote safety, since doing so will lower their wage bill. These wage savings are augmented by reduced turnover costs and lower workers’ compensation premiums, both of which also provide incentives for safety improvements by the firm. In effect, it is primarily the risk-dollar tradeoffs of the workers themselves that will determine the safety decision made by the firm.
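A back-of-the-envelope calculation shows how compensating differentials translate into an implied value of a statistical life. The wage premium and risk increment below are assumed round numbers chosen for illustration; chapter 20 discusses the actual econometric estimates.

    # Illustrative (assumed) numbers: a job carries an extra annual fatality risk of
    # 1 in 10,000, and workers demand an extra $960 per year to accept that risk.
    extra_annual_fatality_risk = 1 / 10_000
    annual_wage_premium = 960.0

    # Among 10,000 such workers, one additional statistical death is expected per year,
    # and together they are paid 10,000 * $960 in risk premiums.
    value_of_statistical_life = annual_wage_premium / extra_annual_fatality_risk
    print(f"Implied value of a statistical life: ${value_of_statistical_life:,.0f}")  # $9,600,000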
Figure 23.2 illustrates how these forces influence the level of safety provided. Suppose that the health outcome involved is reduction of job-related accidents and that improvements in safety have diminishing incremental value to workers, just as additional units of other types of “economic goods” have diminishing importance. Consequently, the marginal value of safety curve in figure 23.2 is downward sloping because the initial increments in safety have the greatest value. Workers’ marginal value of safety is transmitted to firms through the wage rate that the firm must pay to attract and retain workers. The firm can provide greater levels of safety, but doing so entails additional marginal (or incremental) costs that increase as the level of safety increases. Some initial safety improvements can be achieved inexpensively through, for example, modification of existing machines or work practices. The addition of exhaust fans is one such measure for airborne risks. More extensive improvements could require an overhaul of the firm’s technology, which would be more expensive. The marginal cost curve consequently is increasing rather than flat because safety equipment differs in its relative efficacy, and the firm will choose to install the most effective equipment per unit cost first.

Figure 23.2 Determination of Market Levels of Safety

The price of safety set by worker preferences will determine where along this marginal cost curve the firm will stop. The optimal level of safety from the standpoint of the market will be s*. The shaded area under the marginal cost curve will be the total safety-related expenditure by the firm. This level is short of the no-risk level of safety. At the level of safety provided, workers would be willing to pay $v per expected accident to avoid such accidents. Additional safety beyond this point is not provided because the cost to the firm for each extra accident avoided exceeds workers’ valuation of the improvement.
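The determination of s* can be illustrated with simple linear schedules. The particular marginal value and marginal cost functions below are assumptions chosen only to mimic the qualitative shapes in figure 23.2 (declining marginal value, rising marginal cost); they are not estimates.

    # Assumed linear schedules, with safety s measured on a 0-1 scale:
    #   marginal value of safety to workers:  MV(s) = 100 - 80*s   (declining)
    #   marginal cost of providing safety:    MC(s) = 10 + 100*s   (rising)
    # The market level of safety s* equates the two: 100 - 80*s = 10 + 100*s.
    a, b = 100.0, 80.0   # MV intercept and slope (assumed)
    c, d = 10.0, 100.0   # MC intercept and slope (assumed)

    s_star = (a - c) / (b + d)                          # level of safety where MV = MC
    v = a - b * s_star                                  # marginal valuation at s*, the "$v" in figure 23.2
    total_spending = c * s_star + 0.5 * d * s_star**2   # area under MC up to s*

    print(f"Market level of safety s*: {s_star:.2f}")
    print(f"Marginal valuation at s* (v): {v:.1f}")
    print(f"Total safety expenditure (area under MC): {total_spending:.1f}")

With these assumed schedules, s* = 0.50 and the marginal valuation at that point is 60; pushing safety beyond 0.50 would cost the firm more per accident avoided than workers value the improvement.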

The level of health and safety selected will not be a no-risk level, because promoting safety is costly. Almost all our daily activities pose some risk because of the costs involved in reducing the hazards. Consumers, for example, routinely sacrifice greater crashworthiness when they select more compact automobiles in an effort to obtain greater fuel efficiency, because the typical small car is less crashworthy than the average full-sized car. Moreover, the order of magnitude of the risks we regulate is not too dissimilar from what we encounter in other activities. As the data in table 19.3 indicate, the accident risk posed by one day of work in a coal mine (a relatively hazardous pursuit) is comparable in size to the risk of smoking 3.7 cigarettes, riding 27 miles by bicycle, eating 108 tablespoons of peanut butter, or traveling 405 miles by car.5 The total number of occupational deaths in 2015 was 4,836, or about the same as the number of unintentional deaths from choking, a bit higher than the annual number of drowning deaths, and about one-eighth of the annual number of motor vehicle deaths.6 Individuals trade off these and other risks against other valued attributes, such as the recreational value of cycling.

Risk Information

The first link in the compensating differential analysis is that workers must be aware of the risks they face. If there is no perception of the risks, workers will demand no additional compensation to work on a hazardous job. The available evidence suggests that workers have some general awareness of many of the risks they face. In data from the University of Michigan Survey of Working Conditions, a strong correlation exists between the risk level in the industry and whether workers perceive their jobs as being dangerous in some respect.7 But this evidence is by no means conclusive, because the risk assessment question ascertained only that workers were aware of the presence of some risk, not the degree of risk posed by the job.

A more refined test can be provided using data from a survey of workers at four chemical plants.8 In that study, workers were asked to assess the risks of their jobs using a continuous scale that could be compared with published accident measures. Overall, workers believed that their jobs were almost twice as hazardous as indicated by the published accident statistics for the chemical industry. Such a difference is expected in view of the degree to which health hazards, such as cancer, are not reflected in the accident data. Particularly noteworthy was that after the health hazards were excluded from consideration, the risk assessments equaled the accident rate for the chemical industry.9

These studies should be regarded as evidence of some reasonable perception of job risks by workers. They do not, however, imply that workers are perfectly informed. It is unlikely that workers have completely accurate perceptions of all the risks posed by their jobs. These risks are not fully known, even by occupational health and safety experts. The extent of errors in risk assessments will not, however, be uniform across all classes of risk. As a rough generalization, one would expect safety risks (external hazards, such as inadequate machine guards) to be better understood than health risks (internal risks, such as excessive exposure to radiation). Safety hazards tend to be more readily visible and familiar risks, such as the chance of a worker in a sawmill losing a finger or the possibility of falling down a set of rickety stairs.
In contrast, health hazards usually are less well understood. These risks often involve low-probability events about which little is known. Such risks may affect the individual decades after exposure, so that learning by observation and experience is infeasible. These difficulties are compounded in some instances by the absence of any clear-cut signals that a health risk is present. The odor and color of gases emitted in the workplace, for example, are not a reliable index of their potential carcinogenicity.

In situations where workers are aware of the hazard, the riskier jobs should be expected to command a wage premium. The estimates of the value of a statistical life in chapter 20 indicate that these compensation levels are quite substantial, on the order of $9.6 million for each workplace fatality. Similar compensating differential values exist for the implicit value of nonfatal injuries and illnesses, and these estimates are in the general vicinity of $70,000 for each workday lost to illness or injury. Annual total wage premiums for job risk in the U.S. private sector are $46 billion for the 4,821 fatalities and $64 billion for the 916,400 lost-workday injuries and illnesses, for a total wage premium of $110 billion. This amount, which is in addition to the costs of workers’ compensation, represents the potential wage savings firms could achieve by making their workplaces safer. These figures reflect workers’ risk-dollar tradeoffs given their current information about the risk, not what they would be if workers had full information about the risk. In addition, the calculations assume rational decision making, whereas in practice, workers may overreact to risks or may neglect to take them into consideration. Although market behavior may not be ideal, the considerable magnitude of compensation per unit risk does suggest widespread awareness of risks and their implications.

The value-of-statistical-life results are bolstered by analogous findings for nonfatal job injuries. These studies suggest substantial compensation for job risks, when viewed both in terms of the total wage bill (6 percent of manufacturing workers’ wages) and the rate of compensation per unit risk. The level of compensation varies by industry, with low levels for industries such as finance and apparel, and higher values for dangerous industries (such as lumber).

On-the-Job Experience and Worker Quit Rates

The presence of possibly inadequate worker knowledge concerning the risks remains a potential impediment to the full operation of the compensating differential mechanism. The result will not be that market mechanisms cease to work altogether, although some decreased efficacy will undoubtedly occur. Instead, new market forces may become influential.10 Consider a situation in which a worker starts a job without full knowledge of the potential risks. After being assigned to the position, he or she will be able to observe the nature of the job operations, the surrounding physical conditions, and the actions of co-workers. Similarly, during a period of work on the job, the worker learns about some particular difficulties in carrying out the job tasks, and even more directly, he or she observes whether co-workers are (or have been) injured. The worker can then use these experiences to evaluate the risk potential of the job. If the worker’s risk perceptions become sufficiently unfavorable, given the wage paid, he or she can quit and move to another firm.

Overall, job risks account for one-third of all manufacturing quit rates. Similarly, the periods of time that workers spend at hazardous firms before leaving are shorter than for safe firms. As a consequence, there will always tend to be more inexperienced workers in high-risk jobs, because the high turnover rates in these positions lead to frequent replacements.
The standard observation that younger and more inexperienced workers are more likely to be involved in accidents is not entirely attributable to greater riskiness of this demographic group. The causality may run in the opposite direction—new hires are more likely to be placed in the high-risk, high-turnover jobs. The firm also has a strong incentive to avoid placing its most experienced workers in these positions, because it will lose its training investment if the worker is injured or quits. Older workers are, however, more vulnerable to incurring injuries if involved in a job accident.

All labor market responses by workers are simply variations of the compensating differential theme. If the job appears to be risky initially, the worker will require extra compensation to begin work on it. Similarly, once he or she acquires information about the risks that are present, the worker will reassess the job’s attractiveness and remain with it only if the compensating differential is sufficient.

Inadequacies in the Market

If market operations were fully efficient and insurance coverage were adequate, there would be no need for government regulation of health and safety. The decentralized operation of the market would be sufficient to ensure appropriate levels of risk. Two broad classes of shortcomings limit the efficacy of market outcomes: (1) informational inadequacies and problems with individual decisions under uncertainty, and (2) externalities.

Informational Problems and Irrationalities

For the compensating differential model to be fully applicable, workers must be cognizant of the risks they face and be able to make sound decisions based on this knowledge. The available evidence suggests that, in many contexts, workers have risk perceptions that appear plausible, but these studies in no way imply that all workers are fully informed. The general consensus is that many health risks in particular are not well understood, and, indeed, workers may be completely ignorant of some of the risks they face. With on-the-job experience, workers undoubtedly will revise their perceptions of many risks. Once again, safety hazards are more likely to be treated in a reliable manner because they tend to be readily visible and to occur with much greater frequency than many health risks, which often are low-probability events. Thus the worker has fewer observable incidents of adverse health outcomes to use in forming his or her risk assessment. The long time lags involved in many health risks further impede efforts to learn about the implications of these risks through experience. A worker may get cancer two decades after job exposure to a carcinogen, but tracing the cause to the job usually is not feasible. As a rough generalization, there is probably reasonable but not perfectly accurate perception of many safety risks and much less reliable assessment of the pertinent health risks.

Even with accurate perceptions of the risk, however, one cannot be confident that the decisions ultimately made by workers will be ideal. As the discussion in chapter 24 indicates, decisions under uncertainty are known to pose considerably more difficulties than decisions made in cases where the outcomes of alternative actions are known in advance. These difficulties are likely to be particularly great in situations involving very low-probability events that have severe outcomes after a substantial lag. The low probabilities and substantial lags make these decisions difficult to conceptualize. How averse, for example, is a worker to taking a one in 20,000 risk of cancer twenty-five years from now? Because of the high stakes—possibly including the worker’s life—the cost of mistaken choices will be high. Once again, it is likely that health hazards pose relatively greater demands on individual rationality than safety risks.

The final class of shortcomings in individual behavior relates to the degree to which workers can choose from a variety of alternative risk-wage combinations. In the relatively mobile, modern U.S. economy, there seems to be a substantial range of job options for almost all workers.
Certainly, the classic textbook discussions of the one-company town no longer seem relevant and, even if they were accurate, would not have as great an impact in an era of interstate highways and substantial worker mobility. This mobility may be restricted during cyclical downturns, when job opportunities are less plentiful, but because accidents move procyclically, the net influence of adverse economic conditions is not clear.

Perhaps the most important constraint on individual mobility relates to the character of the employment relationship. Once on the job, individuals acquire skills specific to the particular firm, as well as seniority rights and pension benefits that typically are not fully transferable. If workers had full knowledge of the risk before accepting the position, these impediments to mobility would not be consequential. The basic difficulty, however, is that workers may not have been fully cognizant of the implications of the position and may subsequently become trapped in an unattractive job situation. Available evidence for chemical workers suggests that the extent of serious job mismatches of this type is not high.

Segmented Markets and the Failure of Compensating Differentials

The standard economic model of compensating differentials for risk assumes that similarly situated workers have access to the same set of job opportunities. But that may not always be the case if there are systematic differences in job opportunities based on personal characteristics, such as race and ethnicity. Hersch and Viscusi examined the treatment of immigrant workers and found profound differences in terms of whether the notion of a well-functioning market process was borne out.11 A comparison of the labor market outcomes for native U.S. workers and legal Mexican immigrants highlights the potential gaps that exist in the market. The popular conception that immigrants often work in the more dangerous occupations is accurate. Mexican immigrant workers face an annual fatality risk of 5.97 per 100,000 workers, which is 37 percent greater than the fatality risk incurred by native U.S. workers. If the underlying compensating differential theory is correct, Mexican immigrants should receive greater compensation for these additional hazards. But their compensation levels, and the implied value of a statistical life based on this compensation, are lower than for native U.S. workers. Indeed, in the case of Mexican immigrants who are not fluent in English, there is no evidence of any wage compensation for risk.

Figure 23.3 illustrates how such a situation could prevail. Native U.S. workers face a market opportunities locus given by w(p). The counterpart set of opportunities for Mexican immigrants is given by wi(p). Mexican immigrants choose jobs from their frontier with risks p2, compared to the risk level of p1 for native U.S. workers, where p2 > p1. The extra wage compensation that native U.S. workers receive for their risks is w(p1) − w(0), while the wage premium received by Mexican immigrants is wi(p2) − wi(0). In this diagram, as in the U.S. economy, Mexican immigrants face greater risk for less additional risk compensation.

Figure 23.3 Market Offer Curves for Immigrants and Native U.S. Workers
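A quick numerical check of the fatality-risk comparison above, together with a stylized version of the offer curves in figure 23.3, is sketched below. Only the two risk figures come from the text; the wage levels and risk gradients are hypothetical.

    # Fatality risks from the text: Mexican immigrants face 5.97 deaths per 100,000 workers,
    # reported as 37 percent above the risk faced by native U.S. workers.
    immigrant_risk = 5.97e-5
    native_risk = immigrant_risk / 1.37
    print(f"Implied native-worker fatality risk: {native_risk * 1e5:.2f} per 100,000")  # about 4.36

    # Stylized offer curves (hypothetical intercepts and slopes) echoing figure 23.3:
    # w(p) for natives rises steeply in risk, while w_i(p) for immigrants is much flatter,
    # so immigrants earn a smaller premium even though they bear more risk.
    def w_native(p):      # assumed gradient of $9.6 million per unit of annual fatality risk
        return 40_000 + 9_600_000 * p

    def w_immigrant(p):   # assumed much flatter risk gradient
        return 25_000 + 1_000_000 * p

    premium_native = w_native(native_risk) - w_native(0.0)
    premium_immigrant = w_immigrant(immigrant_risk) - w_immigrant(0.0)
    print(f"Native wage premium for risk:    ${premium_native:,.0f}")
    print(f"Immigrant wage premium for risk: ${premium_immigrant:,.0f}")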

Addressing such gaps in market performance is especially important for those Mexican immigrants who are not fluent in English. Government regulators recognize that the challenges posed by job hazards may be particularly acute for workers for whom English is not their native language: such workers tend to be sorted into different kinds of jobs, and acquiring the necessary safety-related training may be more difficult. OSHA has long acknowledged these special safety challenges.

Externalities

An additional class of market inadequacies arises even if individual decisions are fully rational and ideal in all respects. Parties outside the market transaction for the job may have a stake in the risky job because of a broader altruistic concern with individual health. This type of health-related altruism is probably of greater consequence than redistributional concerns in this context. Life and health are clearly quite special, as society has undertaken a variety of health-enhancing efforts, such as Medicare and the Affordable Care Act, to promote individual well-being. The overall importance of these altruistic interests has not yet been ascertained, however. The evidence summarized in chapter 20 was exploratory in nature. In contrast, individuals’ own values of life and health are considerable, and it is not obvious that the external interests of society would boost these values substantially.

Whether society’s broader altruistic concerns are of great consequence in this area is an open empirical issue that merits further attention. Moreover, there is the ethical issue of whether a legitimate altruistic concern exists or whether expressions of concern for others’ well-being are simply an attempt by more affluent citizens to impose their own risk-dollar tradeoffs on others. High-income, white-collar workers may view most blue-collar jobs as unattractive, but this opinion does not mean that social welfare will be enhanced by preventing anyone from working on this class of jobs. Until these questions can be resolved, the primary impetus for regulation of occupational hazards probably should be the shortcomings of worker decisions.

OSHA’s Regulatory Approach

The general approach OSHA has taken to regulating job safety is dictated at least in part by the Occupational Safety and Health Act of 1970. This legislation authorizes OSHA to set standards and to do so in a manner that will ensure worker health and safety. OSHA’s enabling legislation did not, however, specify what these standards should be, what general character they should take, or how stringent they should be. In addition, the legislation did not specify the nature of the enforcement of the standards. OSHA could, for example, couple standards with a penalty for noncompliant firms, where the penalty is set at a level that gives firms some discretion as to whether compliance is desirable. The penalty could be an injury tax related to the health impacts on workers, and the firm would comply with the standard only if the health benefits exceeded the costs to the firm. (The frequency of OSHA inspections could also influence the penalty.) In actuality, OSHA imposes an ever-escalating series of penalties on noncompliant firms; thus the standards can be viewed as rigid guidelines. Because of this binding character, the level and nature of the standards are of major consequence to firms regulated by OSHA.

Setting OSHA Standard Levels

One could characterize OSHA’s general approach as that of adopting technology-based standards whose stringency is limited only by their affordability. Cost considerations enter only insofar as OSHA is concerned with shutting down affected firms. To see how OSHA’s strategy differs from a standard benefit-cost approach, consider figure 23.4. For simplicity, suppose that the marginal safety-benefit curve is flat, so that there is a constant unit benefit value. The marginal cost of providing safety is rising, as it becomes increasingly expensive to promote safety.

Figure 23.4 OSHA Standard Setting versus Efficient Standard Setting
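The contrast between s1 and s2 discussed below can be put in numbers with a small sketch. The flat marginal benefit and the kinked marginal cost schedule are assumed purely for illustration:

    # Assumed schedules on a 0-1 safety scale.
    MB = 30.0  # flat marginal benefit per unit of safety (assumed)

    def marginal_cost(s):
        """Assumed marginal cost: rises gently up to s = 0.8, then turns sharply upward."""
        return 10.0 + 40.0 * s if s <= 0.8 else 42.0 + 400.0 * (s - 0.8)

    # Efficient standard s1: push safety only while the marginal benefit covers marginal cost.
    # Affordability-based standard s2: push safety to the kink, where added safety becomes
    # prohibitively expensive, regardless of how benefits compare with costs.
    grid = [i / 1000 for i in range(1001)]
    s1 = max(s for s in grid if marginal_cost(s) <= MB)
    s2 = 0.8  # the kink point in this assumed cost schedule

    print(f"Efficient standard s1 (MB = MC): {s1:.2f}")
    print(f"Affordability-based standard s2 (kink): {s2:.2f}")

With these assumed numbers the efficient standard stops at 0.50 while the kink-based rule pushes on to 0.80, spending more per unit of added safety than the safety is worth.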

OSHA’s strategy is to look for the kink in the marginal cost curve—at what point does added safety become prohibitively expensive? In figure 23.4, that point is at s2, whereas the efficient level of safety is at s1. The strategy advocated by most economists is that the agency should pursue a more balanced approach that recognizes the necessity of taking into account both the costs and the risk-reduction benefits in a comprehensive manner. What matters is the relationship between marginal benefits and marginal costs, not whether costs happen to jump at a particular point. Costs should always be a matter of concern, not only when a firm may go out of business as a result of OSHA policies. Such a shift in emphasis need not always lead to more lenient regulations. Some very hazardous firms probably should go out of business if provisions for efficient levels of safety and health will not permit them to earn a profit.

Much of the policy-oriented debate over the safety standards has concerned their stringency. Those advocating a more balanced approach note that the Occupational Safety and Health Act does not require a risk-free workplace, only one that promotes safety “as far as possible.”12 This and other qualifiers in the act suggest that OSHA might have some leeway in being able to take costs into consideration. This view was bolstered somewhat by the U.S. Supreme Court’s decision in the 1980 benzene case, in which it overturned the standard because OSHA had not shown that the reduction in risks would be “significant.”13

This “significant risk” criterion imposes a threshold benefit level, but it does not impose a requirement that OSHA balance benefits and costs. Indeed, such benefit-cost tests were explicitly ruled out in the 1981 U.S. Supreme Court decision regarding the OSHA cotton dust standard.14 The Court upheld the OSHA cotton dust standard and interpreted the feasibility provisions of the Occupational Safety and Health Act as meaning “capable of being done.” It is the technical possibility of compliance rather than benefit-cost tradeoffs that should guide OSHA decisions. In fact, however, in this instance OSHA had based its cotton dust standards on cost-effectiveness concerns, not simply on affordability. Specifically, the standard is varied across different stages of processing because of differences in the severity of the risk in these areas and differences in the cost of reducing the risk. Further reductions in the risk were clearly “capable of being done,” and in fact many firms have already achieved cotton dust levels well below those specified in the standard.15

Clearly, technological feasibility cannot be divorced from cost considerations, since almost any risk can be reduced at sufficiently large cost. Drivers, for example, would face a lower risk of injury in an auto accident if everyone drove full-sized cars at speeds under thirty-five miles per hour. Such measures have not been undertaken because the safety benefits do not justify the increased travel time and loss in fuel efficiency. Likewise, OSHA varied the cotton dust standard because the severity of cotton dust exposures differs according to the stage of processing (different types of fibers and dust are airborne at different stages) and because compliance costs differ. Indeed, since the Reagan administration, OSHA has routinely calculated the costs and benefits of its proposed regulations. The agency does not, however, explicitly compare these magnitudes when discussing the reasons for its policy recommendations. Inevitably, some comparisons of this type are made by OSHA, the Office of Management and Budget, and other players in the regulatory process. Balanced policies would be more likely if the Supreme Court reversed its narrow and unrealistic interpretation of OSHA’s mandate or if Congress amended OSHA’s legislation. In the absence of such a change, primary emphasis will continue to be placed on the level of risk reduction rather than the associated costs. Regulations sometimes may impose costs that appear to be well out of line with any reasonable values, such as almost $70 million per expected life saved by the OSHA arsenic standards.

The Nature of OSHA Standards

The structure of OSHA’s regulatory approach also has been overly restrictive, as the agency has adopted a narrow technology-based approach to safety regulation. Ideally, OSHA should permit firms to achieve any given level of safety in the least expensive manner possible, consistent with having well-defined regulations that are enforceable. Instead, OSHA has typically adopted uniform standards that attempt to prescribe the design of the workplace. This orientation derives in part from the pattern set in OSHA’s initial standard-setting activity. Shortly after beginning operations, OSHA issued more than 4,000 general industry standards for health and safety, the preponderance of which were safety related.
These standards, which continue to constitute most of OSHA’s safety policies, were derived from the national consensus standards of the American National Standards Institute, the National Fire Protection Association, and some existing federal standards for maritime safety. In this process, OSHA converted a set of discretionary guidelines into a mandatory prescription for workplace design.

The upshot of this effort was to establish OSHA as a leading object of ridicule for its portable toilets for cowboys and other seemingly trivial standards. Perhaps more significant than these well-publicized OSHA horror stories was the specification character of the regulations. The OSHA handrail regulation specifies the required height (30 to 34 inches), spacing of posts (not to exceed 8 feet), thickness (at least 2 inches for hardwood and 1.5 inches for metal pipe), and clearance with respect to the wall or any other object (at least 3 inches).16 Likewise, in its requirements for band guards for abrasive wheels, OSHA specifies the required thickness, the minimum diameter of rivets, and the maximum distance between the centers of rivets.17 In each case, the specification standard approach may have imposed greater costs than equally effective alternatives.

To provide guidelines for how such flexibility could be achieved, President Ford’s Task Force on OSHA, headed by economist Paul MacAvoy, designed a model standard for machinery and machine guarding that indicated, for example, several ways to guard a punch press.18 This flexibility also may enhance the safety that could be achieved through a performance-oriented approach. A performance-oriented approach would stress the need for firms to achieve a particular health and safety level through whatever means they chose rather than be required to install a particular type of technology. The OSHA specification standards are so narrowly defined that they pertain to only 15 percent of all machines.19 This model standard was never adopted, but it provides an operational example of how OSHA could achieve greater flexibility in its regulatory approach without jeopardizing worker safety.

It is also noteworthy that the standards remain focused on safety. Externally visible aspects of the workplace, such as handrail width, are given comprehensive and meticulous treatment. In contrast, only a small fraction of the carcinogens in the workplace have been addressed by OSHA standards. According to the tally by the AFL-CIO, in its history OSHA has issued standards for only thirty toxic substances.20 Some OSHA health-related standards pertain to risks other than toxic substances, such as those for radiation exposure, but for the most part, the standards have been dominated by safety concerns. In view of the earlier discussion of market inadequacies, this emphasis seems misplaced. Health risks rather than safety risks are handled least effectively by the market. The greatest potential gains from OSHA regulation are likely to come from addressing the dimly understood health risks that pose the most severe difficulties for worker decision making.

Moreover, the structure of the health standards is also more likely to be conducive to effective promotion of worker health. The health standards typically limit worker exposure rather than specifying particular technologies. For example, the cotton dust standard specifies permissible exposure limits for airborne concentrations of respirable cotton dust in different stages of processing, and it indicates the circumstances under which protective equipment must be worn. Respirators are needed during cleaning operations because of unusually high levels of cotton dust in that period. The standard does not specify how the lower levels of cotton dust are to be achieved, whether through use of exhaust fans, new machines for drawing and carding the cotton, or some other approach.
While health risks may present greater opportunities for productive regulatory interventions, health regulations in some instances have been less cost effective in saving lives because the agency has set very low permissible exposure levels. The following comparison is not atypical. A 1997 OSHA standard for methylene chloride exposures had a cost of $17 million per expected life saved, while the 1996 OSHA safety standards for scaffolds had a cost per expected life saved of $264,000.21 While health hazards present potentially productive opportunities for regulation, if the standards are set too stringently, the potential net gains will not be realized.

Reform of OSHA Standards

Proposals for reforming OSHA standards have focused on three dimensions. The first recommendation is that the emphasis should shift from safety to health. Second, firms should be given more opportunities to find less expensive techniques for promoting safety. Standards should consequently be more performance-oriented when that approach is feasible. Finally, the level of the standards should be set in a more balanced fashion that attempts to recognize the health benefits to workers and the costs to firms.

Regulatory Reform Initiatives

Compared with its initial activity, OSHA's standard setting has been relatively modest. During the Carter administration, much new regulation was stymied by the uncertainties caused by the court challenges of OSHA's legislative mandate in the cotton dust and benzene cases. The Reagan administration's emphasis was on slowing the pace of new regulation rather than on changing its character, so that OSHA was less active than in its earlier years. The Clinton, the George W. Bush, and the Obama administrations did not return OSHA to its formerly vigorous pace of regulatory initiatives. Nevertheless, OSHA has not been completely dormant in the standards area.

Changes in OSHA Standards

The chief legacy of the Carter administration in the area of regulatory reform was its overhaul of the safety standards. The primary emphasis was not on a general restructuring of the standards approach but on eliminating those portions of the standards that were most extraneous and ill conceived. This emphasis was quite appropriate, in view of the importance of establishing the agency's credibility. Assistant Secretary of Labor for Occupational Safety and Health Eula Bingham eliminated or modified 928 OSHA regulations in October 1978. In many cases, these changes were only editorial and had no major substantive impact. Nevertheless, the net effect of the elimination of the "nit-picking" features of OSHA regulation was to mute some of the harsher criticisms of the agency's regulatory approach. Because of the magnitude of OSHA's initial credibility problem, the importance of even cosmetic changes in the standards should not be underestimated.

Chemical Labeling

The most important structural change in regulatory policy was OSHA's chemical labeling regulation, which was proposed at the end of the Carter administration and finalized during the Reagan administration.22 By providing workers with information, this regulation represented an effort to use market forces to promote safety. The chief forms of information provision required were labels on the chemicals, material safety data sheets on the nature of chemicals used in the workplace, and a program for training workers in the handling of chemicals. This regulation addresses the primary source of market failure directly and, as a consequence, preserves the constructive aspects of the health-related decisions by firms and workers. In addition, the focus of the regulation is strongly oriented toward health hazards rather than safety risks. Indeed, much of the impetus for this regulation came from the inability of direct regulatory controls to address the entire range of chemical hazards. Setting standards for all of the thousands of carcinogens in the workplace was viewed as infeasible. In addition to addressing long-term health impacts and acute health effects (for example, skin rashes from chemical exposures), the regulation also addresses accidents from fires and explosions.
These safety hazards also are likely to merit greater attention than more visible workplace characteristics, since the safety-related properties of chemicals will not be well understood in the absence of some information about risk. Providing risk information in a standardized form is also desirable from the standpoint of information processing. To assist in establishing a uniform warnings approach that would reduce trade barriers and facilitate the understanding of warnings for products and chemicals from different countries, the United Nations adopted the Globally Harmonized System of Classification and Labeling of Chemicals. Under the Obama administration, OSHA incorporated this system in the OSHA hazard communication regulation.

Economic Role of Hazard Warnings

The attractiveness of hazard warnings from an economic perspective is that they work in conjunction with market forces by directly eliminating the informational market failure. Moreover, in many instances, altered worker actions may be a more efficient means of promoting safety than technological changes in workplace conditions. The manner in which hazard warnings exert their influence is reflected in the data in table 23.1, which are based on reactions of workers in four major chemical plants to different hazard warnings. Each worker was shown a hazard warning for a particular chemical and was told that the chemical would replace the chemical with which the worker currently worked. In each case, the worker was shown a single warning, where the four different chemical labels used were for sodium bicarbonate (household baking soda), chloroacetophenone (an industrial chemical that is an eye irritant), TNT (a well-known explosive), and asbestos (a leading occupational carcinogen). Workers were then asked a series of questions regarding their attitudes toward the job after it had been transformed in this manner.

Table 23.1
Workers' Response to Chemical Labeling

Worker Response                                                  Sodium Bicarbonate   Chloroacetophenone     TNT    Asbestos
Change in fraction who consider job above average in risk (%)                  −35                    45      63          58
Annual wage increase demanded ($)                                                 0                 4,700   7,400      12,800
Change in fraction very likely or somewhat likely to quit (%)                  −23                    13      52          63

Source: W. Kip Viscusi and Charles J. O'Connor, "Adaptive Responses to Chemical Labeling: Are Workers Bayesian Decision Makers?" American Economic Review 74 (December 1984): 949. Reprinted by permission.
Note: All dollar figures have been converted to 2015 dollars.

The first row of statistics in table 23.1 gives the change in the fraction of workers in the sample who viewed their jobs as being above average in risk after being given the hazard warning information. In the case of sodium bicarbonate, the fraction of the workers who viewed their job as above average in riskiness dropped by 35 percent, so that on balance the sample viewed these jobs as relatively safe. In contrast, for the remaining three chemicals, there was a substantial increase in the fraction who believed that their jobs were risky, particularly in the case of the most severe hazards posed, asbestos and TNT. If the market operates efficiently, these risk perceptions in turn should lead to additional wage compensation for the jobs. Workers who were shown the sodium bicarbonate label did not require any additional wage compensation to work on the job, whereas workers shown the other three chemicals required amounts ranging from $4,700 to $12,800 per year in order to remain on the job. The final market mechanism discussed earlier is that if workers are not compensated sufficiently, they will quit. For this sample, if there were no change in the wage rate after the introduction of the hazard warning, quit rates would decline by 23 percent for sodium bicarbonate and would rise by up to 63 percent for workers who are exposed to asbestos.

In terms of creating incentives for safety, hazard warnings serve to augment market forces by informing workers of the risks that they face. In addition, other studies of individual precautions indicate that hazard warnings are also likely to lead to increased precautions, as individuals become better informed of the risks they face as well as the precautions needed to reduce these risks.

Effective Hazard Warnings

It should be emphasized that for these hazard warnings to be effective, they must provide new information, since the source of the market failure is an information gap. Education efforts that are primarily efforts of persuasion and that attempt to browbeat individuals into changing their behavior have met with far less success.

Innovations in OSHA Regulation

The opportunities for establishing more flexible regulations are exemplified by the OSHA standard designed to prevent explosions in grain elevators. To reduce the risks associated with grain handling, OSHA developed an extensive set of rules.23 These hazards are often well publicized because explosions in grain-handling facilities may lead to the deaths of dozens of workers. Perhaps in part because of this publicity and the unpredictable nature of these explosion risks, there were no deaths from explosions in the year before the standard was issued.24 The OSHA grain elevator regulation was intended to reduce this risk further by decreasing the dust levels in grain elevators, which in turn reduces the risk of explosions. What is noteworthy about this standard is that firms are given several alternative options to decrease the dust: (1) to clean up the dust whenever it exceeds 1/8 inch, (2) to clean up the dust at least once per shift, or (3) to use pneumatic dust-control equipment. This flexibility represented a major innovation in the design of OSHA safety standards. The regulation provides an opportunity for firms to select the most cost-effective option and has resulted in lower compliance costs than would a uniform specification standard. OSHA's effort to take advantage of a performance-oriented approach represents a significant, constructive contribution to OSHA policy development. Such efforts help put policy on the frontier of efficient regulation.

Overall, OSHA has not made a dramatic change in the structure of its safety standards since its initial standard-setting efforts. Some of the extraneous and more frivolous standards have been pruned, other standards have been updated to take technological changes into account, and a few new standards have been added. Further reform of standards that have already been promulgated is expected to be minimal because no strong constituency exists for such changes. To the extent that more firms comply with the revisions of the OSHA standards, any impetus for relaxations or modifications of existing regulations will be diminished. Some progress may be made regarding future standards in the form of greater recognition of the costs of the regulations and the introduction of innovative approaches to regulation. Two OSHA efforts initiated in the 1980s, the hazard communication standard and the grain-handling standard, represent significant advances in OSHA's regulatory approach. And recent OSHA initiatives have refined and sharpened the focus of the hazard communication standard. On balance, however, OSHA's level of activity in the standards area has not been great since its inception, as it has retained most of its original approach.

OSHA's Enforcement Strategy

To design and enforce its standards, OSHA now has 2,265 employees. Together with the state programs that enforce OSHA regulations, there are 2,200 inspectors who monitor the compliance of workplaces with OSHA standards. The OSHA staff, in conjunction with inspectors from states that choose to enforce OSHA regulations with their own personnel, come to the workplace, ascertain whether any violations have occurred, and penalize violators. The inspectors may return for a follow-up inspection, continuing to assess penalties until compliance is ensured. Firms will choose to comply with OSHA standards if OSHA establishes effective financial incentives for doing so. The firm must consequently find it more attractive financially to make the safety improvements than to risk an adverse OSHA inspection. The penalties that result include fines levied by OSHA, as well as possible adverse effects on the firm's reputation, which may in turn affect worker turnover or wages. To assess whether these safety incentives are strong, we consider each link in the OSHA enforcement process.

Before OSHA can affect a firm's policies, it either must inspect the firm or must create a credible threat of enforcement. OSHA undertakes four types of inspections: (1) inspections of imminent dangers, (2) inspections of fatalities and catastrophes, (3) investigations of worker complaints and referrals, and (4) programmed inspections.25 This priority ranking has remained virtually unchanged since OSHA's inception. Somewhat surprisingly, complaint inspections produce few violations per inspection, a finding that suggests that disgruntled workers may be using the OSHA inspection threat as a means of pressuring the employer.26 This pattern is unfortunate, since the role of workers and unions in promoting safety could potentially have been instrumental.

Different presidential administrations have experimented with different strategies for OSHA enforcement. The Nixon and Ford administrations established the general inspection approach, and little change in emphasis took place thereafter except for a gradual expansion in the enforcement effort. Under the Carter administration, an attempt was made to eliminate some of the less productive aspects of the enforcement policy. The number of inspections and less important violations declined, and penalties for violations increased. The first term of the Reagan administration marked the start of what was called a less confrontational approach, as the inspection effort was scaled back. The Reagan administration also introduced more conscious inspection targeting. The biggest change was that the level of penalties assessed for OSHA violations plummeted. In the second Reagan term and in the Bush administration, the enforcement effort was not bolstered, with penalties at an all-time low. In the Clinton administration, the enforcement staff remained sparse, but penalties increased. The George W. Bush administration fostered a more cooperative inspection approach, while the Obama administration increased the penalty levels and emphasized less frequent but more thorough inspections.

Inspection Policies

In fiscal year 2015, there were 35,820 federal inspections and 43,471 inspections under state plans. The present total of state and federal inspections of 79,291 annually may seem substantial, but it covers very few workplaces. At this rate of inspection, an enterprise would be inspected less than once every century.
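The coverage arithmetic behind this claim, and the per-worker figures in the next paragraph, follows directly from the counts just cited. The sketch below simply reproduces that arithmetic; the counts are the fiscal year 2015 figures reported in the text, and the calculation is illustrative only.

```python
# Back-of-the-envelope inspection coverage implied by the fiscal year 2015
# figures cited in the text (illustrative only).
federal_inspections = 35_820
state_inspections = 43_471
total_inspections = federal_inspections + state_inspections   # 79,291

worksites = 8_000_000        # worksites under OSHA's jurisdiction
workers = 130_000_000        # workers at firms under OSHA's jurisdiction

years_between_inspections = worksites / total_inspections     # roughly 100 years
workers_per_inspection = workers / total_inspections          # roughly 1,640 workers

print(f"Average years between inspections of a given worksite: {years_between_inspections:.0f}")
print(f"Workers per annual inspection: {workers_per_inspection:,.0f}")
```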
Because many firms are small businesses with few employees, a more accurate index of coverage is the inspection rate per worker. There are 130 million workers at firms under OSHA's jurisdiction at 8 million worksites. For every 59,000 workers, OSHA has only one inspector, or what OSHA calls a compliance officer. The total number of all state and federal inspections per worker is one per 1,640 workers. Many of these are follow-up inspections, so that these statistics overstate the coverage of OSHA inspections. Some OSHA critics have correctly suggested that the chance of seeing an OSHA inspector is less than the chance of seeing Halley's Comet. In contrast, the Environmental Protection Agency inspects all major water polluters roughly once per year, with the result being a very effective enforcement effort.

Two desirable shifts in inspection emphasis pertain to health rather than safety violations and to serious violations. Health violations merit relatively more attention because of the greater inadequacies in the way these risks are treated. Safety risks are often well known to workers and generate compensating wage differentials, higher quit rates, and larger workers' compensation premiums, all of which establish incentives for firms to promote safety. In contrast, health hazards are less well understood and, because of difficulties in monitoring causality, are not covered as effectively by workers' compensation. The role of health inspections has increased over time, both because of the perceived challenges posed by health hazards and because of the substantial time that such inspections require. Departures from this overall trend have occurred because of targeted efforts to address high-hazard industries, such as construction, rather than a decrease in health concerns compared to safety concerns. Ideally, inspections also should identify serious violations rather than less consequential threats to worker safety. Presidential administrations beginning with the Carter administration emphasized inspections for serious violations. The Obama administration continued in this vein, with greater emphasis on more time-consuming inspection efforts rather than dealing with readily visible hazards.

Trivial Violations

On entering the workplace, the OSHA inspector attempts to identify violations of OSHA standards for which he or she will assess penalties. When determining whether a firm is in compliance, an OSHA inspector cannot consider the costs of meeting the standard, only technical feasibility. Currently, OSHA inspections no longer focus primarily on trivial violations. In fiscal year 2015, each federal OSHA inspection led to an average of two violations per inspection. The share of serious violations has increased—74 percent of all violations in that year were for serious hazards. A small share, 6 percent of all violations, was for repeat and willful violations, most of which would also pertain to serious hazards. The remaining 20 percent of violations were for less consequential violations of OSHA standards.

OSHA Penalties

The ultimate determinant of the financial impact of an OSHA inspection is the amount of the penalties that are assessed for noncompliance. Notwithstanding the widespread notoriety of the enforcement effort, these penalty levels have always been inconsequential. Annual penalties had typically been several orders of magnitude smaller than the safety incentives created by higher wages commanded for risky jobs and the value of workers' compensation costs. The penalty structure that was in place until August 2016 took the following form. A serious violation merited a penalty that was capped at $7,000 per violation. Persistent violations in which the firm fails to abate the previously identified hazard merited penalties that were capped at $7,000 per day beyond the abatement date. Willful or repeated violations could be penalized up to $70,000 per violation. These low penalty levels had not been updated for inflation since the establishment of this penalty structure.
In August 2016, OSHA was permitted to update these values to account for inflation, leading to a maximum penalty of $12,471 per violation, a penalty of $12,471 per day for failure to abate the hazard, and a cap of $124,709 for willful or repeated violations. Even these unimpressive maximum penalty levels greatly overstate the magnitude of the fines actually levied. Consider the statistics for fiscal year 2015.27 The average penalty levied for serious violations was $2,148 for federal OSHA inspections and $1,317 for state OSHA inspections. Each of these values is well below the permissible penalty amount. Moreover, as indicated below, the expected economic sanction for violating OSHA will also depend on whether the firm is inspected and the violation is identified, thus further muting the incentive effect. But what about the most severe hazards that lead to the death of a worker? Worker fatalities typically lead to OSHA investigations of the cause of the accident. The median penalty levied for hazards that led to the death of a worker was $7,000 for federal OSHA inspections and $3,500 for state OSHA inspections. In contrast, the market incentives created by the value of a statistical life are three orders of magnitude greater than these levels.

While the penalty level has ratcheted up from the level in the early days of OSHA, these financial incentives are dwarfed by other economic forces at work. Market incentives for safety through wage premiums for risk total about $110 billion, and workers' compensation net premiums written were $43.5 billion in 2014.28 Since each of these costs will drop if firms have a better safety record, they provide a powerful incentive for safety. Firms that are found in violation can reduce the financial costs of being out of compliance by making the mandated improvements. Because of the reduction in penalties for firms that remedy OSHA violations, there is little threat from a random OSHA inspection. A firm need do little to promote safety other than await the OSHA inspector. The firm will avoid correcting safety problems that the inspector may not identify, since the inspector will typically find only a couple of violations, and the firm will face few penalties if it makes the suggested changes. The elimination of the expected losses from inspections suggests that OSHA will have little impact on the great majority of firms that are not inspected, because inspections now have little deterrence value.

Enforcement Targeting

In addition to changes in the level of OSHA enforcement, the focus of the enforcement effort has shifted. Perhaps the most controversial change in OSHA enforcement policies was the introduction of records-check inspections in October 1981. In these programmed safety inspections, the OSHA inspector first examined the firm's lost-workday accident rate for the past two years (three years for very small firms). If this rate was below the most recently available national manufacturing lost-workday rate, the firm was not formally inspected. For example, a firm inspected in 1985 would have available its 1983 and 1984 lost-workday accident rates for comparison with the 1983 manufacturing rate, because of a two-year lag in publishing the Bureau of Labor Statistics data.

Ideally, OSHA should target riskier firms. Indeed, every administration has introduced some targeting policy. Inspecting these outliers provides greater opportunities for safety gains. Once the risk information has been acquired, it is clearly desirable to use the data to target OSHA inspections. The OSHA procedure is not as sophisticated as it could be, however. From an economic standpoint, one would like to identify the risky outliers based on what is achievable in a particular context, which will depend on the costs of compliance for that industry.
A procedure of targeting firms based on whether their record is better than the national manufacturing average does not incorporate this heterogeneity in the costs of promoting safety. A sawmill with an accident rate above the national manufacturing average may have a very safe technology for that industry, whereas a garment manufacturer with an injury rate just below the manufacturing average may be a high-risk outlier for that industry.

The changing character of the OSHA enforcement effort is exemplified as well by the change in the mix of violations cited by OSHA inspectors. Although the OSHA standards have not changed dramatically, the role of different violation categories has undergone many significant modifications. In OSHA's initial years, violations for walking and working surfaces (for example, misplaced exit signs) constituted about one-fifth of all violations. Many of these violations were for less important risks, some of which were readily visible to workers as well. There has been a substantial drop in this category, as it is no longer among the top ten violation categories, which suggests that OSHA's resources have been redirected from this less productive area. Two categories that have displayed increases since the early OSHA era are health related. The role of health and environmental control (for example, noise, ventilation, and radiation) has risen, as have violations for toxic and hazardous substances (for example, asbestos and coke oven emissions). However, neither of these categories made the top ten list of violations in fiscal year 2015. Construction hazards receive considerable emphasis, as fall protection ranks first in importance, scaffolding ranks third, and ladders rank seventh. Various machinery and electrical hazards are also prominent—they comprise four of the top ten violation groups. The most prominent health-related violations now pertain to hazard communication, which ranks second, and respiratory protection, which ranks fourth in importance. OSHA enforcement policies remain primarily safety related, but health hazards no longer constitute a trivial portion of the enforcement effort.

Impact of OSHA Enforcement on Worker Safety

Firms will choose to make the necessary investments in health and safety if the OSHA enforcement policy in conjunction with market incentives for safety makes it in the firm's financial self-interest to do so. More specifically, a firm will comply with an OSHA regulation if
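(The displayed condition that follows in the original did not survive extraction. The inequality below is a plausible reconstruction based on the expected-penalty calculation in the next paragraph, setting aside the reputational costs mentioned earlier; the exact form in the original may differ.)

\[
\text{cost of compliance} \;<\; \Pr(\text{inspection}) \times E[\text{violations cited per inspection}] \times E[\text{penalty per violation}].
\]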

This framework is of general value when assessing the desirability of regulatory compliance. As discussed, the three links in establishing these incentives—inspections, violations, and penalties—are all relatively weak. A firm has less than one chance in one hundred of being inspected in any given year. If inspected, it expects to be found guilty of two violations of the standards, and even if every violation is found to be serious, the average penalty is just over $2,000 per violation. Based on these estimates, the annual expected financial cost from OSHA inspections is 0.01 × 2 × $2,000, or $40. The impending threat of such minor regulatory sanctions is unlikely to induce any safety improvements in advance of an inspection.

The manner in which the safety incentives created by OSHA influence the decisions of firms can be seen by examining the payoffs from safety investments for a firm on a compliance–no compliance margin, which is illustrated in figure 23.5. The curve ABC gives the payoffs to the firm from different levels of safety investment in the absence of OSHA policy. If there were no government regulation, the firm would choose the optimal level of safety at s0, because that point yields the highest payoff on the curve ABC. Suppose now that OSHA standards require a minimal safety level given by s*, which is above the level s0 shown on the diagram. For safety levels below s*, the firm will face an expected penalty level that will shift its curve downward, so that over what formerly was the payoff curve range AB, the firm's payoff levels now are given by DEF. If the firm invests enough in safety to achieve a safety level equal to or in excess of s*, its payoff function will be given by BC as before. The real issue from the standpoint of compliance is whether the OSHA enforcement effort is sufficiently stringent so that the expected cost of noncompliance shifts the firm's payoffs downward enough to make compliance worthwhile. In this example, the highest payoff the firm can get from complying with the standard will be at point B, whereas the highest payoff that the firm can get from noncompliance will be at point E. For the case shown, the expected costs of noncompliance are sufficient to induce the firm to choose to invest in greater safety.

Figure 23.5 Payoffs for Safety Investments for a Marginal Firm
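The compliance calculus in figure 23.5 can be made concrete with a small numerical sketch. The payoff function, the standard s*, and the expected fine below are hypothetical illustrative values rather than numbers from the text; the point is only the comparison of the firm's best payoff under noncompliance (point E) with its best payoff under compliance (point B).

```python
# Illustrative sketch of the compliance decision in figure 23.5.
# The payoff function, standard s_star, and expected fine are hypothetical.

def payoff_no_osha(s):
    """Firm payoff from safety level s with no regulation (curve ABC); peaks at s0 = 4."""
    return 100 - 0.5 * (s - 4) ** 2

def expected_penalty(s, s_star, annual_expected_fine):
    """Expected OSHA penalty, incurred only while the firm is below the standard s*."""
    return annual_expected_fine if s < s_star else 0.0

s_star = 6                                     # required safety level s*
fine = 15                                      # expected annual penalty for noncompliance
s_levels = [x / 10 for x in range(0, 101)]     # candidate safety levels 0.0-10.0

# Best payoff from remaining out of compliance (curve DEF, s < s*)
best_noncompliance = max(payoff_no_osha(s) - expected_penalty(s, s_star, fine)
                         for s in s_levels if s < s_star)

# Best payoff from complying (curve BC, s >= s*)
best_compliance = max(payoff_no_osha(s) for s in s_levels if s >= s_star)

print(f"Best noncompliance payoff (point E): {best_noncompliance:.1f}")
print(f"Best compliance payoff (point B):    {best_compliance:.1f}")
print("Comply" if best_compliance >= best_noncompliance else "Do not comply")
```

With these illustrative numbers the expected penalty is large enough to make compliance the better choice; shrinking the fine toward the $40 expected cost computed above reverses the decision, which anticipates the discussion of figure 23.6.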

OSHA Regulations in Different Situations

The outcome in the preceding analysis need not always arise. In particular, as the three situations in figure 23.6 indicate, three broad classes of situations can occur. In the first case, a firm initially has a payoff curve AA, which is shifted downward to A′A′ once OSHA penalties are introduced for safety levels below s*. That firm will continue to choose safety level s0, because in this diagram the firm's safety level is so far below that needed to achieve compliance that it would not be feasible or desirable for the firm to comply with the standard. In the intermediate case, the firm's initial payoff curve is given by BB, which shifts downward to B′B′. For that firm, the standard will be sufficient to induce compliance, because the downward shift in payoffs introduced by the additional expected penalties has given the firm enough of a financial incentive to make additional investments in safety up to the standard worthwhile. The final situation, given by the payoff curve CC, represents a firm that is already in compliance with the standard, so that OSHA regulation will be irrelevant to its conduct.

Figure 23.6 Payoffs for Safety Investment for a Heterogeneous Group of Firms

What figure 23.6 illustrates is that, in general, there will be three classes of firms. For two of these classes, firms that are already in compliance with the standard and firms that are substantially below the safety level required, one would expect little effect from the regulation. Raising the expected penalties for noncompliance is essential for effective enforcement because doing so will increase the number of firms that will choose to comply with the regulation. Moreover, the level of stringency of the regulation is also of consequence because if a regulation is very tight, many firms will choose not to comply at all. Thus a very tight regulation that is ignored may actually produce less of a beneficial safety effect than a more modest regulation for which compliance is feasible. In terms of its regulatory strategy, OSHA in effect may have adopted a strategy that was doomed to fail—stringent regulations coupled with weak enforcement.

OSHA and Other Factors Affecting Injuries

Because of these limitations and the weakness of the OSHA enforcement effort, it is not surprising that OSHA has had no dramatic effect on workplace safety. Not all workplace injuries are due to factors under OSHA's influence. Many accidents stem from aspects of the work process other than the specific technological characteristics regulated by OSHA. That most workplace risks have not been readily amenable to the influence of OSHA regulations is in stark contrast to the optimistic projections of the framers of OSHA's legislative mandate, who anticipated a 50 percent drop in workplace risks.29 The chief contributing factor relates to worker actions. Although the estimates of the role of the worker in causing accidents vary (in part because of the difficulty in assigning accidents caused jointly by worker actions and technological deficiencies), it is clear that worker actions play a substantial role. OSHA found that over half of all fatal accidents on oil/gas well drilling rigs were caused by poor operating procedures, and worker actions also have been found to be a major contributor to 63 percent of the National Safety Council's accident measure, 45 percent of Wisconsin workers' compensation cases, and the majority of accidents among deep-sea divers in the North Sea.

Recent studies reinforce the view that, at best, OSHA regulations could have a significant but not dramatic effect on workplace safety. One statistical analysis estimated that if there were full compliance with OSHA standards, workplace accidents would drop by just under 10 percent.30 A detailed analysis of workplace accidents in California presented somewhat more optimistic conclusions. At most, 50 percent of all fatal accidents involved violations of OSHA standards that could potentially have been detected by an OSHA inspector visiting the day before the accident.31 Because even a fully effective set of OSHA regulations would not revolutionize workplace safety, it is appropriate to take a more cautious view of the prospective effects of OSHA regulation than the original framers of the Occupational Safety and Health Act did.

The critical economic issue is whether OSHA regulation has had any beneficial effect on safety. Agency officials at OSHA as well as at other safety-related agencies frequently point to improvements in accident rate trends as evidence of the efficacy of their agency. As recently as 2016, the official OSHA pronouncements linked safety improvements to the agency's efforts: "OSHA is making a difference. In four decades, OSHA and our state partners, coupled with the efforts of employers, safety and health professionals, unions and advocates, have had a dramatic effect on workplace safety. Worker deaths in America are down—on average, from about 38 worker deaths a day in 1970 to 13 a day in 2015."32 There are two difficulties with linking these declines to agency efforts. First, year-to-year changes in risk levels may occur for reasons wholly unrelated to changes in safety standards or their enforcement, such as cyclical fluctuations. Second, there was a long-run trend toward safety improvements throughout the twentieth century as a result of the increased wealth of American society and the increased demand for safety that we placed on our social institutions. Thus, even in the absence of any government regulation, one would have expected a safety improvement as a result of society's increased affluence. The extent of such a safety trend is evidenced in the death rate statistics sketched in figure 23.7.33 The death risk for American workers dropped from 15.8 per 100,000 workers in 1928 to 6.8 per 100,000 workers in 1970, more than a 50 percent decline, and these improvements were achieved before the existence of OSHA. In the post-OSHA era, these improvements have continued, as the risk level in 2013 was 2.8 deaths per 100,000 workers. The appropriate test of the agency's effectiveness is whether OSHA has shifted this trend in any demonstrable fashion, controlling for determinants of accident rates other than occupational safety and health regulation.

Figure 23.7 Death Rate Trends for Job-Related Accidents Source: National Safety Council, Injury Facts, 2003 ed. (Itasca, IL: National Safety Council, 2003), pp. 36–37.

The methodology for approaching this issue is illustrated in figure 23.8, which presents a stylized view of the statistical tests. The curve AB represents the injury trend before the establishment of OSHA, and the curve BC represents the predicted trend in injuries after OSHA, had there been no safety regulations in place. Similarly, the curve BD represents the actual trend in injuries. If there is no statistically significant difference between the actual and predicted injury trends, then the agency has not had its intended effect. It is the vertical spread between BC and BD that represents the incremental effect of the agency on the injury rate, not the extent to which the injury level at point D lies below point B at the establishment of the agency, because much or all of the injury decline might have occurred in the absence of the agency.

Figure 23.8 Statistical Tests for the Effect of OSHA
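A stylized version of this test can be sketched with synthetic data: fit the pre-OSHA trend (AB), project it forward as the counterfactual (BC), and compare the projection with the observed post-period series (BD). All numbers below are made up for illustration, and a real analysis would include the covariates of equation 23.2 rather than a simple time trend.

```python
# Stylized version of the statistical test in figure 23.8, using synthetic data.
import numpy as np

rng = np.random.default_rng(0)

pre_years = np.arange(1948, 1971)      # pre-OSHA period
post_years = np.arange(1971, 1991)     # post-OSHA period

# Hypothetical injury rates: downward trends plus noise
pre_rates = 15.0 - 0.10 * (pre_years - 1948) + rng.normal(0, 0.2, pre_years.size)
post_rates = 12.6 - 0.12 * (post_years - 1971) + rng.normal(0, 0.2, post_years.size)

# Fit the pre-period trend (curve AB) and project it into the post period (curve BC)
slope, intercept = np.polyfit(pre_years, pre_rates, 1)
projected = intercept + slope * post_years

# The estimated incremental effect of the agency is the gap between the projection BC
# and the actual post-period series BD; a formal test asks whether it differs from zero.
gap = projected - post_rates
print(f"Average projected-minus-actual gap: {gap.mean():.2f} per 100,000 workers")
```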

Determining OSHA's Impact on Safety

Two approaches can be used to ascertain the efficacy of the agency. Under the first, one estimates an equation characterizing the injury rate performance during the pre-OSHA era, which is the injury rate trend given by AB. One such model that could be used to estimate this relationship would be
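(The displayed equation did not survive extraction here. Based on the variables described in the following sentences, a plausible reconstruction of equation 23.2 is the one below; the exact specification and notation in the original may differ.)

\[
\mathrm{Risk}_t \;=\; \alpha + \beta_1\,\mathrm{Risk}_{t-1} + \beta_2\,\mathrm{Cyclical}_t + \beta_3\,\mathrm{Industry}_t + \beta_4\,\mathrm{Worker}_t + \varepsilon_t, \qquad (23.2)
\]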

where α is a constant, the β terms are coefficients, and εt is a random error term. The dependent variable in the analysis is the risk level Riskt in some year t, which will be determined by the series of variables on the right side of the equation. The risk level in the previous year is influential because it is a proxy for the character of the technology of the industry, and typically this lagged risk level has a positive effect. Cyclical effects are also pertinent because accident rates generally move procyclically as additional shifts of workers are added, new hires are added to the workforce, and the pace of work is increased. Industry characteristics are also consequential because the mix of industries in the economy and factors such as the presence of unionization may be influential. Principal worker characteristics include the experience mix of the workforce. Once an expression like equation 23.2 has been estimated, one can use the fitted values of this equation to project out the accident rate trend BC in figure 23.8, and this predicted trend can be compared with the actual observed risk levels along BD to determine whether the agency has shifted the injury rate downward.

The main prerequisite for using this postregulation simulation approach is a substantial period of preregulation data. Although such data exist for the overall accidental death statistics presented in figure 23.7, all data series gathered by the Bureau of Labor Statistics on an industry-specific basis changed after the advent of OSHA, so that no preregulation and postregulation comparison is possible. Moreover, post-OSHA Bureau of Labor Statistics data are divided into two different eras. Whereas the government used to estimate worker fatality rates based on a sample of firms that reported their fatalities voluntarily, it now uses data based on a comprehensive census of all occupational fatalities. If the analysis is restricted to more recent data, one can adopt an alternative approach in which one estimates an equation using only data from the postregulatory period. Equation 23.3 summarizes such a model:
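(As with equation 23.2, the displayed equation did not survive extraction. The reconstruction below adds a distributed lag of OSHA enforcement variables to the same specification; the exact form in the original may differ.)

\[
\mathrm{Risk}_t \;=\; \alpha + \beta_1\,\mathrm{Risk}_{t-1} + \beta_2\,\mathrm{Cyclical}_t + \beta_3\,\mathrm{Industry}_t + \beta_4\,\mathrm{Worker}_t + \sum_{j=0}^{J}\gamma_j\,\mathrm{OSHA}_{t-j} + \varepsilon_t. \qquad (23.3)
\]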

This equation is the same as the preregulation simulation, equation 23.2, except for two differences. First, it will be estimated using the postregulation data rather than the preregulation data. Second, it includes variables that capture measures of the effect of the regulation, where the principal variables that have been used in the literature pertain to the rate of OSHA inspections or the expected penalty level. Equation 23.3 includes a distributed lag on the OSHA variable to account for the fact that it may take some time for an OSHA inspection to have an effect. Health and safety investments that require capital investments by the firm are not implemented instantaneously, so it would be unrealistic to assume that OSHA regulations will always have a contemporaneous effect. In practice, most studies indicate that any OSHA effect will have a one-year lag.

Mixed Opinions Regarding OSHA's Impact

The general consensus of the econometric studies is that there is no evidence of a substantial impact by OSHA. Because of the difficulty of maintaining consistent risk data before and after the advent of OSHA, only a few studies have examined the shift in risk levels beginning with the advent of OSHA. For example, Mendeloff's analysis of the California workers' compensation records from 1947 to 1974 produced mixed results, as some risk levels rose and others declined.34 Curington's analysis of New York workers' compensation data from 1964 to 1976 found a decline in "struck by machine" injuries in the post-OSHA period.35 The broader assessment by Butler of aggregate worker fatality rates from 1970 to 1990 found no significant impact of OSHA.36

The alternative empirical approach is to focus on the post-OSHA period and to analyze either the general effect of the deterrence aspects of the enforcement structure or the changes in the behavior of firms after they have been inspected. Viscusi adopted the deterrence approach, analyzed the effect of OSHA inspections and penalties for 1972–1975, and failed to find any significant OSHA impact.37 However, OSHA inspection policies have evolved since the initial OSHA efforts, so that the update by Viscusi focusing on the 1973–1983 period indicated a modest decline in the rate of worker injuries on the order of 1.5 to 3.6 percent.38 The principal enforcement component that was influential was inspections rather than penalty levels, which are quite low and vary little across violations of different severities. Other studies of the deterrence effects also suggested zero or small impacts. Bartel and Thomas's analysis of the 1974–1978 experience did not reveal any significant OSHA impacts.39 The analysis of manufacturing plant data from 1979–1985 by Scholz and Gray found evidence of an effect of OSHA inspections,40 but further scrutiny of the manufacturing data by Ruser and Smith yielded evidence of no significant effects.41

Perhaps the strongest published evidence for OSHA's efficacy is the change in accident rates for firms inspected by OSHA. Such studies pose econometric challenges to researchers because OSHA targets its inspections at firms with unusually high accident rates, perhaps because of a fatality incident. One would expect unusually high risk levels to decline irrespective of an OSHA inspection. Smith found a drop in the lost-workday rate at firms inspected in 1973, but not for firms inspected in 1974.42 Cooke and Gautschi found a significant drop in lost workdays because of accidents in Maine manufacturing firms from 1970 to 1976.43 Other studies have also found that injury rates at inspected firms have exhibited either no decline or a moderate decline after OSHA inspections at the firm.44 Even if the evidence suggests significant safety impacts, sometimes as high as a 10–20 percent injury risk reduction at an inspected firm, the very low probability of an OSHA inspection means that such improvements will have little effect on national injury statistics in the absence of a general deterrence effect.

The possibility of a favorable impact of OSHA on workplace conditions is also borne out in more refined studies of workplace standards. One case study is that of the OSHA cotton dust standard, which was the subject of a major Supreme Court decision. That standard was directed at controlling cotton dust exposures in the workplace because these exposures lead to potentially disabling lung diseases. The promulgation of such a regulation was viewed by the business world as a dramatic event, as there were severe stock market repercussions of the various regulatory events that were involved in the issuance of the regulation. Overall, an event study analysis indicates that the market value of cotton firms fell by 23 percent in response to the cotton dust standard.45 If it had been expected that firms would be able to completely ignore the regulation, then no market effect would have been observed. A major reason for the expected compliance with the regulation was that the controversy surrounding it, as well as the vigorous action by the union involved in this particular instance, ensured that the cotton dust standard would be a prominent target of OSHA enforcement. Although compliance with the cotton dust standard was not required until 1984, by the end of 1982, the majority of the exposed workers were in work situations in compliance with OSHA standards. Firms' investments in cotton dust controls from 1978 to 1982 led to a reduction of about 6,000 cases of byssinosis (a lung disease) annually. The standard remained controversial, however, because it is a costly means for promoting worker health.
For example, the cost per case-year of total disability prevented has been estimated at $2.9 million.46 In addition, some advocates still support greater use of more performance-oriented alternatives to control cotton dust. One possible policy alternative is to require the use of lightweight dust masks for low to moderate cotton dust levels, which would produce the same benefits as engineering controls at negligible cost. Because byssinosis is a progressive disease that moves through a series of grades and is reversible in its early stages, disposable masks could be coupled with a worker rotation policy. Only for severe cotton dust levels would respirators or engineering controls be required. To date, protective equipment alternatives have not been treated as a viable policy option because of union opposition to such efforts.

The available empirical results for the overall impact of OSHA and in the cotton dust case suggest that OSHA enforcement efforts may be beginning to enhance workplace safety. An improvement over the early OSHA experience should be expected, as the standards have been refined and inspection efforts are being improved.

Role of Workers' Compensation

There is a tendency to think of OSHA as establishing the incentives for safety and to view workers' compensation as simply a social insurance mechanism that reduces the litigation costs that would arise if workers could sue their employers for job-related injuries. However, the financing mechanisms for workers' compensation create very powerful incentives for safety. These incentives will be particularly great for larger firms, whose injury experiences play a key role in their insurance rates. Moreover, even small firms are subject to inspections by insurance underwriters, who have a strong financial incentive to price the insurance appropriately. The role of workers' compensation soared in the latter part of the twentieth century. Workers' compensation net premiums written were $31.6 billion annually in 2010 but increased to $43.5 billion by 2014.47 The linkage of these insurance premium costs to firms' accident performance can potentially create extremely powerful incentives for safety—far greater than those generated by the comparatively negligible OSHA sanctions.

Empirical evidence on the incentive effects of workers' compensation has always been contaminated by the presence of moral hazard effects. Even if higher workers' compensation premiums foster a safer workplace, the increased generosity of workers' compensation benefits will increase the incentives of workers to report accidents. Indeed, some researchers have speculated that workers may misrepresent accidents, such as lower back injuries, in order to collect such benefits. The net result is that most studies of the link between workers' compensation and nonfatal accidents suggest a positive correlation, which is the opposite of what one would expect if there were safety incentive effects. Matters are, however, quite different in the case of fatalities. One cannot misrepresent one's death, and the usual moral hazard arguments do not apply. As a result, the empirical evidence on worker fatalities indicates that had it not been for the existence of workers' compensation, job fatality rates in the United States would be as much as one-third greater than they are.48 This estimate of the safety consequences of workers' compensation is roughly an order of magnitude greater than many estimates of the effect of OSHA safety regulations.

The importance of financial incentives for safety is not surprising, but recognition of the role of such incentives could transform the approach that policymakers take toward fostering greater safety. For example, an injury tax levied by OSHA based on the accident level at the firm could serve a similar function. Cornell University economist Robert S. Smith has advocated this approach since OSHA's inception. Moreover, it would greatly increase the scope of OSHA's regulatory efforts, enabling it to focus its efforts on the less readily monitorable hazards, rather than on errors that will be captured in the accident statistics and can be handled through the injury tax mechanism.

Economic analysis is also instructive for guiding policymakers in setting the level of social insurance benefits. Workers will value workers' compensation and, as a consequence, will be willing to accept lower wages in return for insurance coverage of their job accidents. Indeed, workers' compensation now pays for itself through such wage offsets.
By comparing this wage offset with the cost of providing the insurance, one can ascertain whether the level of earnings replacement under workers' compensation is optimal from an insurance standpoint. Evidence on these wage–workers' compensation offset levels suggests that current benefit levels are reasonable.49 When premium levels rise, workers' compensation critics express concern about the growth in workers' compensation costs. While employers are billed for the rising workers' compensation costs, in most instances, it is the workers themselves who ultimately pay for those costs through reduced wages. However, no evidence suggests that the level of insurance coverage is overly generous. Instead, the main problems seem to stem from an extension of coverage of workers' compensation to classes of injuries for which the job-relatedness is difficult to monitor, such as carpal tunnel syndrome.

Summary

Even with a reform of its policies, OSHA will not be the dominant force influencing worker safety. The role of the market in determining safety will continue to be instrumental. OSHA can augment the existing forces for safety, but even full compliance with all current OSHA regulations or those likely to be promulgated will not markedly reduce workplace risks. The no-risk society that some might envision as OSHA's ultimate goal is simply unattainable. Nevertheless, constructive reform of OSHA could enable this agency better to foster the interests of workers and at the same time diminish the associated burden on society. Various specific reforms have been advocated in the literature. Rather than review each of these proposals, the following focuses on changes for which there is likely to be a broad consensus about the nature of OSHA's inadequacy or the proposed remedy.

The first area of proposed reform concerns the area of emphasis. In more than four decades of regulation, OSHA policies have exhibited a slight shift toward health but have remained largely safety oriented. The emphasis of both the set of existing regulations and OSHA enforcement has continued to be predominantly in the safety area. This emphasis is misplaced because market forces are better equipped to address safety risks through compensating differentials and related mechanisms. In addition, the incentives created by workers' compensation premiums already augment to some extent the market incentives for safety. Health hazards are handled less adequately by both the market and workers' compensation. Moreover, the coupling of substantial uncertainties with low-probability events involving potentially catastrophic outcomes makes health risks a promising target for governmental regulation. An increased emphasis on health risks is the cornerstone of a policy reform proposal by economists Thomas J. Kniesner and John D. Leeth.50 They propose that more resources be given to the National Institute of Occupational Safety and Health to research and publicize health risks. Their proposal similarly would increase financial incentives for safety, such as those provided by workers' compensation, and decrease the emphasis on inspections, particularly for firms with strong safety records.

A second class of reforms is to ensure that we are "on the frontier" of efficient policies; in other words, that we are achieving as much health and safety improvement as possible for the costs imposed. Much of the adverse reaction to OSHA's initial wave of regulations of toilet seat shapes and the like stemmed largely from the belief that the regulatory mechanisms had not been well chosen. Strong enforcement of regulations will be warranted only for regulations that are sensible.
Third, some of the most extraneous features of OSHA policy have been pruned, but finding ways to promote safety at less cost is still needed in all regulatory contexts. The use of performance standards rather than narrowly defined specification standards could, for example, enable firms to select the cheapest means of achieving the health or safety objective. Such flexibility would reduce compliance costs and increase the incentive of firms to develop innovative technologies to foster health and safety. Moreover, if structured appropriately, as in the grain dust standard, a performance standard need not greatly increase firms' uncertainty about whether they are in compliance.

The final reform target is to strike a more explicit balance between the health improvements and the costs imposed on society. Labor market estimates of the value of a statistical life are now being used to provide guidance in terms of the appropriate tradeoff. If policymakers viewed regulatory alternatives in light of the cost per health benefit achieved, they would at least confront explicitly the nature of the tradeoffs and ideally would pursue only those policies that they judged to be in society's best interest.

Although reforming OSHA's regulatory strategy remains a major item on any agenda of important regulatory reforms, it would be an oversimplification to say that OSHA has not improved its efforts. The agency has introduced several promising new regulations, has eliminated some of the worst initial regulations, and has better targeted enforcement efforts than it once did. The future of OSHA policies no doubt will continue to exhibit the need for the types of reforms that we have suggested, for they are at the heart of any regulatory strategy for workplace health and safety. As a result, complete regulatory reform will never be achieved with the same finality as economic regulation, where, for example, deregulation has transformed the airline industry into what some observers consider to be a more competitive situation. The need in the health and safety area is for better regulation, not deregulation, and opportunities for improvement will always remain.

Questions and Problems

1. What are the rationales for occupational safety and health regulation? Does the existence of compensating differentials for risk imply that there is no rationale for regulation? What if we also knew that workers had perfect information regarding the risks they faced, and markets worked competitively? Can you think of any other possible rationales for intervention?

2. OSHA inspectors could guarantee compliance by imposing infinite penalty amounts on firms that did not comply with their regulations. If arbitrarily large penalties were permitted by OSHA's legislation, would it be desirable to adopt such penalties? What are the factors that you would want to consider when establishing the penalty level?

3. Suppose that a technological innovation has made it easier for firms to provide a safe work environment. How would you illustrate this effect using the diagram in figure 23.4? Can we determine whether safety expenditures will rise or fall after such a shift?

4. OSHA and other regulatory agencies have typically followed a specification standard approach rather than a performance-oriented approach. What are the considerations that make a technology orientation attractive to government officials, even though it has not found great favor among economists?

5. When setting the optimal penalty level for noncompliance with a regulation, should the regulatory agency vary the penalty with firm size? Should the profitability of the company be a concern?

6. Suppose that there are different types of firms in the industry: old firms and new firms. Suppose that old firms have existing technologies for which it is more costly to adopt risk-reducing innovations, whereas new firms can incorporate these innovations in their new plant investments. Use a variant of figure 23.2 to illustrate how the optimal safety level will differ in these two situations. Should there be heterogeneity in the standards set by regulatory agencies?

7. Economists frequently advocate the use of personal protective equipment, such as gas masks and ear muffs, as less costly solutions for promoting worker safety. These cost considerations typically focus only on the purchase cost of the equipment. What other cost components are associated with personal protective equipment that might make engineering controls a more attractive alternative?

8. New information pertaining to potential safety innovations regularly becomes available. There are always new potential engineering controls that could be adopted by OSHA. Yet the agency tends to change its standards very little over time. Can you think of any economic rationales for having a relatively stable regulatory regime even in the presence of technological changes that might enhance safety?

Notes 1. Section 26 of the Occupational Safety and Health Act of 1970, 29 U.S.C. 651 (1976). 2. See W. Kip Viscusi, Fatal Tradeoffs: Public and Private Responsibilities for Risk (New York: Oxford University Press, 1992) for a review of the evidence on OSHA’s impact, and W. Kip Viscusi, Risk by Choice: Regulating Health and Safety in the Workplace (Cambridge, MA: Harvard University Press, 1983) for a detailed analysis of OSHA policies. For a recent examination of the pertinent economic literature, see Thomas J. Kniesner and John D. Leeth, “Regulating Occupational and Product Risks,” in Mark J. Machina and W. Kip Viscusi, eds., Handbook of the Economics of Risk and Uncertainty (Amsterdam: Elsevier, 2014), pp. 493–600. Other economic critiques of OSHA include Robert S. Smith, The Occupational Safety and Health Act: Its Goals and Its Achievements (Washington, DC: American Enterprise Institute, 1976); John Mendeloff, Regulating Safety: An Economic and Political Analysis of Occupational Safety and Health Policy (Cambridge, MA: MIT Press, 1979); and Thomas J. Kniesner and John D. Leeth, Simulating Workplace Safety Policy (Boston: Kluwer Academic, 1995). 3. Adam Smith, The Wealth of Nations (1776), Modern Library Edition (New York: Random House, 1994). 4. See Joni Hersch and W. Kip Viscusi, “Cigarette Smoking, Seatbelt Use, and Differences in Wage-Risk Tradeoffs,” Journal of Human Resources 25 (Spring 1990): 202–227. 5. These calculations were made by the authors using data from Richard Wilson, “Analyzing the Daily Risks of Life,” Technology Review 81 (February 1979): 40–46. 6. The occupational fatalities figure is based on 2015 CFOI data. See Bureau of Labor Statistics, “Census of Fatal Occupational Injuries (CFOI)—Current and Revised Data,” https://www.bls.gov/iif/oshcfoi1.htm#charts. The other accident statistics are from the National Safety Council, Injury Facts, 2016 ed. (Itasca, IL: National Safety Council, 2016), p. 22. 7. W. Kip Viscusi, Employment Hazards: An Investigation of Market Performance (Cambridge, MA: Harvard University Press, 1979). 8. W. Kip Viscusi and Charles J. O’Connor, “Adaptive Responses to Chemical Labeling: Are Workers Bayesian Decision Makers?” American Economic Review 74 (December 1984): 942–956. 9. The health risks were in effect excluded by informing one subsample of the workers that the chemicals with which they worked would be replaced by sodium bicarbonate (household baking soda). 10. This discussion is based on Viscusi, Employment Hazards; and Viscusi, Risk by Choice. 11. Joni Hersch and W. Kip Viscusi, “Immigrant Status and the Value of Statistical Life,” Journal of Human Resources 45 (Summer 2010): 259–280. 12. Sec. 3b, pt. 7 of 29 U.S.C. 651 (1976). 13. Industrial Union Department, AFL-CIO v. American Petroleum Institute, 448 U.S. 607 (1980). 14. American Textile Manufacturers Institute v. Donovan, 452 U.S. 490 (1981). 15. Centaur Associates, Technical and Economic Analysis of Regulating Occupational Exposure to Cotton Dust, Report to the Occupational Safety and Health Administration (Washington, DC: Centaur Associates, 1983), pp. 1–4. 16. OSHA 29 CFR Part 1910.23. 17. OSHA 29 CFR Part 1910.215. 18. Paul MacAvoy, ed., OSHA Safety Regulation: Report of the Presidential Task Force (Washington, DC: American Enterprise Institute, 1977). 19. Ibid., preface. 20. Peg Seminario, “Protecting Workers—Progress but a Long Way to Go,” Environmental Forum, September–October 2016, p. 50. 21. John F. 
Morrall III, “Saving Lives: A Review of the Record,” Journal of Risk and Uncertainty 27 (December 2003): 230–231. Morrall’s estimates have been converted to 2015 dollars. 22. Federal Register 48, no. 228 (November 28, 1983): 43280. 23. Federal Register 49, no. 4 (January 6, 1984): 996–1008.

24. Office of Management and Budget, Executive Office of the President, OSHA’s Proposed Standards for Grain Handling Facilities, April 1984 (Washington, DC: U.S. Government Printing Office, 1984), p. 17. Because of the random nature of major explosions, one should not conclude that the risk has been eliminated. 25. OSHA, Field Operations Manual, Directive CPL-02-00-160 (Washington, DC: U.S. Government Printing Office, August 2, 2016). 26. U.S. Department of Labor, Assistant Secretary for Policy Evaluation and Research, Compliance with Standards, Abatement of Violations, and Effectiveness of OSHA Safety Inspections, Technical Analysis Paper 62 (Washington, DC: U.S. Government Printing Office, 1980). 27. AFL-CIO, Death on the Job: The Toll of Neglect, 25th ed. (Washington, DC: AFL-CIO, 2016). Available at http://www.aflcio.org/content/download/174867/4158803/1647_DOTJ2016.pdf. 28. Insurance Information Institute, Insurance Fact Book 2016 (New York: Insurance Information Institute, 2016). 29. See Albert Nichols and Richard Zeckhauser, “OSHA after a Decade: A Time for Reason,” in Leonard W. Weiss and Michael W. Klass, eds., Case Studies in Regulation: Revolution and Reform (Boston: Little, Brown, 1981), pp. 202–234. 30. Ann P. Bartel and Lacy Glenn Thomas, “Direct and Indirect Effects of Regulation: A New Look at OSHA’s Impact,” Journal of Law and Economics 28 (April 1985): 1–25. 31. John Mendeloff, “The Role of OSHA Violations in Serious Workplace Accidents,” Journal of Occupational Medicine 26 (May 1984): 353–360. 32. U.S. Department of Labor, “OSHA Commonly Used Statistics,” https://www.osha.gov/oshstats/commonstats.html. 33. Note that the pronounced drop in fatality rates in 1992 was due to a shift in the database used to calculate the death rate and did not reflect an improvement in safety. 34. Mendeloff, Regulating Safety. 35. William P. Curington, “Safety Regulation and Workplace Injuries,” Southern Economic Journal 53 (July 1986): 51–72. 36. Richard J. Butler, “Safety through Experience Rating: A Review of the Evidence and Some New Findings,” Industrial Relations Center, University of Minnesota, Minneapolis, 1994. 37. W. Kip Viscusi, “The Impact of Occupational Safety and Health Regulation,” Bell Journal of Economics 10 (Spring 1979): 117–140. 38. W. Kip Viscusi, “The Impact of Occupational Safety and Health Regulation, 1973–1983,” RAND Journal of Economics 17 (Winter 1986): 567–580. 39. Bartel and Thomas, “Direct and Indirect Effects of Regulation.” 40. John T. Scholz and Wayne B. Gray, “OSHA Enforcement and Workplace Injuries: A Behavioral Approach to Risk Assessment,” Journal of Risk and Uncertainty 3 (September 1990): 283–305. 41. John W. Ruser and Robert S. Smith, “Reestimating OSHA’s Effects: Have the Data Changed?” Journal of Human Resources 26 (Spring 1991): 212–235. 42. Robert S. Smith, “The Impact of OSHA Inspections on Manufacturing Injury Rates,” Journal of Human Resources 14 (Spring 1979): 145–170. 43. William N. Cooke and Frederick H. Gautschi III, “OSHA, Plant Safety Programs, and Injury Reduction,” Industrial Relations 20 (September 1981): 245–257. 44. See David P. McCaffrey, “An Assessment of OSHA’s Recent Effects on Injury Rates,” Journal of Human Resources 18 (Winter 1983): 131–146; Wayne B. Gray and John M. Mendeloff, “The Declining Effects of OSHA Inspections on Manufacturing Injuries, 1979–1998,” Industrial and Labor Relations Review 58 (July 2005): 571–587; John Mendeloff and Wayne B.
Gray, “Inside the Black Box: How Do OSHA Inspections Lead to Reductions in Workplace Injuries?” Law and Policy 27 (April 2005): 229–237; Amelia Haviland, Rachel M. Burns, Wayne Gray, Teague Ruder, and John Mendeloff, “What Kinds of Injuries Do OSHA Inspections Prevent?” Journal of Safety Research 41 (August 2010): 339–345; Amelia M. Haviland, Rachel M. Burns, Wayne B. Gray, Teague Ruder, and John Mendeloff, “A New Estimate of the Impact of OSHA Inspections on Manufacturing Injury Rates, 1998–2005,” American Journal of Industrial Medicine 55 (November 2012): 964–975; and David I. Levine, Michael W. Toffel, and Matthew S. Johnson, “Randomized Government Safety Inspections Reduce Worker Injuries with No Detectable Job Loss,” Science 336 (May 2012): 907–911.

45. John S. Hughes, Wesley A. Magat, and William E. Ricks, “The Economic Consequences of the OSHA Cotton Dust Standards: An Analysis of Stock Price Behavior,” Journal of Law and Economics 29 (April 1986): 29–60. 46. See W. Kip Viscusi, “Cotton Dust Regulation: An OSHA Success Story?” Journal of Policy Analysis and Management 4 (Spring 1985): 325–343. Cost estimates have been converted to 2015 dollars. 47. Insurance Information Institute, Insurance Fact Book 2016, p. 117. 48. For a review of this evidence, see Michael J. Moore and W. Kip Viscusi, Compensation Mechanisms for Job Risks: Wages, Workers’ Compensation, and Product Liability (Princeton, NJ: Princeton University Press, 1990). 49. See W. Kip Viscusi and Michael J. Moore, “Workers’ Compensation: Wage Effects, Benefit Inadequacies, and the Value of Health Losses,” Review of Economics and Statistics 69 (May 1987): 249–261. 50. Thomas J. Kniesner and John D. Leeth, “Abolishing OSHA,” Regulation 4 (1995): 46–56.

24 Behavioral Economics and Regulatory Policy

The discussion in the preceding chapters takes the standard neoclassical economic model as the framework to be applied when analyzing decisions by individuals and firms. Individuals maximize expected utility, and firms maximize profits. A substantial behavioral economics literature that often draws on insights from psychology has identified many departures from the usual assumptions of rational behavior embodied in standard economics models. Congdon, Kling, and Mullainathan distinguished behavioral failures in terms of imperfect optimization, bounded self-control, and nonstandard preferences.1 This chapter provides an exploration of some of these phenomena and their potential applicability to regulatory policies. Behavioral departures from conventional economic models generally can be characterized as forms of market failure. If, for example, consumers have biased risk perceptions regarding product risks, their product choices may not be efficient. These and other departures from economic rationality could be in either direction. People might underestimate product risks and purchase excessively dangerous products. Alternatively, they might overestimate risks and avoid relatively safe products. For example, people might take a driving vacation rather than take safer plane flights because of their fear of flying. Behavioral models are most useful as guides to policy when they identify systematic biases that tend to be in one direction. Depending on the nature of the bias, there may be a rationale for some kind of consumer protection regulation. These and many other market failures often do not involve entirely new phenomena that are unknown to economists or policymakers. However, the behavioral economics literature has given them increasing prominence through the development of theoretical models and empirical evidence that documents the prevalence of such effects as well as identifying some new anomalies.2 Similarly, many policy remedies advocated in the behavioral economics literature, such as the use of soft interventions (including warnings rather than command and control policies), have been in policymakers’ toolkit for decades. Much of the contribution of behavioral economics has been to draw greater attention to the economic merits of these well-established policy tools and to highlight the potential role of new forms of intervention that are less intrusive than the command and control regulatory approach. The advent of behavioral economics models raises fundamental questions regarding the characterization of market failure. Suppose that people’s choices violate the usual economic norms but are consistent with an alternative framework, such as Kahneman and Tversky’s prospect theory, which will be discussed below.3 Should these decisions be regarded as “irrational” and a form of market failure that warrants government intervention? Or is the problem that standard economic frameworks are overly simplistic and do not adequately capture the pertinent components of these decisions? In some instances, there could be conflicts between basing decisions on consumer sovereignty (which, from the standpoint of rational economic models, is flawed) and the usual economic principles of maximizing the net benefits to society based on

valuations derived from informed, rational decisions. We do not resolve such concerns in general, since much may depend on the nature of the market failure and the associated welfare effects. However, when considering various departures from conventional models, a recurring question is whether the behavior should be overridden with government regulations or whether it reflects a failure of conventional economic models to capture different modes of rational behavior. In the latter case, the rationale for interfering with these decisions is less cogent.

Prospect Theory: Loss Aversion and Reference Dependence Effects

Kahneman and Tversky developed an alternative to the standard expected utility model that they termed “prospect theory.” Their model questioned both the preference structure in expected utility models and the role of risk beliefs. Let us begin with the characteristics of individual utility. Their principal departure from conventional models was to note the importance of reference point effects. In particular, people view gains as being quite different from comparable losses. As part of their conceptualization, the individual’s starting point plays a pivotal role. Individuals assess their utility levels based on changes in income from their initial level as opposed to their overall income level. Thus, preferences can be characterized in a manner such as that illustrated in figure 24.1. The utility function, or what Kahneman and Tversky term a “value function,” is risk-averse in the domain of gains from the initial income level and risk seeking in the domain of losses. Because people are so averse to drops in their income from their current situation, they exhibit what Kahneman and Tversky refer to as loss aversion.

Figure 24.1 Preferences Based on the Prospect Theory Model

There are numerous experimental and empirical examples of people being especially averse to incurring losses, displaying preferences that embody a marked shift from their attitude toward comparable gains. The public’s strong resistance to raising gasoline taxes, for example, may be attributable to the belief that these taxes will make the consumer poorer, even though the total amount of the tax increase may not be substantial. A related reference point phenomenon is the endowment effect, in which subjects in experimental contexts are very averse to giving up commodities after they are in their possession.4 In various experimental studies, once student subjects have received objects (such as mugs), they are hesitant to give them up. The experimental subjects exhibit rates of tradeoff between mugs and money (or other commodities) that reflect greater attachment to the mug in their possession as opposed to a mug that they do not already own but could acquire through a trade. The degree to which subjects display endowment effects in such experiments may vary with the experimental design, such as who presented the mug or experimental object to the subjects, the number of practice rounds, what the subjects were told, how long they held the mug, and similar factors.5 The existence of such reference dependence effects leads to a modification of the structure of utility functions. The utility u(c1) associated with any consumption level c1 might traditionally be characterized as u(c1) = v(c1), where v is the individual’s utility that is derived from consumption. Suppose that the person’s initial consumption level was c0 but has now been decreased to c1 < c0. With the presence of reference point effects, the utility u(c1) can be written in a form such as

u(c1) = v(c1) − z(c0 − c1),

where the z function, in the simplest empirical formulations, is a positive parameter that captures the influence of reference effects when c1 < c0, and takes on a value of zero when c1 > c0. Thus, the role of this term is to capture the additional welfare loss associated with a decline from the c0 starting point that is not captured in the level of c1 alone. These types of reference dependence effects may also arise in nonexperimental contexts with respect to valuations pertinent to regulatory policies. How much extra would the consumer be willing to pay for a product that reduces the risk of hand burns from household cleaners from 20 hand burns per 1,000 consumers to 10 hand burns per 1,000? That value is the consumer’s willingness to pay (WTP) for the safety improvement. Alternatively, suppose that the hand burn risk is currently 10 per 1,000, but the product has been reformulated, increasing the risk to 20 per 1,000. What price discount would the consumer require to purchase the reformulated product? That amount is the consumer’s willingness to accept (WTA) value for the increase in risk. For small changes in the commodity being considered, the tradeoff rate should be the same in each direction. Income effects usually are minimal, and we should find that WTA = WTP. However, that is not the result found in dozens of empirical studies. Researchers have identified a substantial WTA/WTP gap. The extent of the disparity is quite striking and cannot be accounted for by conventional economic explanations, such as income effects or differences in study design. The survey by Horowitz and McConnell reviewed 45 studies that included 208 experiments or surveys and found a mean WTA/WTP ratio of 7.2.6 A subsequent study of 164 experiments in 39 studies by Sayman and Öncüler found a mean WTA/WTP ratio of 7.1,7 while the more recent survey of 338 estimates in 76 studies by Tunçel and Hammitt found a geometric mean ratio for WTA/WTP equal to 3.3.8 The extent of the WTA/WTP discrepancy varies by type

of commodity, as environmental goods that are not traded in markets exhibit substantial differences in their WTA and WTP values. In contrast, goods that are traded frequently in markets or that are valued in repeated experimental trials are less subject to such differences. For example, the value of a statistical life revealed by worker decisions that involve an increase in risk is comparable to estimates of the value of a statistical life reflected by job choices associated with a risk decrease. There are concrete policy situations in which people have suffered a loss that must be valued, such as the contamination of a local water supply by pollution discharges. How should such losses be valued—using a WTA value that reflects the loss that has been incurred, or a much lower WTP value?9 One possibility is that the very high observed WTA/WTP ratios are artifacts stemming from survey elicitations of stated preference values. From the standpoint of ascertaining the person’s true valuation, the amounts expressed in surveys may not in fact reflect individuals’ actual preferences. In addition, survey results usually involve one-time hypothetical choices, whereas actual economic behavior often entails repeated choices over time, so that people can acquire information and learn from their experiences.10 Thus the experimental values may overstate the actual WTA/WTP gap that would be observed in usual decision contexts where there are repeated decisions. Setting these caveats aside, suppose that the survey results are sound and that there is an observed ratio of WTA/WTP in excess of 1.0. If the policy situation being considered entails a loss, should one assign an economic value to the loss based on the WTA amount or the lower WTP amount? Selecting the pertinent benefit value requires some judgment with respect to how such reference point effects should be regarded. Are they a legitimate expression of reasonable preferences, or are they the result of an irrational asymmetric response to decreases in the individual’s current situation? This controversy is one of the more prominent examples of the more general issue of how apparent anomalies in individual behavior should be treated. Government agencies continue to rely on WTP values for both gains and losses, but the debate in the academic literature continues over the correct approach and whether WTA values should be used to value impacts that entail losses. Exploration of the role of reference dependence effects also highlights the importance of the framing of decisions, because the decision frame plays a key role with respect to the starting point for the assessment. Decision frames are not neutral in terms of how losses and gains are perceived, and they have been of substantial concern in the psychology literature. Consider the following example drawn from a study by Tversky and Kahneman and summarized in table 24.1.11 In the first policy situation, an infectious disease outbreak will kill 600 people. There are two possible policy remedies. Policy A will save 200 people. Policy B offers a 1/3 chance of saving all 600 people, but a 2/3 chance that nobody will be saved. Respondents generally favor Policy A, which saves 200 lives with certainty, rather than undertaking the policy gamble. In the second policy situation, suppose that when faced with this infectious disease that will kill 600 people, we have two policy options framed somewhat differently than before.
Under Policy C, 400 people will die, whereas under Policy D, there is a 1/3 chance that nobody will die and a 2/3 chance that all 600 will die. When confronted with these options, people generally prefer Policy D, which offers the chance of no deaths rather than the certainty of 400 deaths. Yet the expressed preferences are inconsistent, as Policy A is equivalent to Policy C, and Policy B is equivalent to Policy D. Simple manipulations of the decision frame for losses can lead to a reversal of the choices without changing any of the components of the alternatives.

Table 24.1
Framing Effects for Policies for 600 Infected People

                          Certainty Options                      Lottery Options                                                Respondent’s Choice
First policy situation    Policy A: saves 200 of 600 people      Policy B: 1/3 chance saves 600 people; 2/3 chance saves 0      Policy A
Second policy situation   Policy C: 400 people die               Policy D: 1/3 chance of 0 deaths; 2/3 chance of 600 deaths     Policy D
Equivalence summary       200 live, 400 die under each policy    1/3 chance of 600 alive, 2/3 chance of 0 alive, or 200 expected survivors in each case
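A short calculation confirms the equivalence of the policies and shows how a reference-dependent value function of the kind sketched in figure 24.1 can rationalize the typical pattern of responses. The functional form and parameter values below (a curvature exponent of 0.88 and a loss-aversion coefficient of 2.25, figures commonly cited from Tversky and Kahneman's later estimates) are illustrative assumptions only, and probability weighting is ignored for simplicity.

```python
# Illustrative only: expected outcomes and reference-dependent valuations of the
# four policies in table 24.1 (gains measured as lives saved, losses as deaths).
ALPHA = 0.88    # curvature of the value function (assumed, illustrative)
LAMBDA = 2.25   # loss-aversion coefficient (assumed, illustrative)

def value(x):
    """Prospect-theory-style value of a gain (x >= 0) or loss (x < 0)."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

expected_survivors = (1 / 3) * 600            # 200 under every policy

# Gain frame (lives saved).
policy_a = value(200)
policy_b = (1 / 3) * value(600) + (2 / 3) * value(0)
# Loss frame (deaths).
policy_c = value(-400)
policy_d = (1 / 3) * value(0) + (2 / 3) * value(-600)

print(f"Expected survivors under every policy: {expected_survivors:.0f}")
print(f"Gain frame: A = {policy_a:.1f}, B = {policy_b:.1f}  (A preferred)")
print(f"Loss frame: C = {policy_c:.1f}, D = {policy_d:.1f}  (D preferred, being less negative)")
```

Because expected survival is 200 under every policy, the concavity of the value function in gains favors the certain Policy A, while the convexity and steepness of the function in losses favor the gamble offered by Policy D.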

Framing effects and the designation of the starting point for evaluations remain pivotal concerns in situations involving WTA/WTP applications. In regulatory contexts, does the policy involve a loss and, if so, what is the character of the loss? Climate change policies pose particularly complex reference point issues. In the absence of policy interventions, Earth will continue to get warmer, with a variety of adverse impacts. Is the status quo the long-term trajectory in the absence of intervention, or is it our current climatic situation, which cannot be maintained under current policies? A plausible perspective might be to suggest that the climatic reference point that most people have is not forward looking but is linked to the current climatic situation rather than scientists’ forecasts. If the current climate establishes the reference point, the impending climate change will impose a loss, which people will be particularly eager to avoid, given the influence of loss aversion. Unfortunately, paying for the policies to reduce the emission of greenhouse gases entails policy costs, which will also impose a loss from the person’s current financial position. Thus, loss aversion comes into play both with respect to the environmental deterioration and the payment for policy remedies. While similar reference dependence effects are operative in each case, the additional loss from expenditures is immediate, whereas the additional loss in terms of the climatic situation is deferred and diminished by the impact of discounting. Thus, one might expect loss aversion with respect to climate change losses to be dampened by the discounting of these losses, relative to the effect of loss aversion on the immediate cost of climate change policies.

Prospect Theory: Irrationality and Biases in Risk Perception

Situations involving risk and uncertainty are also well known for creating challenges to individual choice and in turn leading to patterns of irrational decisions. Even for quite rational economic actors, the difficulties posed in conceptualizing and assessing many risks are considerable. The risk statistics examined in chapter 19 indicate that the majority of the most prominent regulatory risk targets are low-probability events. Moderately sized risks, such as the risk of being killed in a car crash, were roughly one in 10,000 per year in 2015. Many other risks that we face are much smaller in magnitude; for example, the data in table 19.3 indicated that we have to drink a thousand 24-ounce soft drinks from banned plastic bottles in order to incur a cancer risk of one in 1 million. The risk threshold for whether cancer warnings are required under California Proposition 65 is a lifetime risk from daily exposure over 70 years of 1/100,000, or an annual risk of one in 7 million. Small probabilities such as these are very difficult to think about. Most people do not encounter risks such as one in 7 million or one in 100,000 on a sufficiently regular basis to be able to deal sensibly with such events. In some cases, the risk may not be understood at all, as the individual may be ignorant of the risk and consequently unable to make any decision with respect to it. For risks that are called to individuals’ attention, there are also difficulties that arise because of biases in risk perception. Figure 24.2 illustrates the relationship between perceived risks of death (on the vertical axis) and actual

risks of death (on the horizontal axis). Individuals tend to overestimate the risks associated with lower-probability events, such as botulism, tornadoes, and floods. In contrast, there is a tendency to underestimate the risks associated with higher-risk events, such as cancer, heart disease, and stroke. This pattern of biases in risk belief is incorporated as an assumption in the prospect theory model. Because this relationship is systematic, our understanding of biases in risk beliefs goes well beyond simply noting that people often have mistaken probability judgments. Instead, they tend to err in predictable ways.

Figure 24.2 Perceived versus Actual Mortality Risks Source: Baruch Fischhoff, Paul Slovic, Stephen L. Derby, and Ralph L. Keeney, Acceptable Risk (Cambridge: Cambridge University Press, 1981), p. 29. Reprinted with permission of Cambridge University Press.

Although psychologists generally view these differences as evidence of erroneous risk beliefs, not all aspects of apparent misperception represent fundamental errors in judgment. Differences in the length of life lost associated with these different causes of death account for much of the apparent bias in risk perceptions.12 Similarly, much of the difference in assessed life expectancy loss could be accounted for by the particular age of the respondent, which should affect beliefs regarding mortality risks of one’s peers, since the pertinent causes of death vary with age.13 Thus respondents may be giving reasonable risk assessments that reflect the risks to members of their age group, even if they do not understand society-wide risks. People who are well informed of risks, such as highly educated respondents, tend to have more accurate risk beliefs. The observed patterns are consistent with people beginning with similar risk perceptions of the risks of different causes of death, which they then update as they acquire information. As they acquire additional information about the specific hazards associated with that cause of death, their risk

beliefs move closer to the 45-degree line shown in figure 24.2 but do not coincide with it unless they become fully informed of the risk. Consider the following Bayesian model. Let individuals start with the same prior risk assessment p for all hazards, where b is the amount of information associated with these prior risk beliefs. For each hazard i, suppose that the person has received information indicating a risk probability si for the hazard, where the amount of information about the specific hazard is given by ci. Based on the beta distribution of priors, the posterior risk assessment qi for hazard i will be given by qi = (bp + cisi) / (b + ci). Thus the posterior risk assessment for any particular cause of death starts with some average prior probability p. In the absence of information about the risk, the observed risk assessments in figure 24.2 would be relatively flat. However, people do have information, and the greater the amount of this information, as reflected in the ratio ci / (b + ci), the greater the weight that will be placed on the risk si and the closer the perceived probability will be to the 45-degree line in figure 24.2. After acquiring information, beliefs move closer to that actual risk level but do not reach the 45-degree line unless ci / (b + ci) is extremely large. Thus the observed pattern of risk overestimation for small risks and risk underestimation for larger risks is in line with what one would expect based on rational Bayesian learning models. One need not appeal to assumptions outside of standard economic models to find a rationale for this pattern. Nevertheless, from a policy standpoint, the pattern of misperceptions remains of interest because it explains why there is often societal overreaction to very large risks and comparative complacency with respect to more substantial hazards. Even after making such adjustments for biases related to the level of the risk, some misperceptions of risk will remain, implying that market decisions may not be optimal. However, additional regulation will not be required in all cases. For example, if risk perceptions are already too high for low-probability events, then the level of safety provided by the market will be excessive, as economic agents will be responding to exaggerated risk perceptions. The overestimation of low-probability events also has implications for government policy. To the extent that there is an alarmist reaction to small risks that are called to our attention, and if these pressures in turn are exerted on the policymakers responsible for risk regulation, society may end up devoting too many resources to small risks that are not of great consequence. An interesting policy question is how the government should respond, if at all, to public misperceptions. The following controversy is not hypothetical, as it reflects policy discussions at the U.S. Environmental Protection Agency (EPA). Suppose that the public greatly overestimates the risks associated with hazardous waste sites. Should the government respond to these fears because, presumably, in a democratic society, the government action should reflect the interests of the citizenry? Alternatively, one might view the government as taking a more responsible role by basing policies on actual risks rather than perceived risks and attempting to educate the public concerning the overly alarmist reactions that it has. Moreover, the proper role for the government in these situations might be to serve as a steadying force in such contexts. 
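A small numerical sketch of this updating rule may help fix ideas. All of the prior, information, and risk values below are assumed purely for illustration; the point is only that hazards with little hazard-specific information (small ci) stay close to the common prior, so that small risks are overassessed and large risks underassessed.

```python
# Illustrative sketch of the updating rule q_i = (b*p + c_i*s_i) / (b + c_i).
def posterior(p, b, s_i, c_i):
    """Posterior risk belief for hazard i, mixing the common prior with hazard-specific information."""
    return (b * p + c_i * s_i) / (b + c_i)

p, b = 1e-4, 10.0   # common prior probability and its information content (assumed values)
hazards = {
    # name: (true annual risk s_i, hazard-specific information c_i) -- assumed values
    "botulism (rare, little direct experience)": (1e-7, 1.0),
    "motor-vehicle fatality":                    (1.2e-4, 50.0),
    "heart disease (common, familiar)":          (3e-3, 200.0),
}

for name, (s_i, c_i) in hazards.items():
    q_i = posterior(p, b, s_i, c_i)
    direction = "overestimated" if q_i > s_i else "underestimated"
    print(f"{name}: true risk {s_i:.1e}, perceived {q_i:.1e} ({direction})")
```

As ci grows relative to b, the perceived probability approaches the true risk and the bias shrinks, consistent with the more accurate beliefs of well-informed respondents.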
If the general public underestimates the risk, one presumably would not expect the government to be idle and let citizens incur risks unknowingly. But there is no apparent justification for the asymmetric posture of overriding the public’s preferences by undertaking a protective role when people underestimate the risk, while deferring to their beliefs when risks are overestimated by undertaking remedial actions to eliminate perceived but inconsequential risks. An important function of the government is to acquire more scientific information than is feasible for an individual to obtain, to communicate this information effectively to the public, and to issue regulations that are needed to control the real risks that are present. A related result that has emerged in the risk-perception literature is that highly publicized events often are associated with substantial risk perceptions, even though the magnitude of the risks involved may not be

great. This finding is not a sign of individual irrationality but instead a reflection of common shortcomings in the dissemination of public information. Typically, the events themselves rather than frequency statistics are publicized. We learn that a number of people have recently been killed by a tornado, but we are not given a sense of the frequency of these events, other than the fact that coverage of tornado victims in the newspaper occurs much more often than coverage of asthma victims. Because of the sensitivity of risk perceptions to the amount of publicity as well as the level of the risk, the pressures exerted on risk regulation agencies will tend to reflect the absolute number of adverse events that are publicized rather than risk probabilities. The resulting policies will not necessarily be in line with the direction that fosters society’s best interests through efficient regulation of risks. We will take as society’s objective the promotion of societal welfare based on the true risk levels, not the risk levels as they may be perceived by society more generally. Even if we are equipped with this knowledge of how people err in their risk beliefs, it is sometimes difficult to tell whether we are overreacting to seemingly small probabilities. In 2002 two snipers in Washington, DC, created extraordinary public fear by randomly killing ten people and injuring three, leading football teams to cancel outdoor practices and thousands to change their daily routines. The daily risks were seemingly small—about a one in 600,000 chance of being killed, based on the snipers’ final record. But what if one day the snipers decided to kill dozens of people, not just one? Then the odds would become much more threatening. Consequently, the chance that an individual would be shot by the sniper was not well understood, since we have a small sample of observations that might not be an accurate reflection of the ultimate extent of the risk. Just as sniper risks pose challenges for our personal decisions, government regulators must routinely confront policy decisions in which the risks may be great but are not well understood. Reacting to worst-case scenarios will distract us from the real risks that we face, but failing to attend to emerging hazards that are not yet well understood poses dangers as well.

Role of Risk Ambiguity

If one were dealing with a single-trial situation in which one incurred the risk as a one-time event, the precision of the risk judgment would not be important. The mean probability estimate should be our guide. Uncertainty should not be a concern in the case of one-period decisions. This principle, and violations of rational approaches to such choices, can be traced back to the well-known Ellsberg paradox and the associated literature on why aversion to ambiguity is a form of irrationality.14 Suppose that you face two different urns, each of which contains red balls and white balls. Urn 1 contains fifty red balls and fifty white balls. Urn 2 contains an unknown mixture of red and white balls. Each urn contains a hundred balls, and you cannot see the contents of either urn. Suppose that you will win a prize if you can correctly guess the color of the ball that will be drawn. Which urn would you pick from, and what color would you name? The correct answer is that you should be indifferent between the two situations. Drawing from urn 1 offers you a known 50 percent chance of winning the prize.
Because you do not know the composition of urn 2, you face an equivalent 50/50 chance of winning by drawing from that urn, irrespective of the color of the ball you pick. Most people confronted with the choice between the two urns believe that urn 1 is preferable because it offers a “hard” probability of success. However, the chances of success can be converted to an equally “hard” probability in the case of urn 2. Suppose you were to flip a fair coin before naming the color of the

ball, picking red if the outcome is heads and white if it is tails. In that situation, you also face the same kind of precise probability of a 50/50 chance of success as you do with urn 1. Although you should be indifferent between the two urns in the case of a single trial, this would not be the case if there were multiple trials. Suppose that you will be engaged in ten draws from the urn, replacing the ball after each draw. From urn 1 you know you have a 50/50 chance of success on each trial, where this is a precisely understood probability. That probability will never change. In contrast, from urn 2 the probability begins as a 50 percent chance of naming the correct ball, but with successive trials this probability will increase as you learn about the underlying mixture of balls. For example, if the first three balls that you draw are red, then you will have acquired some information about the composition of that urn. In particular, the odds that there is a disproportionate number of red balls in that urn will have gone up, so that on successive trials your expected chance of receiving a prize after naming a red ball will be greater than 50/50. Even though this probability will remain a “soft” probability, it will still be in your interest to draw from the uncertain urn. The same type of principle embodied in this example is true more generally. For one-shot decisions, the precision of the risk is not a matter of consequence. But in sequential decisions, in which learning is possible and you can revise your decisions over time, it is preferable to have a situation of uncertainty rather than to have a precisely understood risk. In situations of uncertainty, we can alter a course of action if the risk turns out to be different than we had anticipated originally. In regulatory contexts, what this result implies is that the stringency of our regulation may depend in large part on the influence of uncertainty, but we will not necessarily respond in a conservative manner to this uncertainty. If you must take action now to avoid an environmental catastrophe, then uncertainty is irrelevant. The mean risk should be your guide. However, if we can learn about how serious the problem is by, for example, experimenting with different policy options, and take effective action in the future, then it will often be preferable to make less of a regulatory commitment than one would if this were a one-shot decision. Critics of the conservative approach currently taken in risk analyses for regulatory proposals suggest that it runs the danger of confusing risk analysis with risk management. Ideally, the scientific analysis underlying regulatory policies should not be distorted by biases and conservatism factors. Policymakers should be aware of the true risks posed by different kinds of exposures, so that we can make comparative judgments among different regulatory alternatives. Otherwise, we run the danger of distorting our policy mix and focusing attention on hazards that are not well understood but whose regulation offers little expected payoff. The advocates for more stringent regulation in the presence of uncertainty often appeal to the precautionary principle. Arguing that it is better to be safe than sorry, they suggest that regulations should focus on the worst-case scenarios.

Examples of Uncertainty and Conservatism

Most situations of risk regulation involve the elements just discussed, but the underlying risks are not well understood.
Typical regulation of carcinogens, for example, is based on laboratory tests involving animals. To make the leap from animals to humans, one must adjust for differences in the size of the dosage of a particular chemical, differences in body weight and size, and differences in surface area, as well as possible differences in human as opposed to animal responses. Moreover, the results with different animal species may imply differing levels of riskiness that must then be assessed in such a manner that we can draw meaningful inferences for

humans. Even in situations where we can reach a consensus on these issues, there is often debate as to how one should model the risk relationships. For example, should one use a linear dose-response relationship or a nonlinear model? The fact that uncertainty exists does not imply that the risks are unimportant or should be ignored, but it does create an additional element that must be addressed in the course of risk regulation. Even in the case of relatively common risks about which much is known, the range of uncertainty may be substantial. Often government agencies use some upper-bound value of what the risk might be. By erring on the side of conservatism, government agencies in effect distort the true risk levels by differing amounts. We may be in a situation where a lower risk is the subject of more stringent regulation, not because it imposes a greater expected health loss but because less is known about it. Unfortunately, the risk probabilities that are touted by regulatory agencies as being measures of the risk often are not mean estimates of the probabilities but are worst-case scenarios. The example of risk assessment for Superfund cleanup efforts presented in chapter 21 typifies the approach. Calculating risks often entails making multiple assumptions. In the calculation of the lifetime risks of cancer from hazardous waste sites, the risk estimate is based on the product of five different parameters, each of which is an upper-bound value, such as the ninety-fifth percentile of the parameter distribution. The result is that the chance that the actual risk is as high as the agency’s estimate is less than 0.0001. Nevertheless, the agency uses these exaggerated risk estimates without any indication that they are upper-bound values. Similar treatment of risk ambiguity may occur any time there is uncertainty about a parameter in the regulatory impact analysis. For example, benefit assessments for climate change policies are not known with precision because the underlying economic models generate a distribution of possible values of the social cost of carbon. Using a 3 percent discount rate, the mean social cost of a ton of carbon emissions is $43, but at the ninety-fifth percentile of these values, the social cost of carbon increases to $129. In these and other instances, policymakers tend to succumb to the ambiguity aversion embodied in the Ellsberg paradox and to institutionalize individual irrationalities. If the uncertainty of parameters is of interest, a more balanced approach would be to provide information on both extremes, including both the fifth and the ninety-fifth percentiles. Doing so would require provision of more balanced information than is currently available. As the social cost of carbon example illustrates, agencies often provide information on the upper-bound values but not the counterpart lower-bound values.

Intertemporal Irrationalities

Many anomalies in individual choice stem from various forms of intertemporal irrationality. People may have inexplicably high discount rates. In an extreme case, thousands of consumers secure payday loans to meet their debts at exorbitant annual interest rates that sometimes exceed 400 percent.
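The magnitude of these implicit rates is easy to verify. The fee and loan term below are illustrative figures of the sort commonly associated with payday lending, not numbers taken from the text:

```python
# Illustrative only: annualizing an assumed payday-loan fee of $15 per $100 for 14 days.
fee_per_100 = 15.0
principal = 100.0
term_days = 14

period_rate = fee_per_100 / principal                        # 15% per two-week period
simple_apr = period_rate * (365 / term_days)                 # non-compounded annual rate
effective_apr = (1 + period_rate) ** (365 / term_days) - 1   # compounded annual rate

print(f"Simple annualized rate:    {simple_apr:.0%}")      # about 391%
print(f"Compounded annual rate:    {effective_apr:.0%}")   # several thousand percent
```

Rates of this magnitude far exceed ordinary market borrowing rates, which is what makes such choices difficult to reconcile with conventional exponential discounting.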
Empirical estimates of discount rates across a wide variety of choice contexts indicate that personal rates of time preference are often considerably higher than the government discount rates used to evaluate regulatory policies.15 Preferences may be inconsistent over time, so that instead of conforming to the usual assumption of exponential discounting with time-invariant rates of time preference, discount rate patterns may change over time in surprising ways. Consider first the standard exponential discounting model, in which the interest rate r leads to a discount factor δ, where δ = 1 / (1 + r). For utility values ut in each period t, the discounted present value v of utility is

given by

v = u0 + δu1 + δ^2 u2 + ... + δ^T uT.

Beginning with Strotz, economists have hypothesized that people do not follow this usual exponential discounting format but instead exhibit hyperbolic discounting.16 Thus, in one common formulation of this approach known as quasi-hyperbolic discounting, instead of the equation above, an additional parameter β comes into play, where β < 1 in the hyperbolic discounting case, so that people place an inordinate relative weight on the immediate outcome. In comparison, if β = 1, then the usual exponential discounting model applies. In the hyperbolic discounting case, the individual’s calculation of the present value of the utility stream above is

v = u0 + β(δu1 + δ^2 u2 + ... + δ^T uT).

Thus, all future payoffs are diminished relative to the immediate payoff value. Hyperbolic discounting will lead private decisions to have an inordinately great present orientation, making efforts to exercise self-control to promote future increases in welfare less desirable. Similarly, policies with immediate costs and deferred benefits will tend to fare less well than if people exercised exponential discounting. Note that the main impact of hyperbolic discounting is to distort the valuation of immediate payoffs versus deferred payoffs. If, however, the comparison is between policies with impacts in twenty-five years and fifty years, they will each be similarly disadvantaged by the impact of the hyperbolic discounting β term. Thus an added weight is placed on immediate versus deferred impacts, but the additional remoteness of the deferred effects does not lead to greater intertemporal biases. The potential inconsistencies arising from hyperbolic discounting are exemplified in the following example. Suppose that citizens are indifferent between two infectious disease policies—Policy A, which leads to ten fewer illnesses in four years, and Policy B, which leads to thirteen fewer illnesses in five years. In this example, citizens thus require 30 percent additional illness prevention if the policy will have its impact in five years rather than four years. Suppose instead that the time frame is more immediate and includes a policy option that yields current benefits. Policy C, which is just as effective as Policy A above, will lead to ten fewer illnesses in the current year. However, suppose that people will view Policy D, which prevents illnesses next year, as being as attractive as Policy C only if it prevents at least fifteen illnesses next year. With Policy C offering current impacts, people now require at least 50 percent more illnesses prevented if they have to wait an additional year rather than the 30 percent premium when choosing between policies with impacts in years 4 and 5. However, an inconsistency arises once we get to year 4. Suppose that we have the opportunity to choose once again between Policy A and Policy B. Even though citizens were indifferent between the policies when they both involved deferred impacts, once Policy A becomes the effort offering an immediate payoff of ten fewer illnesses, its value is enhanced because it is not diminished by the hyperbolic discounting factor β. As a result, citizens will no longer be indifferent between Policy A’s prevention of ten illnesses and Policy B’s prevention of thirteen illnesses, as they will exhibit a strong preference for Policy A.

Energy Regulations and the Energy Efficiency Gap

Intertemporal irrationalities serve as the linchpin for a wide range of energy-related regulations. A principal impetus for energy regulations has been the claim that consumers and firms are subject to intertemporal

irrationalities with respect to energy-related decisions. The U.S. Department of Energy, the U.S. Department of Transportation, and the EPA have all issued recent regulatory standards based on an assumption that energy efficiency decisions are fraught with intertemporal irrationalities affecting choices of the fuel efficiency of motor vehicles and heavy-duty trucks, as well as decisions regarding the energy efficiency of light bulbs, clothes dryers, room air conditioners, and other appliances. The general impetus for such policies is the perception that there is an “energy efficiency gap,” whereby the goods that consumers purchase fall short of the energy efficiency levels that the discounted value of the energy savings would appear to justify.17 Consider the following model based on the exposition of the energy efficiency gap model by Allcott and Greenstone.18 Suppose that there are two goods denoted by 0 and 1 with associated energy intensity e0 and e1, where e0 > e1. The energy efficiency of good 1 entails an additional upfront capital cost c > 0. In addition, the energy efficient good imposes an additional utility cost δ, which could be positive if the good generates an unobserved cost and negative if there are additional benefits. The private price of energy is given by p, and deferred impacts are discounted using an interest rate r. Let mi indicate the intensity of energy usage by consumer i. Finally, suppose that the utility cost δ and the capital cost c are incurred in the current period, while the energy efficiency gains are deferred and occur in the next period. The consumer finds the energy efficient technology worthwhile if the discounted value of the energy efficiency gains outweighs the added utility cost and the capital cost:

p mi (e0 − e1) / (1 + r) ≥ c + δ.

Thus far, there is no energy efficiency gap in the model, as households are undertaking a standard benefit-cost test. Suppose, however, that there is a weight σ < 1 on the energy cost savings, so that these savings are not fully valued by the consumer. Then the consumer purchases the energy efficient technology if

σ p mi (e0 − e1) / (1 + r) ≥ c + δ.
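To see how an undervaluation weight σ produces an apparent energy efficiency gap, consider the following small sketch; every parameter value is assumed purely for illustration.

```python
# Illustrative sketch of the adoption condition with a weight sigma on energy savings.
def adopts(p, m_i, e0, e1, r, c, delta_cost, sigma=1.0):
    """True if consumer i purchases the energy efficient good given weight sigma."""
    discounted_savings = sigma * p * m_i * (e0 - e1) / (1 + r)
    return discounted_savings >= c + delta_cost

# Assumed values: energy price p, usage intensity m_i, energy intensities e0 > e1,
# interest rate r, extra capital cost c, and no unobserved utility cost (delta_cost = 0).
params = dict(p=0.13, m_i=1500, e0=1.0, e1=0.88, r=0.05, c=20.0, delta_cost=0.0)

print("Savings fully valued (sigma = 1.0):", adopts(**params, sigma=1.0))  # True: passes the benefit-cost test
print("Savings undervalued  (sigma = 0.6):", adopts(**params, sigma=0.6))  # False: an apparent efficiency gap
```

The same consumer might also decline the investment if the personal borrowing rate were far above r, which is why distinguishing genuine undervaluation from reasonable responses to uncertainty and financing costs matters for the policy conclusion.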

This parameter σ could capture a variety of different behavioral failures. Consumers may fail to fully perceive the energy cost savings e0 − e1, or, in the case of sales and rental properties, owners may not adequately convey this information. The value of σ can also capture the influence of hyperbolic discounting as well as inordinately high private rates of time preference. An energy efficiency gap arises if, for any of these reasons, consumers place an inadequate weight on the economic value of the energy savings. Note, however, that from a theoretical standpoint, it could also happen that σ > 1, leading to inordinate investment in the energy efficient technology. Determining the existence of an energy efficiency gap, the direction of the gap, and the magnitude of the gap all require empirical evidence. Government regulatory impact analyses for these regulations seldom do more than assert that these decisions are subject to intertemporal irrationality. In the case of purchases of durable goods, the reference point used by agencies is whether the consumption decisions are inconsistent with the choices that are optimal based on the assumed government rate of interest of 3 percent. The current patterns of consumer decisions often have higher implicit discount rates than the government’s 3 percent rate.19 Since the resulting consumer decisions entail different energy choices than those found to be optimal based on engineering studies using a 3 percent rate of discount, agencies then calculate estimated regulatory benefits generated by the difference in energy usage based on social rates of discount and personal rates of time preference. What

is missing from this judgment of intertemporal irrationality is an assessment of reasonable interest rate assumptions for private consumers, who face different interest rates than do government agencies. Economists also have questioned the calculations of energy efficiency based on idealized engineering models because these models abstract from many components of the decisions that are relevant to the determination of the optimal energy efficiency decision. Individuals may have higher revealed rates of discount, not because of a failure to be forward looking but because of the uncertainty associated with the future energy savings from the investments and the high sunk costs that are involved.20 Consumers also may be imperfectly informed of the energy-saving attributes of prospective purchases. The following kinds of considerations make consumer choices quite different from the standardized consumer choice frameworks used in engineering studies of energy efficiency.21 The engineering estimates are based on highly controlled studies rather than empirical estimates of the savings that consumers actually reap. The empirical studies that have been undertaken often are based on short-term time horizons and use self-selected energy savings behaviors, which will be subject to the influence of omitted variables correlated with the energy usage decision. Failure to consider the role of these and other complications leads engineering studies to overstate the savings associated with energy efficiency efforts. Despite such caveats in the economics literature, agencies’ assumption of intertemporal irrationality is the principal driver of the estimated benefits for energy-related standards.22 Consider the corporate average fuel economy (CAFE) standards that were jointly developed by the U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) and the EPA. The NHTSA policy had a stated emphasis on fuel economy, consistent with the agency’s mission. Based on this principal policy objective, NHTSA proposed a 40.9 miles per gallon (mpg) fleetwide standard by 2021 and a 49.6 mpg standard by 2025. EPA instead framed the justification for the regulation in terms of the effect on greenhouse gas emissions. The regulatory standard advocated by EPA was a 54.5 mpg standard by 2025. Because the agency analyses are similar, table 24.2 summarizes the EPA estimates of benefits and costs. The most important benefit component is that of lifetime fuel savings, which are treated as benefits based on the agency’s assumption that the purchase of motor vehicles that obtain less than 54.5 mpg is a consumer error.

Table 24.2
EPA’s Costs, Benefits, and Net Benefits of the CAFE Rule

Input                                          Value ($ billion)
Costs
  Technology costs                             140.0
  Accidents, congestion, and noise costs*      52.0
  Total costs                                  192.0
Benefits
  Lifetime fuel savings                        444.0
  Consumer surplus from additional driving     70.9
  Refueling time value                         19.5
  Energy security benefits                     24.2
  CO2                                          46.4
  Non-CO2 greenhouse gas impacts               n.a.
  PM2.5-related impacts                        8.0
  Total benefits                               613.0
Net total benefits                             421.0

Source: EPA and NHTSA, “2017 and Later Model Year Light-Duty Vehicle Greenhouse Gas Emissions and Corporate Average Fuel Economy Standards,” Federal Register 76 (December 1, 2011): 74854, table III-82; and EPA, “Draft Regulatory Impact Analyses: Proposed Rulemaking for 2017–2025 Light-Duty Vehicle Greenhouse Gas Emission Standards and Corporate Average Fuel Economy Standards,” November 2011, table 1. Note: * indicates that these were included as disbenefits in EPA’s tables. All dollar amounts are in 2009 dollars. Estimates are for combined passenger cars and light trucks, 3 percent discount rate. CAFE, corporate average fuel economy; EPA, Environmental Protection Agency; n.a., not available; NHTSA, National

Highway Traffic Safety Administration.
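The cost total, net benefits, and the benefit share discussed below can be reproduced directly from the entries in table 24.2:

```python
# Values from table 24.2, in billions of 2009 dollars.
total_costs = 140.0 + 52.0                    # technology + accidents, congestion, and noise
fuel_related_benefits = 444.0 + 70.9 + 19.5   # fuel savings + added driving + refueling time
total_benefits = 613.0

print(f"Total costs:                {total_costs:.1f}")                   # 192.0
print(f"Net benefits:               {total_benefits - total_costs:.1f}")  # 421.0
print(f"Fuel-related benefit share: {fuel_related_benefits / total_benefits:.0%}")  # roughly 87%
```

These private fuel-related items are the components that rest on the assumption that consumers undervalue future fuel economy.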

The breakdown of the components of the benefits indicates the dominant role of the assumed market failure. Overall, 87 percent of the benefits are derived from correcting the assumed consumer irrationality. The EPA hypothesizes that “consumers put little weight on the benefits from fuel economy in the future and show high discount rates,” but the agency provides no empirical support for the extent of any market failure.23 Some of the minor dividends from the regulation pertain to environmental outcomes. The environmental benefit components consist of a 1 percent share of greenhouse gas benefits to the United States, a 6 percent share of greenhouse gas benefits to the rest of the world, and a 1 percent share of other environmental benefits. As this example illustrates, agency explorations of intertemporal irrationality and behavioral economics issues generally are not simply part of general discussions of market failures that are used to motivate the need for regulation. Instead, they may also provide the basis for estimating the preponderance of the calculated benefits generated by the regulation.

Making Decisions

How people make decisions typically falls short of idealized norms in a variety of respects. Simon, for example, hypothesized that people had bounded rationality and consequently had to rely on simpler rules of thumb for making decisions rather than undertaking the precise optimizing often hypothesized by economists.24 More recently, Kahneman has contrasted the standard economic model of decision making with an alternative model drawing on insights from the psychology literature.25 The standard textbook model of decisions is a highly reflective process that involves slow decision making that is controlled and effortful. In contrast, much of our decision making tends to be governed by an automatic system that is fast, uncontrolled, and does not involve a conscious calculation of the consequences of different choices. Kahneman refers to the fast, automatic, and intuitive mode of decision making as System 1 and the slower, controlled, and deliberative process as System 2. When conceptualizing how risk beliefs are formed, Tversky and Kahneman suggest that factors other than standard Bayesian updating come into play, as people tend to rely on three principal rules of thumb, or heuristics.26 Such rules of thumb would also be consistent with System 1 behavior. These heuristics include anchoring and adjustment phenomena, availability effects, and the representativeness heuristic. In each instance, while rational explanations for the phenomena may exist, there are also documented departures from the standard rational learning models. Anchoring and adjustment involves using some information that people know as the starting point, and then modifying it to tailor it to the specific situation. In many situations, this is a reasonable approach. Weather forecasts might be of this type. A useful starting point for guessing tomorrow’s weather pattern is to assume that the weather will be like today. This forecast can then be modified based on the perceived trend and whether today’s weather was unusual. Potential dangers arise if people fail to update their assessments with incoming information and remain locked into their initial assessment. There is also a tendency to think too narrowly in terms of the range of possibilities—that is, people fail to spread the distribution of the possible outcomes sufficiently.
Anchoring phenomena are especially prominent in personal injury litigation contexts, where the object of the anchor is often not to inform juries but to influence the award level. Plaintiffs’ attorneys often seek to establish a high dollar award anchor to serve as the reference point that the jury should use in setting the

compensation amount. Similarly, regulatory policy objectives (such as declaring a climate change goal under the 2016 Paris Agreement of no more than a 2 degree Celsius increase in temperatures by 2100) are intended to establish an anchor for judging the adequacy of greenhouse gas initiatives. Note too that failure to achieve that goal will be viewed as a loss, which, in the context of loss aversion, will impose a greater perceived decline in welfare than if no such target had been announced. As a result, such announced goals, if credible, should promote climate change initiatives. Anchoring effects also may produce more general regulatory failures. Consider the market for cigarettes, which are perhaps the most dangerous widely used consumer product. If there are safer substitute products, fostering their consumption would enhance public health. Electronic cigarettes, or e-cigarettes, do not burn tobacco and pose negligible risks relative to those posed by conventional cigarettes. Yet the public views these products as almost as risky as conventional cigarettes, rating them as 70 percent as dangerous—a risk level that is far greater than scientists’ estimates of their relative risk.27 If the public views new and safer alternative products as being as risky as currently marketed products, there will be insufficient consumption of safer product alternatives. Introduction of safer product alternatives will also be discouraged. The second heuristic that Tversky and Kahneman identify is the availability heuristic, whereby people assess the likelihood of different events based on other similar phenomena that come to mind. Once again, this phenomenon could be a sensible approach, but ease of recall of related events also may lead to biased risk beliefs. Deaths from highly publicized incidents, such as the 9/11 attack or a major hurricane, are vivid events that tend to generate excessive risk perceptions. It is noteworthy that many of the most overestimated risks in figure 24.2 are both low-probability events and highly publicized risks. Shortly after floods and other natural disasters, people have very high perceptions of the risk, but thereafter, their risk beliefs drop substantially. While such patterns may be consistent with rational Bayesian learning, very sharp swings in risk belief would not be consistent with changes over time in the levels of objective risk measures. Similarly, television images of dangerous chemicals oozing in the backyard of a household living near a Superfund site have led to very high public perception of hazardous waste risks. The availability heuristic that accounts for high public risk beliefs has also been linked empirically to distortions in the agency targeting of Superfund cleanup efforts.28 A third rule of thumb affecting the formation of risk beliefs is the representativeness heuristic, whereby people assess the frequency of events using images or profiles that they believe are similar to the current situation. Perhaps the most prominent example is the “hot hand” phenomenon in basketball.29 A streak of successful basketball shots does not necessarily imply that the shooter has an unusually high chance of making the next shot. However, after a shooter has made multiple shots in a row, announcers often proclaim that the shooter is on a streak and has a “hot hand.” To observers, such streaks surely indicate that the shooter is in the zone and has an unusually high chance of hitting the next shot.
However, such streaks occur even in situations with independent and identically distributed trials, as one can easily verify with multiple coin flips. Note that the law of large numbers does not imply that, after a series of coin flips land heads up, the next flip is more likely to land tails up. A recurring challenge for risk regulation agencies is that very consequential streaks often trigger regulatory concerns. A cluster of cancers may occur in a particular neighborhood. Or multiple cancers may affect groups of workers, such as the soldiers who cleaned up the nuclear waste on the Marshall Islands. Was the illness cluster an unlucky outcome resulting from a random series of low-probability risks, or does it signal an underlying risk meriting policy scrutiny?
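The point about streaks in random sequences is easy to check numerically. The following short Python simulation is a minimal sketch written for this discussion (not drawn from the studies cited above); it estimates how often a sequence of 100 fair coin flips contains a run of at least five identical outcomes.

import random

def longest_run(flips):
    # Length of the longest run of identical outcomes in a sequence of flips.
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

def share_with_streak(n_flips=100, min_run=5, trials=10_000, seed=1):
    # Share of simulated fair-coin sequences containing a run of at least min_run
    # identical outcomes. Each flip is independent of all the others.
    rng = random.Random(seed)
    hits = sum(
        longest_run([rng.randint(0, 1) for _ in range(n_flips)]) >= min_run
        for _ in range(trials)
    )
    return hits / trials

print(share_with_streak())  # typically well above 0.9: long streaks are the norm in random sequences

Even though each flip is independent, the overwhelming majority of simulated sequences contain such a streak, which is why an observed run or cluster is, by itself, weak evidence of an underlying cause.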

Behavioral Nudges

In many policy situations, socially beneficial actions by consumers or firms can be promoted through policies that nudge people in the correct direction but fall short of command and control regulations in their stringency. Thaler and Sunstein advocate the use of such nudges, or what they call “libertarian paternalism,” in which choices are encouraged in the socially efficient direction, but there is no mandatory requirement that a person undertake a particular action.30 Thus, these nudges entail no violation of individual autonomy. Under some conditions, such nudges are particularly attractive to efficiency-oriented economists. In particular, a very desirable policy situation is one that satisfies what Camerer et al. label “asymmetric paternalism,” or nudges that meet the following test: “A regulation is asymmetrically paternalistic if it creates large benefits for those who make errors, while imposing little or no harm on those who are fully rational.”31 In effect, this criterion is akin to a benefit-cost test, but it is also a test in which the losses incurred by rational actors whose decisions are harmed are given greater weight than the gains to the irrational actors whose decisions are enhanced.

Although nudges by design should be non-intrusive, they have come under fire for interfering with private decisions. Sunstein reviews the misunderstandings of nudges that often lead to such criticisms and provides a detailed defense of the nudges approach.32 He observes that nudges should provide no significant material incentives such as subsidies, taxes, or fines. Moreover, unlike command and control regulations, they should preserve freedom of choice. Advocates of nudges do not assume that people are irrational, nor are nudges designed to exploit behavioral biases. Indeed, the ability of people to make sound decisions is essential when using nudges in conjunction with private behavior to generate desirable outcomes. But advocates of nudges do recognize that people have bounded rationality. People make many decisions with limited information as well as limited time or attention. Consequently, it may, for example, be beneficial to provide information to assist in making these decisions or to establish default rules so that people need not incur additional costs in choosing the welfare-maximizing decision.

In many instances, it is possible to establish a behavioral nudge by manipulating the choice architecture rather than mandating the choice. Marketing departments of major companies have long recognized the importance of such default rules. If the company can get you to subscribe to a product or service that you are obligated to pay for unless you cancel, consumer inertia will tend to foster continued consumption of the product. Manipulation of the default rule may be useful in policy-related contexts as well. The current U.S. default rule for organ donation after an auto accident is an opt-in system that requires the donor to indicate that the organs will be donated after a fatal accident. If instead the system were an opt-out approach in which the default was that the organs would be donated unless the driver indicated a preference for not making such a donation, the donation rate would increase.
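The mechanics behind this claim can be seen with a minimal sketch of how decision inertia interacts with the default rule; the parameter values below are illustrative assumptions, not estimates from any study.

def donation_rate(share_preferring_donation, share_overriding_default, default_is_donor):
    # Donation rate when only a fraction of people actively override the default.
    # Those who do not act simply stay with whatever the default specifies.
    q = share_preferring_donation   # fraction who would donate if they chose actively
    a = share_overriding_default    # fraction who act rather than accept the default
    if default_is_donor:            # opt-out system: default is to donate
        return (1 - a) + a * q      # non-actors donate by default; actors follow their preference
    else:                           # opt-in system: default is not to donate
        return a * q                # only actors who prefer donating end up as donors

# Assumed numbers: 60 percent would donate if asked, 30 percent ever change the default.
print(donation_rate(0.6, 0.3, default_is_donor=False))  # opt-in:  0.18
print(donation_rate(0.6, 0.3, default_is_donor=True))   # opt-out: 0.88

With most people simply accepting whatever default they face, switching the default shifts the outcome far more than any plausible change in underlying preferences, which is the pattern the international evidence reflects.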
International differences in organ donation rates reflect the potential influence of opt-out versus opt-in approaches.33 Organ donation rates in Austria, which has an opt-out system, greatly exceed those in Germany, which has an opt-in system. Similarly, when retirement savings contributions from earnings are framed as an opt-out system rather than an opt-in system, people save much more for retirement.34 Whether these and similar behavioral nudges are socially desirable depends in part on the economic merits of the decisions being fostered, the level of transaction costs associated with making decisions other than those suggested through the behavioral nudge, and the distribution in society of those who would benefit from nudges and those who would not.

Informational policies of various kinds serve as behavioral nudges. Disclosure requirements for mortgages serve to alert people to pitfalls and consequences from substantial long-term debts. The provision of information can sometimes have dramatic impacts on market performance. Over the past half century, there has been a major policy effort to communicate the risks of smoking to the public. Warnings on the pack were required in the United States beginning in 1966, and numerous other policy initiatives followed. The dramatic decline in U.S. adult smoking prevalence rates from 42.4 percent in 1965 to 17.8 percent in 2013 is undoubtedly attributable at least in part to the societal risk communication efforts and other regulatory policies, such as smoking restrictions and higher cigarette prices.

A recent policy success story of behavioral nudges is the replacement in 2011 of the long-standing food pyramid (MyPyramid) with the “USDA food plate,” or MyPlate.35 The USDA food plate indicated the recommended portions of vegetables, fruits, grain, protein, and dairy in a comprehensible manner that was both supportive of public health and less confusing than the pyramid.36 Comparison of the food pyramid in figure 24.3 and the food plate in figure 24.4 indicates the evident superiority of the food plate approach in terms of conveying meaningful dietary information to consumers. These softer forms of regulation promote sounder choices without imposing requirements on consumer behavior.

Figure 24.3 USDA’s MyPyramid Source: U.S. Department of Agriculture, https://www.cnpp.usda.gov/mypyramid.

Figure 24.4 USDA’s MyPlate Source: U.S. Department of Agriculture, https://www.choosemyplate.gov.

Providing information through warnings can be a potentially effective regulatory strategy that works in conjunction with market forces to address market failures linked to consumer beliefs. However, even in the case of the provision of warnings, complicating behavioral factors come into play. Two such considerations are label clutter and information overload.37 Each of these concerns is a reflection of individuals’ cognitive limitations. Label clutter pertains to the presentation of the information on the label. Is the information well organized and easy to read? How easily the warning can be processed will affect the degree to which it is read and understood. Information overload pertains to the amount of information that is presented on the label. If the consumer is inundated with information, then the key information that the warning is intended to convey is unlikely to be processed. Pesticide labels and warning labels for prescription drugs often fall prey to this pitfall, in part because of strong regulatory and liability incentives to address all possible adverse concerns rather than to foster more efficient consumer decisions.

A recent example of the use of labels as a policy nudge to promote regulatory objectives is the EPA fuel economy label that was introduced in 2013 (see figure 24.5). The policy intent of the label is to foster consumer choices of more fuel-efficient vehicles. The labeling regulation was promulgated before the advent of the more recent CAFE standards. Ideally, such governmental labeling efforts should be pretested on consumers to ascertain their likely efficacy, which is a potentially important consideration because whether and how consumers process the label will influence its ultimate effect on behavior. As a starting point for evaluating these informational efforts, it is often useful to place oneself in the role of a representative consumer. An instructive policy exercise is to examine these and other such warnings to see whether they appear to convey the intended information in an effective manner or whether they could be improved by reducing label clutter and providing more parsimonious information.

The current label provides fuel economy measures from a variety of perspectives: overall miles per gallon, miles per gallon in the city and on the highway, gallons of gas used per 100 miles, the driving range, annual fuel costs, and the marginal fuel cost over the next five years. Are all of these measures instructive? Should some of these measures be pruned to make the label more parsimonious, or should additional alternative perspectives be provided? These types of issues are routinely confronted in warning label design. In addition to the dominant role of this financial fuel economy information, the label also has two environmental ratings (for fuel economy/greenhouse gas emissions and for smog), where these impacts are rated on either ordinal or cardinal 10-point scales that are of different widths. Whether the EPA was attuned to the public’s understanding of these measures or to the role of individuals’ cognitive limitations more generally in designing the label is unclear. In a subsequent analysis of fuel economy standards, the EPA did not allude to any likely impact of the label on consumers’ fuel economy decisions.
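Several of the label’s financial figures are simple transformations of the fuel economy rating, which makes it straightforward to check how the measures relate to one another. The sketch below uses assumed, illustrative values for annual mileage and the gas price rather than the EPA’s official label parameters.

def gallons_per_100_miles(mpg):
    # Convert a miles-per-gallon rating into gallons consumed per 100 miles.
    return 100.0 / mpg

def annual_fuel_cost(mpg, miles_per_year=15_000, price_per_gallon=3.50):
    # Annual fuel cost under an assumed annual mileage and gas price.
    return miles_per_year / mpg * price_per_gallon

for mpg in (20, 30, 40):
    print(mpg, round(gallons_per_100_miles(mpg), 1), round(annual_fuel_cost(mpg)))
# 20 mpg -> 5.0 gallons per 100 miles, roughly $2,625 per year
# 30 mpg -> 3.3 gallons per 100 miles, roughly $1,750 per year
# 40 mpg -> 2.5 gallons per 100 miles, roughly $1,312 per year

One point often made about such displays is that gallons per 100 miles conveys the diminishing fuel savings from successive miles-per-gallon improvements more directly than the miles-per-gallon figure itself, as the shrinking gaps in the computed values illustrate.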

Figure 24.5 Environmental Protection Agency–Department of Transportation Fuel Economy Label

Some informational interventions can serve a twofold purpose that goes beyond providing information about risks: the information could also serve to establish a social norm. Allcott examined an interesting example of this type pertaining to household energy usage.38 Households received a monthly energy report indicating their energy usage, a comparison of their usage with that of their efficient neighbors, and a comparison of their usage relative to that of all neighbors. If their energy usage was less than the twentieth percentile of the household’s neighbors, this rating was accompanied by praise that their usage was “Great,” and it received two smiley faces. Households that used more than the mean energy amount were termed “Below Average” and initially received a “frownie face,” but the frownie face practice was discontinued after customer complaints. The intermediate group received one smiley face in addition to being praised for being “Good.” The combination of information and the social norm efforts led to a 2.0 percent reduction in energy consumption. The information about the energy usage of others may be helpful in illuminating what the efficient level of energy usage should be and also may serve as a form of injunctive norm that conveys societal disapproval of inordinate energy consumption.

There are analogous results on the influence of social norms with respect to taxpayer compliance.39 The state of Minnesota ran an experiment providing different kinds of tax information to taxpayers, where one of the informational manipulations indicated that 90 percent of Minnesotans were in full compliance with the law with respect to their tax obligations. Taxpayers receiving this information were more likely to comply with the law in their own tax filings than were taxpayers who received other kinds of information, such as being told about productive uses of the tax dollars. As in the case of energy usage norms, such norms may be influential for multiple reasons. Knowing that so many people are in compliance may indicate that compliance is the most desirable tax strategy because of the prospect of significant penalties for noncompliance, or the response may reflect a simple desire to be in step with one’s peers.

Behavioral considerations may enter tax policy in other ways as well. If consumers owe money to the government as opposed to receiving a refund, they are in a loss position. The influence of loss aversion should make them more thorough in identifying ways to avoid owing taxes. That phenomenon was borne out in Swedish data, as taxpayers who expected to owe additional payments were more diligent in claiming tax deductions than when they anticipated a refund.40

A standard maxim of economists is that individual utility is decreased by having more constraints on choice. However, people choose to constrain themselves in various ways, such as alarm clocks that urge us to wake up. One might also make financial commitments to weight loss strategies or perhaps forgo purchasing highly desirable but unhealthy snacks so as not to be tempted by their presence in the home. Schelling observes that the internal battle for self-control often leads to the desirability of such internal constraints or discipline.41 In many situations, people make current choices that affect the well-being of their future selves, as in the case of cigarette smoking and addiction to dangerous products. Inadequacies in such choices affecting one’s future self have come under the designation of “internalities,” drawing a parallel with externalities. Current choices have an impact, but in this instance the external impact is on the future self of the consumer rather than on others. These types of concerns can also generate seemingly unlikely support for regulatory interventions that appear to work against personal interests. Smoking restrictions limit the situations in which people can smoke, impose convenience costs on smokers, and may decrease their ability to enjoy their smoking behavior.
While one would not expect smokers to favor government policies that impose smoking restrictions, Hersch found that there was considerable support for such efforts among smokers.42 Interestingly, the greatest support for restrictions was among smokers who had a history of unsuccessful quit attempts. Their apparent intent in supporting smoking restrictions was to use these policy limitations as a constraint that would assist in their smoking cessation efforts.

The Behavioral Transfer Test

Adopting the results from behavioral economic studies in regulatory contexts requires that they be pertinent to the particular situation.43 Evaluations of the benefits of government programs generally have what is known as a benefit transfer test, in which one should demonstrate the applicability of benefit values from one context (such as the value of a statistical life for job risks) to other fatality risk situations (such as transportation safety). Similarly, it is appropriate to impose a behavioral transfer test when using the findings from behavioral studies in one situation to evaluate regulatory policies. For example, anomalies that are evident in classroom experiments may not generalize to market situations, either in terms of the nature of the anomaly or, more likely, the magnitude of any departure from rational economic models.

Although many of the anomalies in the behavioral economics literature are derived from experiments, some of which are classroom studies with no or modest financial stakes, considerable evidence shows that many behavioral anomalies generalize to other actual decision contexts. DellaVigna reviews a substantial roster of studies that document behavioral failures regarding retirement savings, credit card usage, health club memberships, cab driver work effort, and other situations.44 Some of the identified anomalies pertain directly to specific classes of decisions that are subject to government regulation, while others serve to demonstrate that experimental results often carry over to actual decisions. Even if that is the case, it is not self-evident that the empirical magnitudes of these effects are generalizable. As the substantial literature on the WTA/WTP discrepancy indicates, there is often considerable heterogeneity in the biases that are present, with no WTA/WTP difference at all in some cases. Decision contexts differ in a variety of ways, including the size of the financial stakes, the frequency of making these decisions, the potential for learning from one’s mistakes, the availability of information to assist in the decision, and the attributes of the people making the decision, such as their education and financial status. Because of these differences, the presence and extent of behavioral anomalies may vary across decision contexts.

The same type of guidance that was discussed for stated preference studies in chapter 21 could also provide a useful framework for a behavioral transfer test. First, does the sample of respondents have preferences and risk beliefs that are similar to those in the regulated population of interest? Second, do respondents understand the commodity being valued in the study, and are the stakes in the experiment likely to generate behavior similar to that observed in market contexts? Third, does the nature of the choice situation parallel that in the market in terms of the time permitted for making the decision, the degree of repetition of such decisions, and the potential for learning? Given the hypothesized stark difference between the System 1 and System 2 decision processes, experimental structures that impose rapid response times may generate quite different outcomes than those reflected in actual decisions for which a more deliberative process is feasible. Fourth, are influential consumer attributes, such as education and family structure, shared by the experimental subjects?

Summary

The role of individual and market failures in providing a potential impetus for regulation is well established.
The developing field of behavioral economics has provided additional frameworks for conceptualizing these failures, as well as experimental and empirical evidence that calls our attention to situations in which shortcomings in individual choice are likely to arise. However, despite the flurry of economists’ interest in possible behaviorally informed interventions, these efforts remain a very modest component of regulatory policies. Their principal role to date has been in identifying and justifying potential market failures, which for the most part have led to more traditional kinds of regulations rather than to the softer approaches that are often favored by behavioral economists.

In 2010, the United Kingdom established a nudge unit known as the Behavioral Insights Team to explore possible policy innovations that could exploit behavioral insights. Some experimental policies that the group has examined mirror long-standing regulatory interventions in other countries, while others are more novel. For example, an experiment providing energy efficiency information for appliances found that the labels influenced the purchase of more efficient washer-dryer units, but not of other appliances. Another experiment was an attempt to overcome consumers’ high discounting of future rewards by offering the “Green Deal,” whereby consumers would get energy-saving improvements at no upfront cost and would repay the cost out of future energy bill savings. This effort, however, generated very little household participation.45

While these informal nudges can sometimes be influential, supplementing their influence with financial incentives is often beneficial, particularly with respect to establishing long-run incentives. Establishing the social norm of recycling and making recycling opportunities available has led to high recycling rates in the United States, but adding a financial incentive through bottle deposits on plastic water bottles on the order of 5 cents per bottle has boosted these recycling rates even higher. Similarly, there have been efforts to discourage reliance on plastic grocery bags, which have led to some reductions in their usage. Imposing a financial charge of 5 pence per bag has been particularly effective in the United Kingdom, leading to a reduction of 83 percent in plastic bag usage based on the initial pattern of consumer responses.46 Some U.S. locales have adopted similar measures, including a ban on such bags in California, while other states, such as Arizona and Missouri, have passed laws prohibiting cities and counties from regulating the use or disposition of plastic bags. As with other regulatory efforts, the optimal policy mix should be explored, coupled with an assessment of the societal benefits and costs of the intervention.

It is likely that the role played by behavioral economics in the regulatory policy arena will continue to expand. One would expect that the behavioral economics literature will identify new shortcomings in individual behavior and will improve our understanding of the empirical characteristics of these shortcomings. Such considerations will surely play a significant role in the design and evaluation of regulatory policies. In many instances, the intermediate linkage between the overall policy structure and policy outcomes is the behavior of consumers and workers whose actions the policy must influence to have any impact. Better understanding of the mechanisms for altering this behavior will enhance the development of effective regulatory policies, including nudges and other soft forms of behavioral interventions.47 As in the case of more conventional regulatory policies, learning about the efficacy of these regulatory approaches will stimulate the introduction of other policy efforts that can exploit the insights provided by past successes.

Questions and Problems

1. Kahneman distinguished System 1 and System 2 decision making. System 2 decisions are likely to be more thoughtful and rational. Can you identify any types of regulatory policies that might promote System 2 decisions? Do you believe that cooling off periods for signing contracts and waiting periods for the purchase of firearms serve a constructive role? Should the use of such waiting periods be extended to other decisions?

2. Should the use of behavioral nudges in terms of default rules be expanded? For example, should organ donation after auto accidents be based on an opt-out system rather than an opt-in system? Should purchase of health insurance and pension contributions similarly be based on opt-out systems? More generally, suppose an economic analysis has identified the efficient choice for the average consumer for a wide class of decisions. What are the pros and cons of making the average efficient choice for the population a default choice that consumers would have to override? Does it matter how difficult it is to alter these choices and the extent of heterogeneity in individual preferences?

3. A consumer purchased a new refrigerator using a credit card on which the consumer pays 15 percent interest. As a result, the consumer valued the energy savings of different models using a 15 percent interest rate. The U.S. Department of Energy analysis of appliance efficiency concluded that the consumer made the wrong choice. There are more energy-efficient models that would have been preferable based on the agency’s discount rate of 3 percent. Do you agree that the consumer is irrational?

Notes

1. William J. Congdon, Jeffrey R. Kling, and Sendhil Mullainathan, Policy and Choice: Public Finance through the Lens of Behavioral Economics (Washington, DC: Brookings Institution Press, 2011).

2. For a summary perspective on behavioral findings as they affect policy analysis, see David L. Weimer, Behavioral Economics for Cost-Benefit Analysis: Benefit Validity When Sovereign Consumers Seem to Make Mistakes (New York: Cambridge University Press, 2017). Also see Lisa A. Robinson and James K. Hammitt, “Behavioral Economics and Regulatory Analysis,” Risk Analysis 31 (September 2011): 1408–1422.

3. Daniel Kahneman and Amos Tversky, “Prospect Theory: An Analysis of Decision under Risk,” Econometrica 47 (March 1979): 263–291.

4. Daniel Kahneman, Jack L. Knetsch, and Richard H. Thaler, “Experimental Tests of the Endowment Effect and the Coase Theorem,” Journal of Political Economy 98 (December 1990): 1325–1348.

5. Charles R. Plott and Kathryn Zeiler, “The Willingness to Pay–Willingness to Accept Gap, the ‘Endowment Effect,’ Subject Misperceptions, and Experimental Procedures for Eliciting Valuations,” American Economic Review 95 (June 2005): 530–545.

6. John K. Horowitz and Kenneth E. McConnell, “A Review of WTA/WTP Studies,” Journal of Environmental Economics and Management 44 (November 2002): 426–447.

7. Serdar Sayman and Ayşe Öncüler, “Effects of Study Design Characteristics on the WTA–WTP Disparity: A Meta Analytical Framework,” Journal of Economic Psychology 26 (February 2005): 289–312.

8. Tuba Tunçel and James K. Hammitt, “A New Meta-Analysis on the WTP/WTA Disparity,” Journal of Environmental Economics and Management 68 (July 2014): 175–187.

9. For exploration of the situations in which some economists have advocated the WTA value, see Jack L. Knetsch, Yohanes E. Riyanto, and Jichuan Zong, “Gain and Loss Domains and the Choice of Welfare Measure of Positive and Negative Changes,” Journal of Benefit-Cost Analysis 3 (December 2012): 1–18.

10. John A. List, “Does Market Experience Eliminate Market Anomalies?” Quarterly Journal of Economics 118 (February 2003): 41–71.

11. Amos Tversky and Daniel Kahneman, “The Framing of Decisions and the Psychology of Choice,” Science 211 (January 1981): 453–458.

12. See W. Kip Viscusi, Jahn K. Hakes, and Alan Carlin, “Measures of Mortality Risks,” Journal of Risk and Uncertainty 14 (May 1997): 213–233.

13. See Daniel K. Benjamin and William R. Dougan, “Individuals’ Estimates of the Risk of Death: Part I—A Reassessment of the Previous Evidence,” Journal of Risk and Uncertainty 15 (November 1997): 115–133. This observation is incorporated in the context of a Bayesian learning model by Jahn K. Hakes and W. Kip Viscusi, “Mortality Risk Perceptions: A Bayesian Reassessment,” Journal of Risk and Uncertainty 15 (November 1997): 135–150. These results indicated that people used the actual death risk, the discounted lost life expectancy associated with the cause of death, and the age-specific hazard rate when forming their risk assessment.

14. The original Ellsberg paradox is discussed in the article by Daniel Ellsberg, “Risk, Ambiguity, and the Savage Axioms,” Quarterly Journal of Economics 75 (November 1961): 643–669.

15. A summary of a large literature in this area is in table 1 of Shane Frederick, George Loewenstein, and Ted O’Donoghue, “Time Discounting and Time Preference: A Critical Review,” Journal of Economic Literature 40 (June 2002): 351–401.

16. R. H. Strotz, “Myopia and Inconsistency in Dynamic Utility Maximization,” Review of Economic Studies 23, no. 3 (1956): 165–180; Edmund S. Phelps and Robert A. Pollak, “On Second-Best National Saving and Game-Equilibrium Growth,” Review of Economic Studies 35 (April 1968): 185–199; and David Laibson, “Golden Eggs and Hyperbolic Discounting,” Quarterly Journal of Economics 112 (May 1997): 443–477.

17. Adam B. Jaffe and Robert N. Stavins, “The Energy Paradox and the Diffusion of Conservation Technology,” Resource and Energy Economics 16 (May 1994): 91–122.

18. Hunt Allcott and Michael Greenstone, “Is There an Energy Efficiency Gap?” Journal of Economic Perspectives 26 (Winter 2012): 3–28.

19. Jerry A. Hausman, “Individual Discount Rates and the Purchase and Utilization of Energy-Using Durables,” Bell Journal of Economics 10 (Spring 1979): 33–54.

20. Kevin A. Hassett and Gilbert E. Metcalf, “Energy Conservation Investment: Do Consumers Discount the Future Correctly?” Energy Policy 21 (June 1993): 710–716.

21. These concerns are reviewed in Allcott and Greenstone, “Is There an Energy Efficiency Gap?”

22. Ted Gayer and W. Kip Viscusi, “Overriding Consumer Preferences with Energy Regulations,” Journal of Regulatory Economics 43 (June 2013): 248–264.

23. U.S. Environmental Protection Agency, Office of Transportation and Air Quality, Draft Regulatory Impact Analysis: Proposed Rulemaking for 2017–2025 Light-Duty Vehicle Greenhouse Gas Emission Standards and Corporate Average Fuel Economy Standards (2011), available at https://www.epa.gov/regulations-emissions-vehicles-and-engines/proposed-rule-and-related-materials-2017-and-later-model.

24. Herbert A. Simon, “A Behavioral Model of Rational Choice,” in Herbert A. Simon, Models of Man: Social and Rational; Mathematical Essays on Rational Human Behavior in a Social Setting (New York: Wiley, 1957), pp. 239–258.

25. Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011).

26. Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science 185 (September 1974): 1124–1131.

27. W. Kip Viscusi, “Risk Beliefs and Preferences for E-Cigarettes,” American Journal of Health Economics 2 (Spring 2016): 213–240.

28. W. Kip Viscusi and James T. Hamilton, “Are Risk Regulators Rational? Evidence from Hazardous Waste Cleanup Decisions,” American Economic Review 89 (September 1999): 1010–1027.

29. Amos Tversky and Thomas Gilovich, “The Cold Facts about the ‘Hot Hand’ in Basketball,” Chance 2, no. 1 (1989): 16–21; and Jonathan Jay Koehler and Caryn A. Conley, “The ‘Hot Hand’ Myth in Professional Basketball,” Journal of Sport and Exercise Psychology 25 (June 2003): 253–259.

30. Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness (New Haven, CT: Yale University Press, 2008).

31. Colin Camerer, Samuel Issacharoff, George Loewenstein, Ted O’Donoghue, and Matthew Rabin, “Regulation for Conservatives: Behavioral Economics and the Case for ‘Asymmetric Paternalism,’ ” University of Pennsylvania Law Review 151 (January 2003): 1211–1254.

32. Cass R. Sunstein, “Misconceptions about Nudges,” SSRN Working Paper 3033101, November 2017.

33. Alberto Abadie and Sebastien Gay, “The Impact of Presumed Consent Legislation on Cadaveric Organ Donations: A Cross-Country Study,” Journal of Health Economics 25 (July 2006): 599–620.

34. Brigitte C. Madrian and Dennis F. Shea, “The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior,” Quarterly Journal of Economics 116 (November 2001): 1149–1187.

35. Further discussion of the policy context for these efforts is provided by Cass R. Sunstein, Simpler: The Future of Government (New York: Simon and Schuster, 2013).

36. Ibid.

37. Wesley A. Magat and W. Kip Viscusi, Informational Approaches to Regulation (Cambridge, MA: MIT Press, 1992).

38. Hunt Allcott, “Social Norms and Energy Conservation,” Journal of Public Economics 95 (October 2011): 1082–1095.

39. This result, which is discussed in Thaler and Sunstein, Nudge, is based on Stephen Coleman, The Minnesota Income Tax Compliance Experiment: State Tax Results (Minneapolis: Minnesota Department of Revenue, 1996), available at http://www.revenue.state.mn.us/research_stats/research_reports/19xx/research_reports_content_complnce.pdf.

40. Per Engström, Katarina Nordblom, Henry Ohlsson, and Annika Persson, “Tax Compliance and Loss Aversion,” American Economic Journal: Economic Policy 7 (November 2015): 132–164.

41. Thomas C. Schelling, “Self-Command in Practice, in Policy, and in a Theory of Rational Choice,” American Economic Review 74 (May 1984): 1–11.

42. Joni Hersch, “Smoking Restrictions as a Self-Control Mechanism,” Journal of Risk and Uncertainty 31 (July 2005): 5–21.

43. W. Kip Viscusi and Ted Gayer, “Rational Benefit Assessment for an Irrational World: Toward a Behavioral Transfer Test,” Journal of Benefit-Cost Analysis 7 (April 2016): 69–71.

44. Stefano DellaVigna, “Psychology and Economics: Evidence from the Field,” Journal of Economic Literature 47 (June 2009): 315–372.

45. Michael G. Pollitt and Irina Shaorshadze, “The Role of Behavioural Economics in Energy and Climate Policy,” in Roger Fouquet, ed., Handbook on Energy and Climate Change (Cheltenham, UK: Edward Elgar, 2013), 523–546.

46. Rebecca Morelle, “Plastic Bag Use Plummets in England since 5p Charge,” BBC News, July 30, 2016, http://www.bbc.com/news/science-environment-36917174.

47. Adam Oliver, ed., Behavioral Public Policy (New York: Cambridge University Press, 2013).

Index

Note: Figures and tables are indicated by “f” and “t” respectively, following page numbers. AARP, 815 ABI, 221 Ability, 87–88 Above 890 Mc decision, 574–575 Accident costs and causes, 849 Accident rates declines of, 845–846 influences on, 844–845 injury tax and, 901 OSHA impact on, 899 population age structure and, 845 societal income and, 730, 731f trends in, 844–846 Accountability rulemaking process and, 21 in securities markets, 714, 716 Acid rain, 52 Acquisitions efficiencies from, 224 foreclosure through, 265 by New Economy companies, 382, 400, 403, 404 vertical mergers, 213 volume as percentage of GDP, 214, 216f, 217 Actavis, 356, 357 AdCenter, 418 Addyston Pipe case (1899), 154, 214 Adjustment costs, 196, 197 Adjustment phenomena, 923–924 Administrative Procedure Act, 52, 450 Advanced Micro Devices (AMD), 294–296 Advertiser-funded platforms, 412–413 Advertising competition through, 115–117 deceptive, 441 as entry cost, 180 of eyeglasses, 608–609 FTC rules, 441 liquor prices, 609–610 platforms for online, 418 socially wasteful, 80

AdWords, 418 Affordable Care Act, 877 AFL-CIO, 881 Aftermarkets defining, 359 demand and, 360 monopoly power in, 358–364 Agreements anticompetitive, 355–356 collective bargaining, 206 Paris Agreement, 805 price and, 156–158 reverse-payment settlement, 356–358 Sherman Act of 1890 and, 156 TFEU and, 156 AIDS-related drugs, 830–831 Aid to Families with Dependent Children program, 20 Airbus, 254 Airfares concentration and, 664–665 regulation effects on, 646–650, 647t, 648f Airline Deregulation Act of 1978, 446, 449, 644, 655, 656 Airline industry, 90, 625, 642. See also specific airlines anticompetitive nonprice practices, 657–659 anticompetitive pricing practices, 659–665 competition and antitrust policy after deregulation, 656–657 concentration measurement, 657 deregulation, 610, 644 dynamic productive inefficiency and, 654–655 entry and exit regulation, 645 hub-and-spoke system, 610, 650–651, 651f, 652f, 653 limit pricing in, 200, 201f load factors, 649, 650f mergers in, 220 motor-carrier regulation comparison to, 645 nonprice competition, 649 post-9/11, 729t predatory pricing in, 659–661, 662f, 663 price coordination in, 138 price regulation, 644–645 prices and concentration in, 664–665 prices and quality of service, 646–650, 647t, 648f, 649t price signaling in, 663–664 regulation and deregulation lessons, 665–666 regulation effects, 646–655 regulatory history, 643–644 regulatory practices, 644–645 safety, 655, 656f welfare estimates from price and quality changes, 653–654 Airline Tariff Publishing Company (ATPCO), 138, 663–664 Air mail, 643 Airmail Act of 1934, 643

Air pollution, 18, 735, 737 valuation, 765, 815 Airport gates and slots, 658–659 Air quality. See also Clean Air Act standards for ambient, 783 valuation of, 765 AirTran Airways, 656 Alaska, 813, 814 Albrecht v. Herald Co. (1968), 318 Alcohol advertising restrictions, 441 government liquor stores, 482 liquor price advertising, 609–610 taxes on, 3 warnings on, 856 Alibaba, 526 Alifraghis, Jacob, 137 Allis-Chalmers, 145 Allocative inefficiency, 78, 632 Alphabet, 382 AltaVista, 378 Aluminum Company of America (Alcoa), 323, 324, 327, 329–332 Amazon, 382, 526 collaborative filtering, 400 economies of scale and, 186 growth of, 375–376 1-Click Ordering, 378 pricing structures, 408 as two-sided platform, 408 video services, 507 Amazon Business, 248n39 Ambient air quality standards, 783 Ambiguity aversion to, 915 risk, 915–916 American Airlines, 104, 233, 350, 656–657, 658, 659 American Can, 214 American Express, 293, 294 American National Standards Institute, 880 American Society of Composers, Authors and Publishers (ASCAP), 155 American Stock Exchange, 149 American Tobacco, 214, 215, 329, 340 American Tobacco case (1946), 157 America Online (AOL), 217, 396, 397 Anchoring phenomena, 923–924 AndroGel, 356, 357 Android, 376, 419 Anheuser-Busch, 213 Anticompetitive agreements, 355–356 Anticompetitive behavior nonprice practices, 657–659 pricing practices, 659–665

rapid and disruptive innovation and, 421 Anticompetitive effects predatory pricing and, 344 raising rivals’ costs, 272–277 reverse-payment settlements and, 356–358 of tying, 308 of vertical mergers, 265–266 Anticompetitive offenses, categories of, 102–103 Antitrust, 96–108 competition and, 67 inequality and, 74–75 monopoly breakup through, 95 in New Economy, 379–383 rapid and disruptive innovation and, 419–426 two-sided platform analysis challenges, 415–416 Antitrust law and policy costs of standards, 99 exclusive dealing and, 292–296 global competition and law enforcement, 106–108 merger trends and, 214–217 minimum RPM, 315–317 predatory pricing and, 345–351 price fixing and, 153–167 private actions in enforcement of, 104 purpose and design of, 96–99 rationale for, 3–4 standards for, 97–99 tying and, 310–312 U.S. federal, 100–105 vertical mergers, 277–279 Antitrust Paradox, The (Bork), 97, 237 Antitrust regulation, 4–5 A&P, 365 Appalachian Coals case (1933), 155 Apple, 376 market makers for stock of, 149 RPM and, 313 Application programming interfaces (APIs), 395, 418 Appropriability, 87–88 Archer Daniels Midland, 143, 144 Areeda, Philip, 345 Areeda-Turner rule, 346–347, 349 Argentina, 106 Argos, 152 Arizona solar panels in, 681 trucking deregulation in, 636–637 Army Corps of Engineers, 732 Arrow, Kenneth, 88, 812 Arsenic, 732–733, 733t, 768, 880 Arthur Andersen, 708 Asbestos, 884, 884t

ban, 38, 53 OSHA regulations, 762, 866 Aspartame, 261 Aspen Skiing Company v. Aspen Highlands Skiing, 352 Aspirin, 839 Assessing intent to monopolize, 327–328 Assessment criteria, 9–11 Asset loophole, 234 Asymmetric paternalism, 926 Asymmetric regulation, 568–570 Atari, 378 Atchison, Topeka and Santa Fe railroad, 641 AT&T, 87, 104, 120, 160, 187, 283, 333, 375, 399 deregulation and, 576–577 essential facilities and, 353 ICC and, 570–571 influence by, 470 network effects and, 377 price cap regulation and, 438–439 as regulated supplier, 525 regulation supported by, 456 U-verse service, 507 AT&T-DIRECTV, 507 Autolite, 260 Automobile safety regulation, 833–837 costs of, 840–841, 844 Availability heuristic, 924–925 Average total cost rule (ATC rule), 346–347 Averch, Harvey, 541 Averch-Johnson Effect, 541–544 B777 airplane, 227 Baby Bells, 399 Backhauls, 639 Baer, Bill, 103 Baidu, 401 Bailouts, 715 Bain, Joe, 188–189, 190–191, 193 Bain-Sylos postulate, 193–196, 195f, 197 Bank Holding Company Act of 1956, 471 Banking branch, deregulation of, 471–473 commercial, 716 emissions trading system, 794 reform, 715 unit, 471 Banking Acts of 1933 and 1935, 445, 706–707 Bank of the United States, 703 Bank runs, 702–703 Banks competition limited between, 705–706 contagion problems and, 702

deposit insurance and, 703–705 deposits in, 701 deregulation of, 707 investment restriction, 705 monitoring of activities of, 706 reserve requirements, 705 role of, 700–701 shadow banking sector, 716 Bargaining Coase theorem as game of, 773–774, 773t collective, 206 coordination and, 136–143 symmetric, 774 Barriers to entry, 188–191 Baseload units, 531, 679n13 Baskets of services, 548 Baumol, William, 191, 641 Bauxite, 330 Bayesian models, 912–913, 923 BBC, 526 Becker, Gary, 463 Becker model, 463–467, 735 Beech-Nut, 224 Behavioral economics, 905 decision making, 923–925 irrationality and biases in risk perception, 911–914 loss aversion and reference dependence effects, 906–910 prospect theory, 906–914 risk ambiguity, 915–916 Behavioral nudges, 925–932 Behavioral remedies, 248n39 Behavioral transfer test, 932–933 Bell Labs, 87, 576 Bell System, 573, 577 Benchmark regulation, 556–557 Benefit-cost analysis, 34–36, 35f, 732–733, 735, 762 Benefit-cost test, 29 Benefit estimates, distortion of, 41–42 Benham, Lee, 609 Benzene, 762, 878 Berkey Photo, 333 Bertrand, Joseph, 127 Bertrand model, 127, 130 Best reply function, 123 Beta blockers, 832 Beta videotape format, 117, 345 Bias new source, 733, 787 in risk perception, 911–914 Big data, 399–404, 401f Bilodeau-Shell, 140 Bing, 403, 406, 413–414

Bingham, Eula, 882 Blanket licenses, 155 Block booking, 296, 299–301 Blomquist, Glenn, 833, 834 Blood-monitoring services, 296 Blue Label salt, 371 BMI case (1979), 155 Boeing, 107, 227, 253–254, 254t Bonds, investment-grade, 705 Borenstein, Severin, 664 Bork, Robert, 97, 236, 237 Boston Marathon bombing, 727 Bottled water, 250–251, 251t Bottlenecks, 353 Bouche-Tard, 140 Branch banking restrictions, deregulation of, 471–473 Brand loyalty, 92 Brand proliferation, 206–209 Braniff Airlines, 136 Bravo, 283 Breast implants, 858, 859t, 860 Breyer, Stephen, 809, 811 Brighty, Mike, 153 British Telecom, 546, 549 Broadband Internet access services (BIAS), 526–527, 582–583 net neutrality meaning and, 584–585, 586 Broadcast Music, Inc. (BMI), 155 Broadcast signal requirements, for cable television, 492 Broadcast television, 492 Brooke Group v. Brown and Williamson Tobacco (1993), 323, 346, 348–349, 351 Brown and Williamson, 348–349 Brown Shoe case (1962), 234–236, 265, 277–278 BSN, 250–251, 251t Bubble policy, emission trading systems, 793–794 Bundling, 259, 307 mortgages, 710 vertical unbundling, 671, 672 Bureau of Labor Statistics, 743, 756, 890 Bureau of the Census, U.S., 174 Burmah Oil and Gas, 213 Buses, 625 DOT regulations, 824 passenger travel, 627 Bush, George H. W., 30, 44, 52, 278, 732, 748, 794 Bush, George W., 30, 43–44, 47, 732, 860, 882 Business Electronics v. Sharp, 316 Business stealing effect, 183–184 Bus Regulatory Reform Act of 1982, 446 Buyer coalitions, 290n30 Byssinosis, 900 Cable Communications Policy Act (1984), 502, 503

Cable companies, 283, 298 deregulation of, 446 FCC size limits for, 498 fees on, 486–487, 500 franchise bidding, 479 nonprice concessions from, 500 renewal contract deviations from initial contract, 502, 502t Cable internet service, 582 Cable television bidding stage competition, 499–500 broadcast signal requirements, 492 contracts for franchises, 499 early regulation of, 492–493 economies of density and scale in, 495–498 franchise bidding cost types, 500 franchise renegotiation, 501 franchising process, 498–506 growth of, 493, 493t as natural monopoly, 494–498 performance after initial award, 500–502 politically imposed costs, 500, 501t rate deregulation, 503–504, 506 rate regulation, 502–506 reregulation, 505–506 system components, 494–495, 495f Cable Television Commission (Massachusetts), 499 Cable Television Consumer Protection and Competition Act (1992), 502, 505 Calculators, 316 California electricity bills, 673t electricity retail price cap in, 677 electricity sector restructuring in, 671–673 interconnection fees in, 682 Proposition 65, 19, 856, 911 regulation by, 18, 608 wholesale electricity prices, 673, 674f workers’ compensation claims in, 898 California Edison, 672 California Energy Crisis, 673 causes of, 674–675 implications for future policy, 677–678 retail price cap, 677 strategic withholding of electricity and, 675–676 California Independent System Operator, 672 California Power Exchange (CALPX), 672, 673, 675 California Public Utilities Commission (CPUC), 545, 550–551, 618 Canada antitrust standards in, 97 competition law in, 106 Canadian National railroad, 641 Canadian Pacific railroad, 641 Cancer-causing chemicals, 41–42

Cancer risk, 810 Capacity expansion, strategic, 205–206 Cap and trade systems, 795–797 Capital cost-reducing, 193 discipline from markets for, 481 reduced formation, 601–602 Capital requirements as barrier to entry, 189–190 as cost disadvantage, 189 Capper-Volstead Act of 1922, 101 Capture theory (CT), 54, 454, 457–459, 463, 737–738, 738t Carbon dioxide, 798. See also Social cost of carbon social cost of, 800–801 Carbon tax, 789, 799 Carbon trading systems, 800 Carcinogens, 19, 856, 916 Careerists, 451 Carnation, 370 Cartels, 115 cement, 145–146 cheating in, 143 citric acid, 143, 144, 145 compliance monitoring, 143 conflict in, 142 fines paid by, 164, 165t gasoline, 140 lysine, 67, 137, 142–143 price fixing by, 102 trade association meetings and, 144 vitamin C, 145 Carter, Jimmy, 27–29, 732, 840, 866, 882 Casino industry, strategic capacity expansion in, 205–206 Causes of death, 722, 723t Celler-Kefauver Act of 1950, 100, 215–216, 237 Cellophane case, 326–327 Cement cartel, 145–146 Cement manufacturers, 260 Census, 174–175 Census of Fatal Occupational Injuries, 756 Certificate of Need programs, 441 Chamberlin E. H., 77 Cheating in cartels, 143 collusion versus, 134–135, 134f, 143 Checker Taxi Company, 614 Chemical labeling, 883–885, 884t Chevrolet Corvair, 823 Chex, 250 Chicago School, 96, 259 exclusive dealing theories and, 286–287 foreclosure and, 266

merger guidelines and, 237–238 Chicken Delight, 298 Childproof caps, 837 China, 106 Anti-Monopoly Law, 114, 253 citric acid production in, 145 merger approvals, 107 Chloracetophenone, 884, 884t Chlorofluorocarbons, 805 Choice architectures, 926 Christie, William, 149 Christie’s, 404, 407 Chrysler, 841, 852 Cigarettes, 855t, 924 advertising restrictions, 441 hazard warning labels on, 854–855 taxes on, 3 Cinematch, 400 Citibank, 293 Citric acid cartel, 143, 144, 145 Civil Aeronautics Act of 1938, 643 Civil Aeronautics Board (CAB), 444, 449, 453, 646, 666 creation of, 643 entry and exit regulation by, 645 price regulation by, 644–645 Civil liberties, 727–729, 727f Clayton Act of 1914, 100, 101, 104, 107, 111–112, 215, 216, 252, 656 amendment in 1950, 234 price discrimination and, 365 treble damages and, 161 vertical restraints, 285 Clean Air Act, 731, 783 Clean Air Act Amendments of 1990, 796 Clean Air Interstate Rule, 796 Clear Skies Initiative, 815, 816t Climate change initiatives, 52 benefit assessments for, 917 uncertainty and, 803–804 Clinton, William, 30, 43–44, 52, 732, 882 Clorox, 185, 213, 252–253 CNN, 526 Coal mining, 738, 738t Coase, Ronald, 96, 260, 772 Coase theorem, 772–783 as bargaining game, 773–774, 773t environmental contexts and, 780–781 long-run efficiency concerns, 776 nuclear waste siting and, 781–782 pollution and, 774–776, 775t smoking and, 777–780 transaction costs and other problems, 776–777 Coaxial cable, 494, 571

Code of Federal Regulations, 48, 49f Cognitive limitations, 927–929 Coke, 261–262, 319 Coke oven occupational exposure limits, 768 Colgate-Palmolive, 241, 245 Collaborative filtering, 400 Collective bargaining agreements, 206 Collusion, 115, 130 case studies of, 146–153 challenges to, 135–136 cheating versus, 134–135, 134f, 143 communication in, 136–139 competition laws and, 106 compliance and stability of, 143–146 conflict and, 142 corporate strategy as invitation to, 138 enforcement policy, 161–167 explicit, 93, 136 fines for, 164–165 hub-and-spoke, 151–153, 153f mergers and, 220 penalties for deterring, 162n69 price wars and, 148 tacit, 93, 136 territorial restraints and, 318–319 theory of, 131–135 Comcast, 1, 213, 260, 281, 586 NBC Universal and, 283–284 Commercial aircraft, 254 Commercial banking industry, 716 Commissioner of Competition, 97 Commission rates, 606–607, 606t, 607f Commitment, 270–272 Commodity Futures Trading Commission, 716 Common costs, 528, 529 Common pool problem, 687, 689–690 alternative solutions to, 691–692 prorationing and, 690–691 Communication in collusion, 136–139 of hazards, 748, 766, 766t Communication networks, 376 Communications Act (1934), 571 Compaq, 355, 397 Compatibility of standards, 117 Compensating wage differential theory, 869–871 Compensation principle, 69, 73–75 Competition through advertising, 115–117 among banks, limiting, 705–706 destructive, 442 dynamic, 192–209

entry conditions and, 179 excessive nonprice, 80 externalities under, 218 in franchise bidding, 499–500 for government policy control, 474 imperfect, 88, 130 innovation and, 83–84, 88 monopoly pricing constrained by, 90 monopoly versus, 71–73 nonprice, 598–600, 649 oil production rates and, 690 perfect, 77, 90 predatory pricing and, 345 unconstrained, 67 upstream, 275 among video service suppliers, 507–508 welfare and, 68–88 Competition Act (Canada), 97 Competition Act of 1998 (South Africa), 98 Competition Tribunal, 97 Competitive bidding processes, franchise bidding, 482–485 Competitive equilibrium, 68 innovation and, 84–85, 84f social welfare and, 591 Competitive local exchange carriers (CLECs), 580 Competitive model of direct effects, price and entry/exit regulation, 591–594 Competitive model of price and entry/exit regulation, 591–594 Compliance, 136 cartels and monitoring for, 143 collusion stability and, 143–146 expected cost of, 891 tax, 931 uncertain costs of, 787–789, 788f CompuServ, 397 Computer operating systems, 187, 376, 395–399 Concentration, 91 air fares and, 664–665 in airline industry, 657 indexes of, 173–174 of industries, 175–176, 176t market structure and, 173–174 merger guidelines and, 176–177 price-cost margin and, 178–179 sources of, 184–187 Concentration ratio, 91, 174–176 Concessions nonprice, 500 opportunistic holdup to extract, 490 Concord Boat case (2000), 291 Conduct, 93–94 Congestion, 406 Conglomerate mergers, 213

Connally “Hot Oil” Act of 1935, 691 Conrail, 638 Conscious parallelism, 157 Consent decrees, 104, 138, 333 Conservatism in risk analysis, 916 uncertainty and, 916–917 Conservatism adjustments, 41–42 Consumer actions, 825–826 Consumer complaints, safety and, 825 Consumer expectations, network effects and, 385 Consumer preferences, franchise bidding and, 486 Consumer Price Index, 649 Consumer Product Safety Commission, U.S., 2, 8, 26, 721, 845 child-resistant cap requirements, 41 Consumer surplus, 69, 70, 231 Consumer welfare standard, 97, 238 Contagion problems, 702 Container liner shipping companies, 139 Content providers, 526, 582 Contestability, 191–192 Contestable markets, 191 Continental Airlines, 656, 659 Continental Baking, 370 Continental T.V., 319 Contingent valuation, 763, 812–815 Continuous incremental improvement, 378 Contracts for cable television franchise, 499 enforcement of, 67 exclusionary, 287–290, 394 after franchise bidding, 488–491 incomplete long-term, 490 opportunistic holdup and, 490–491 recurrent short-term, 488–490 share, 291 Contract that references rivals (CRR), 291–292 Convenience samples, 764 Coordinated effect, 218, 250–251 Coordination, 135 bargaining and, 136–143 costs, 261 mergers and, 220–221 Copying machines, 301–304, 323, 359 Corporate average fuel economy (CAFE), 46, 921, 922t, 928 Corporate leniency programs, 165–167 Corporate strategy, as collusion invitation, 138 Cost disallowances, 544–545 Cost effectiveness, 765–766 Cost-effectiveness analysis, 743 Cost-effective regulation, promotion of, 40–41 Cost estimates, distortion of, 41–42

Cost functions economies of scale in, 184, 514 economies of scope and, 514 with learning curve effect, 198 limit pricing and, 196–198 optimal pricing and, 514 Cost of entry, number of active firms and, 182 Cost per normalized life saved, 762 Cost-reducing capital, 193, 201–206 Cost-reducing processes, 83n15 Cost reductions natural monopoly transformation by, 564–567 welfare effect of, 229–230 Cotton dust standard, 33, 878–879, 881, 900 Council of Economic Advisors, 27, 28, 46, 742 Council on Wage and Price Stability, 26–27, 29 Counterfactual approach, 610–614 Cournot, Augustin, 120 Cournot game, 120–121 infinitely repeated, 131–132 Cournot model, 595 game theory in, 120–121 mergers and, 219 Nash equilibrium in, 122–125, 123n10, 129, 131–132 oligopolies in, 120–126, 129 profit maximization in, 121–123, 122f Cournot price, 124–125 HHI and, 177 Cournot solution, 124–125, 131–132 HHI and, 177 Court of First Instance (EU), 282 Courts health, safety, and environmental regulation and, 9 RIAs and, 52–53 Cowling, Keith, 81–82 Crafton, Steven, 613 Crandall, Robert, 136–137, 735 Cream-skimming, 547, 568–570, 569f, 575 Crest, 241–243, 245 Cross-subsidization, 467, 569f, 601 Crude oil, 452 domestic supply curve for, 696, 695f price controls, 684, 694–697 quantity control regulation on, 439 Culpability scores, 164 Customer foreclosure, 273 Dallas–Fort Worth airport (DFW), 350 Damages for collusion and price fixing, 161–164 escalation of, 851–853 limits on, 852, 852t

Data packets, 582–583, 585 Deadweight loss, 71 from mergers, 228 from monopoly, 78 Deadweight welfare loss (DWL), estimating, 80–82 Decentralized hazards, 808 Deceptive advertising, 441 Decision frames, 909 Decision making, behavioral economics and, 923–925 Declining-block tariff, 521 Default rules, 926–927 Deferred effects, discounting, 37 Dell, 294, 391 Delta Airlines, 656–657, 658, 659, 664 Demand elasticity, in DWL estimation, 81, 82 Demand-side sources of transformation, 564–567 Demand-side substitution, 327 Demsetz, Harold, 482 Dentsply International case (2005), 290 Department of Energy, 673 energy efficiency standards and, 919 Department of Homeland Security, 2 Department of Justice, U.S., 1, 15, 107–108 Antitrust Division, 100, 103, 176–177 ATPCO investigated by, 663–664 cable television rate analysis, 503 corporate leniency program, 115, 165–167 Frontier Airlines and, 660 GE-Honeywell merger and, 282 IBM and, 333–334 merger evaluation process, 232–233 merger guidelines, 176–177, 238 Microsoft and, 393 RPM and, 317 UPP and, 246 vertical mergers and, 278–279 Department of Labor, U.S., 2 creation of, 865 Department of the Interior, 732 Department of Transportation, U.S. (DOT), 2, 20, 26, 40, 46, 642, 757, 762, 851, 866 bus regulations, 824 energy efficiency standards and, 919, 921 regulation by, 841 Deposit insurance, 703–705 Depository Institutions Deregulation and Monetary Control Act of 1980, 707 Deposits, 701 Deregulation, 447t, 449, 625 airline industry effects of, 646–650, 647t, 648f airline safety and, 655 banks, 707 of branch banking restrictions, 471–473 of commission rates on NYSE, 607

cost savings from, 46–47 empirical evidence for ET and, 469–470 FCC and, 579–580 hub-and-spoke system development and, 650–651, 651f, 652f, 653 intermarket approach and, 608 intertemporal approach and, 608 major legislation, 448t natural monopoly changes and, 567 partial, 567 surface freight rates and, 633–636 of surface freight transportation, 629 trucking industry entry and, 639 waves of, 446 Destructive competition, 442 Deutsche Post, 206 Differential efficiency hypothesis, 178 Differentiated standards, 735 Digital, 355 Digital subscriber line (DSL), 582 Direct broadcast satellite (DBS), 283, 507 DIRECTV, 507 Disclosure requirements, 927 Discounting deferred effects, 37–38 exponential, 918 hyperbolic, 918–919 models for, 918 Discover, 293, 294 Discovery, 159, 160 DISH Network, 507 Disruptive innovation, 378, 418–426 Disruptive technologies, 87–88 Distributed generation of electricity (DG), 680–683 Diversion ratio, 242, 244–245 Dixit, Avinash, 201 Dixit game, 202–205, 204f Dodd-Frank Act of 2010, 714–716 Dollar Tree, 246 Domestic oil production restricting, 687–692 supply curve, 696, 695f Domestic Passenger Fare Investigation, 645 Domestic Policy Staff, 27 Donovan, Raymond, 28 Doubleclick, 382 Double marginalization, 262–265 Drain openers, 857, 857t Drano label, 857, 857t Driving speed, 835, 836f Dr. Miles Medical Co. v. John D. Park & Sons, 315 Dr. Pepper, 319 Drug approval

accelerating process, 831–833 strategies, 830–831 Drug side effects, 829 Dudley, Susan, 57 Duke, James B., 328 Duopoly, 115 Duplicating machines, 296 duPont, 214, 326–327, 366 Dynamic competition, 192–209 Dynamic efficiency, 94, 226 Dynamic pricing, 387 Dynamic productive inefficiency, 640–641, 654–655 Earnings sharing regulation (ESR), 550–553 Earnings variation, in PCR, 549 East Jefferson Hospital, 311 Eastman Kodak, 214, 323, 332–334 intellectual property rights and, 354 tying by, 311–312 Eastman Kodak v. Image Technical Services, 358–360, 359t, 363–364 East Texas oil fields, 690 Eaton, 291 eBay, 187, 261, 378, 380–382, 405–406 pricing structures, 407, 408 as two-sided platform, 408, 412 eBid, 407 E-cigarettes, 855, 855t, 924 Econometric studies, 248 Economic analysis, or exclusive dealing, 286–287 Economic efficiency, 10, 68 Economic regulation, 626 allocative inefficiency from, 632 control of other variables, 441 defining, 435 development of, 6 entry and exit control, 439–440 history of, 442–446, 447t, 448t instruments of, 438–441 major legislation, 445t, 448t price controls, 438–439 procedures, 452–453 process of, 449–453 quantity controls, 439 rate regulations, 6–8 reasons for, 436–438 theories of, 453–473 trends in, 443–446, 444f Economic self-interests, 41, 737, 738, 739 Economics of Climate Change, The (Stern Review), 798–799 Economic surplus, 69, 70f Economic theory of regulation (ET), 454, 458–467 branch banking deregulation and, 471–473

critique of, 468–469 empirical evidence and, 469–470 Economies of density, 495–498 Economies of scale, 75–76, 76f cable television and, 495–498 concentration and, 184–187 in cost functions, 184, 514 interstate telecommunications and, 572, 574f production technology and, 184 product-specific, 568 Economies of scope, 513, 514, 568 Efficiency, 94 dynamic, 226 mergers and, 223–227 pecuniary, 223 pricing and, 344–345 real, 223–224 eHarmony, 407 Electrical equipment price fixing case, 144–145 Electricity sector, 669 California restructuring of, 671–673 distributed generation and, 680–683 future regulation in, 683 historical, technological, and regulatory background, 669–671 PCR in, 553–554 restructuring effects on, 678–680, 678f vertical unbundling in, 671 Electric service provider (ESP), 672, 677 Electrolux, 233, 246 Electronic Arts, 284 Ellsberg paradox, 915, 917 Eloxatin (oxaliplatin), 832 Emergency Petroleum Allocation Act, 452 Emission credits, 674 Emission standards, cost-effectiveness analysis in, 53 Emission trading systems, 792–794 Empirical estimates of value of statistical life, 756–757 Energy efficiency gap, 919–923 Energy efficiency standards, 52 Energy Policy Act of 1992, 670 Energy Policy and Conservation Act, 452 Energy regulations, 919–923 Energy sector, 669 Enforcement collusion, 161–167 contract, 67 of environmental regulation, 807–817 EPA and, 807–809 global competition and, 106–108 hazardous waste regulations, 809–812 market foreclosure and, 277 merger laws, 232–254

OSHA, worker safety impact of, 891–900 OSHA strategy for, 886–890 OSHA targeting changes, 889–890 pollution sources and, 808 price fixing and, 153–167 private actions for, 104 R&D and, 421 United States antitrust law, 100–105 English auction, 483 Enron, 708 Entry free, 180–182, 597 inefficient, 530–531, 597 Entry barriers, 91–92 Entry conditions, 91 equilibrium under free entry, 180–182 market structure and, 179–180 merger evaluation and, 251–253 preemption and brand proliferation, 206–209 raising rivals’ costs and, 206 Entry/exit regulation competitive model of direct effects, 591–594 imperfectly competitive model of direct effects, 595–598 innovation and, 602–603 price theories and, 591–603 Entry presumption, 252 Entry regulation airline industry, 645 excessive nonprice competition and, 598–600 indirect effects of price regulation and, 598–601 productive inefficiency and, 600–601 surface freight transportation and, 631–632 taxicabs and, 615 trucking industry, 638–639 Environmental cleanup programs, 739, 809–812 Environmental contexts, special features of, 780–781 Environmental policies congressional voting patterns and, 735–736, 736t economic models of, 735–737 selecting optimal, 783–792 Environmental Protection Agency (EPA), 1–2, 8, 16, 20, 28, 30, 33, 45, 46, 721–722, 731, 743, 759, 768, 771, 866 asbestos ban, 762 cap and trade systems and, 796 cleanup programs and, 809–812 discounting deferred effects and, 38–39 emissions standards, 53 emissions trading systems and, 792–795 energy efficiency standards and, 919, 921, 922, 922t enforcement work, 807–809 evaluating regulation performance, 817 fuel economy label, 928–929, 930f fuel economy regulations and, 841

influence pattern studies and, 55 new source bias, 736, 787 offset trading and, 793 public misperceptions and, 913 radionuclide regulation, 762 RIA for asbestos regulation, 38, 53 senior discount and, 815 standard-setting guidelines and, 783 Superfund, 739 Toxics Release Inventory, 808 Environmental quality, 34–36, 35f Environmental regulation, 721, 771 enforcement and performance of, 807–817 enforcement option and consequences, 807–809 evaluating performance of, 817 global warming and, 797–799 group externalities and, 804–807 irreversible effects and, 797–799 market trading policies, 792–795 multiperson decisions, 804–807 Environmental tobacco smoke, 777 Equal Employment Opportunity Commission, 2 Equalizing discrimination, 631 Equilibrium. See also Nash equilibrium competitive, 68, 84–85, 591 free-entry, 180–184 innovation changing, 84–85, 84f innovation rate, 424 market, 271 models of imperfect competition and, 130 Pareto optimal, 68 political, 464–466 in Stackelberg model, 126 user, 408–410 Equilibrium innovation rate, 424 Equilibrium quantity of users, two-sided platforms, 408–410 Escalation of damages, 851–853 Essential facilities doctrine, 352–354, 355 Ethanol subsidies, 460–461 Ethylene dibromide, 759 Ethylene oxide, 733 European Citric Acid Manufacturers’ Association, 144 European Commission (EC), 98, 139 coordinated effects and, 250–251 GE-Honeywell merger and, 282 Google and, 416–419 Intel and, 295 MD-Boeing merger and, 253–254, 254t merger evaluation process, 234 merger guidelines, 238 price fixing fines, 105 European Committee of Domestic Equipment Manufacturers, 97–98

European Court of Justice, 157 European Union (EU), 106, 107–108 antitrust standards in, 97–98 cap and trade in, 795 merger approvals, 107 merger regulation, 113 Evans, David, 379, 573 Evident purpose test, 102 Excessive nonprice competition, 80, 598–600 Exclusionary contracts Microsoft, 394 surplus extracted from customers by, 289–290 surplus extracted from entrants by, 287–289 Exclusive dealing, 284 antitrust law and policy historical development, 292–293 Chicago School theory, 286–287 CRR for, 291–292 economic analysis, 285–286 Intel, 294–295 by Microsoft, 426 profit-shifting by, 426 Visa-MasterCard, 293–294 Execunet I decision (1978), 452, 575 Executive Order 11821, 26 Executive Order 12044, 27 Executive Order 12291, 29 Executive Order 12498, 29 Executive Order 12866, 30 Executive Order 13258, 30 Executive Order 13563, 31 Executive Order 13610, 31 Executive Order 13771, “Reducing Regulations and Controlling Regulatory Costs,” 31 Exit regulation airline industry, 645 cross-subsidization and, 601 indirect effects of price regulation and, 601–602 innovation and, 602–603 reduced capital formation and, 601–602 surface freight transportation and, 631–632 Expected life years lost (E(YYL)), 725 Explicit collusion, 93, 136 Exponential discounting model, 918 Export associations, 101 Externalities, 454–456 Coase theorem for, 772–783 under competition, 218 distributed generation and, 681 group, 804–807 market inadequacies and, 877 mergers and, 219 negative, 455 network, 344–345, 526

positive, 455 smoking, 777–780 Exxon, 155, 284, 813 Exxon Valdez spill, 812–815 Eyeglasses, advertising of, 608–609 Facebook, 375, 377, 381 acquisitions by, 382, 400 terms of use, 402 as two-sided platform, 407 Facebook Friend Exporter, 403 Facebook v. Power Ventures (2008), 402–403 False negatives, 99 False positives, 99 Family Dollar, 246 Family income, 74f Fannie Mae, 30 Federal agencies, spending summary for, 50t Federal Aviation Administration, 2, 655, 757, 762 Federal Communications Act (1934), 449, 492 Federal Communications Commission (FCC), 1–2, 6, 16, 283, 444–446, 452–453, 492, 503, 505, 506, 551 cable television company size limits and, 498 creation of, 571 deregulation and, 579–580 franchise fee limits set by, 500 net neutrality and, 585, 586 price-cap regulation by, 438–439 priority lanes prohibition, 585, 587 regulatory option offered by, 556 transparency requirement, 585 unreasonable interference restricted by, 585 Federal Deposit Insurance Corporation (FDIC), 445, 706 Federal Energy Administration, 452 Federal Energy Regulatory Commission (FERC), 670, 673 Federal Home Loan Mortgage Corporation (Freddie Mac), 710 Federalism, 16–21 advantages of, 17–18 Federal Maritime Commission, 444 Federal National Mortgage Association (Fannie Mae), 710 Federal Pacific, 145 Federal Power Commission, 445, 450, 453 Federal Register, 24–25, 47, 48f, 57 Federal regulation federalism debate and, 16–21 rulemaking process, 21, 22f–23f, 24–25 state regulation overlap with, 20–21 Federal Reserve, 709, 714, 715 Federal Reserve Act of 1913, 706 Federal Reserve System, 706 Federal Sentencing Guidelines, 105, 164 Federal Trade Commission (FTC), 1, 16, 103, 104, 107–108, 151, 224, 340 advertising rules, 441

brand proliferation and, 209 conglomerate merger classes, 213 Google and, 417–418 Intel and, 295, 355 MD-Boeing merger and, 253–254, 254t merger evaluation, 232 merger guidelines, 176–177 Microsoft and, 392 reverse-payment settlements and, 356 Time Warner, Turner Broadcasting, and TCI merger and, 280–282 UPP and, 246 vertical mergers and, 260, 278 Federal Trade Commission Act of 1914 (FTC Act), 100 FERC Order 888, 670 Fiber-optic cable, 494, 578 Internet access, 582 Financial market predation, 339 Financial sector crisis in, 713–714 government intervention and regulation in, 703–706 historic legislation in, 706–709 regulation in, 699 regulation rationale for, 701–703 regulatory reform, 714–716 role of, 699–701 Financial Stability Oversight Council, 715 Fines cartels paying, 164, 165t for collusion, 164–165 for price fixing, 105, 164–165 Fios service, 507 Firms cost of entry and number of active, 182 determining boundaries of, 260 interdependence of decisions, 117 multimarket, 223 profit-maximizing, 116 regulation of, 1 rival, 206, 230–231, 272–277 First-best effects, 591–594 Fisher, Franklin, 327 Fishing areas, 806–807 Fixed costs, 7 knowledge creation and, 375 Fixed proportions, 266–268, 267f Flexible wrapping materials, 326 Florida, trucking deregulation in, 636–637 Florida Power & Light Company, 145 Folgers, 340 Food, Drug, and Cosmetics Act, 827 Food and Drug Administration (FDA), 1, 18, 30, 356, 762 breast implants and, 858, 859t, 860

drug approval and, 830–831 pharmaceutical regulation by, 827 risk balancing approach, 828, 828f safety and efficacy approach, 828 side effect information requirements, 829 tamper-resistant packaging requirements, 41 Food and Drug Administration Modernization Act, 833 Food pyramid, 927 Ford, Gerald, 26–27, 732 Ford Bronco, 851 Ford Motor Company, 260, 823 Ford Pinto, 823, 849–851, 850t Ford Ranger, 851 Foreclosure, 259 through acquisitions, 265 Chicago School and, 266 customer, 273 input, 272 market, 265–266, 277 vertical mergers and, 266–270 44 Liquormart decision (1996), 609–610 Framing effects, 909, 910t Franchise bidding, 479 assessment of, 491–492, 499–502 basic elements of, 482–485 competition in, 499–500 contractual arrangements after, 488–491 cost types in, 500 information advantages of, 485 length of time, 499 via modified English auction, 483–485 potential drawbacks, 486–487 Franchise fees, inefficiency of, 486–487 Franchises, 188 Franchising process, for cable television, 498–506 Freddie Mac, 30 Free entry equilibrium under, 180–182 inefficiencies from, 597 Free-entry equilibrium determining, 180–182 socially optimal market structure and, 182–184 Free-rider effect, 460 Free-rider problem, 700–701 Free-riding, 262 Freightliner, 291 Freight transport, 437 Frequent flyer programs, 659 Friedland, Claire, 457 Friedman, Milton, 8 Frontier Airlines, 660, 661, 661n57, 663 Froogle, 416

FTC v. Actavis (2013), 355–356 Fuel economy labels, 928, 930f Fuel economy standards, 2, 3 Fully distributed cost pricing (FDC pricing), 528–530, 534 Future costs, predictions about, 541 Games bargaining, 773–774, 773t Cournot, 120–121 Dixit, 202–205, 204f infinitely repeated Cournot, 131–132 strategic form of, 118 Game theory, 115–119 Cournot model and, 120–121 formal definitions, 171 Gap, 178 Gary, E. H., 329 Gary Dinners, 329 Gas-guzzler tax, 3 Gasoline ethanol blended with, 461 lead in, 40 Gasoline cartels, 140 Gasoline price controls, 298–299 Gasoline taxes, 906 General Dynamics case (1974), 237 General Electric (GE), 4, 104, 145, 178, 214, 233, 246 Honeywell and, 107, 282 General Foods, 209, 340 General liability insurance, 846 premiums for, 847, 847t General Mills, 209, 249–250 General Motors, 4, 850, 851 General Passenger Fare Investigation, 644–645 Genetically modified organisms (GMOs), 19 Geographic markets, 234–236 Geographic regions electricity sector organization by, 670 in intermarket approach, 608 Georges Bank fishing area, 806–807 Gerber, 224 Glass-Steagall Act, 445, 706–707, 715 GlaxoSmithKline, 2, 241 Globally Harmonized System of Classification of Labeling of Chemicals, 883 Global warming, 52 assessing policies, 801–803, 802f irreversible environmental effects and, 797–799 policy options for addressing, 799–804 uncertainty and, 803–804 GMC Sierra truck, 851 Google, 101, 108, 178, 375, 376, 380, 381, 401, 402, 407 acquisitions by, 382, 404

advertisers and, 413 Android and, 419 antitrust analysis of, 416–419 Page Rank, 378 as two-sided platform, 412–413 Google+, 403 Google Play Store, 419 Google Product Search, 416 Google Shopping, 416–417 Government financial sector intervention and regulation by, 703–706 industrial organization and, 94–96 Government-controlled economies, 435 Government determination of product safety, 854–856 Government enterprises, 479–482 Government failure, 10, 15 Government policy, competition for control of, 474 Government subsidies ethanol, 460–461 problems, 517–518 Grain elevator regulation, 885 Gramm-Leach-Bliley Act of 1999, 707 Great Depression, 215, 365, 443, 444, 445 bank failures in, 702–703 oil demand reduction in, 691 Great Northern railroad, 215 Great Recession, 706, 714 causes of, 709–713 Green Deal, 934 Greenhouse gas emissions, 798, 924 Grimshaw v. Ford Motor Company, 851 Grinnell case (1966), 324 G. R. Kinney Company, 234–236, 278 Gross domestic product, 713f entry and exit controls and, 439–440 global warming and, 798–799 Group externalities, 804–807 G.S. Suppiger Co., 297 GTE Sylvania, 319 Haarman & Reimer, 143, 144 Haas Act, 614 Hahn, Robert, 584 Hall, Charles, 329 Hamilton, James T., 811 Hand, Learned, 329, 331, 420 Handrails, 880 Harberger, Arnold, 81 Harrington, Winston, 55 Hart-Scott-Rodino Act, 232, 233f Hartwell, 316 Hasbro, 152–153

Hazard communication regulation, 748, 766, 766t Hazardous wastes, 52 regulation enforcement and performance, 809–812 siting, 780–781 Hazards, decentralized, 748, 766, 766t Hazard warnings, 853–856 on drain opener labels, 857, 857t economic role of, 883–885 effective, 885 Hazlett, Thomas, 500, 505 Health, safety, and environmental regulations, 8–9, 742. See also Workplace health and safety emergence of, 721–722 workplace, 890 Health and Human Services Department (HHS), 20, 30, 31 Health risks, 872 Heckman, James, 573 Heinz, 224 Hepburn Act (1906), 450, 628 Herbicides, 315 Herfindahl, Orris, 177 Herfindahl-Hirschman Index (HHI), 177–178, 222t, 657 Hersch, Joni, 749, 875 Heterogeneity, 733–735, 786–787 Heuristics, 923–925 Hewlett-Packard, 294, 391 Hicks, John, 79 Hicksian potential compensation principle, 34 Hirschman, Albert, 177 Hoffman-LaRoche, 105, 143, 164 Holland Sweetener, 261–262 Home accidents, trends in, 844–846 Home appliances, 246 Home Depot, 246 Homeland security, 727–729 Home ownership, 710 Homer City Generation case, 53 Home Shopping Network (HSN), 281 Honda, 840 Honeywell, 107 General Electric and, 282 Horizontal Merger Guidelines (2010), 177, 415 Horizontal mergers, 213 effects of, 217–232 in New Economy, 382 real efficiency and, 224 Hot hands phenomenon, 925 Household energy usage, 930–931 Housing prices, 709–711, 711f bubble bursting, 711–713 Hub-and-spoke collusion, 151–153, 153f Hub-and-spoke system, 610, 650–651, 651f, 652f, 663–665 Hulu, 507, 526, 584

Hyde, Edwin, 311 Hyperbolic discounting, 918–919 Iacocca, Lee, 823 IBM, 104, 297, 327–328, 332–334 microprocessor purchases, 294 network effects and, 386 tying by, 284, 296 ICG Propane, 97, 98f Idaho Competition Act, 104–105 Image Technical Services, 358–360 Immigrant workers, 875–876 Imperfect competition, 88 equilibrium in models of, 130 Imperfectly competitive model of direct effects, price and entry/exit regulation, 595–598 Implementation, 449 InBev, 213 Incentive regulation, 539, 546–554 Income environmental policy voting patterns and, 736 family, 74f societal, accident trends and, 730, 731f Income inequality, 74–75 Income taxes, 435 Incomplete long-term contracts, 490 Incremental cost, 568 Incumbent local exchange carriers (ILECs), 580–581 Independent regulatory commissions, 450, 451t Independent service organizations (ISOs), 358, 364 Individual investors, 700 Individuals deposits in banks from, 701 regulation of, 1 Industrial organization, 67, 88–96, 173 Industrial Revolution, 669 Industry classification, 175 Inefficiency allocative, 78, 632 dynamic productive, 640–641, 654–655 of franchise fees, 486–487 from free entry, 597 productive, 600–601, 632 static productive, 638–639 workplace health and safety and potential, 867–868 Inefficient entry avoiding, 530–531 entry/exit regulation and, 597 Inequality, 74–75 Inflation, 541 RCMs and, 552 Influence pattern theories, 54–55 Informational policies, as behavioral nudges, 927

Informational problems, 874–875 Informational regulation, 856 Information dissemination, 913–914 Information overload, 927–928 Information requirements consumer preferences, 486 of franchise bidding, 485, 486 Inherent effect test, 102 Injuries, factors affecting, 893–896 Injury tax, 901 Innovation ability for, 87–88 appropriability of, 87–88 equilibrium rate of, 424 major, 86–87, 86f minor, 84–85, 84f monopoly versus competition, 83–84, 88 in OSHA regulation, 885–886 price and entry/exit regulation and, 602–603 rapid and disruptive, 378, 419–426 regulation and, 602–603 regulatory lag and, 603 replacement effect and, 87 in surface freight industry, 640–641 types of, 83n15 Input foreclosure, 272 Input prices, entry costs and, 180 Insecticides, 807 Instagram, 382 Installed base, network effects and, 388, 389f Installed base opportunism, 364 InstantUPCCodes.com, 137 Insurance deposit, 703–705 smoking costs, 779, 779t Integrated monopolists, 270, 271t Intel, 355 exclusive dealing and, 294–295 market makers for stock of, 149 Intellectual property rights, 354–358, 375, 379 Intent, to monopolize, assessing, 327–328 Interagency Working Group on the Social Cost of Carbon, 800–801 Intercity freight modal shares of, 626t sectors of, 626–627 Intercity telecommunications, 625 Interconnection fees, 682 Interdependence of firms’ decisions, 117 recognized, 120 Interest groups, 459, 460 legislator election and, 467

pressure exerted by, 463–464, 466 theory of competition, 474–477 Interest rates, usury laws and, 611–614 Intergovernmental Panel on Climate Change (IPCC), 798 Intergraph, 355 InterLATA services, 576, 577–578 Intermarket approach, 607–610 Internalities, 931 International Paper, 214 International Salt case (1947), 310, 312 International Truck and Engine, 291 Internet broadband access services, 526–527, 582–586 cable service, 582 connection types, 584t structure, 582–583, 583f usage, 581, 581t Internet backbone services, 586 Internet Explorer (IE), 284, 392, 393, 419 Internet service providers (ISPs), 395, 396, 397 Interstate banking and branching, 471 Interstate Circuit case (1939), 156, 158 Interstate Commerce Act of 1887, 442, 449, 450, 628, 630 Interstate Commerce Commission (ICC), 6, 54, 442, 444, 450, 453, 628, 642 air mail and, 643 entry/exit regulation by, 631–632 exemptions from regulation, 635 interstate telephone service regulation by, 570–571 price regulation by, 630–631 reasons for creation of, 629–630 trucking industry and, 631 unprofitable routes and, 638 Interstate telecommunications market (ITM), 570–581 computers and, 572 economies of scale, 572, 574f microwave era regulatory policy, 574–576 regulated competition to unregulated competition, 577–578 regulated monopoly to regulated competition, 576–577 regulatory background, 570–571 Interstate telephone service, 443 Intertemporal approach, 605–607 Intertemporal irrationalities, 917–919 Investment bank activity monitoring and, 706 bank restriction for, 705 economic growth and, 699–700 over, in solar DG, 682, 682n18 railroad maintenance, 640 in safety, 891, 892f Investment-grade bonds, 705 iOS, 376, 405 iPhone, 405

iPod, 304 Irrationalities, 874–875 intertemporal, 917–919 prospect theory and, 911–914 Irreversible environmental effects, 797–799 ISIS, 727 ITA Software, 382 Japanese beetles, 807 Java, 397, 398 Jerrold Electronics, 298 JetBlue, 233 Jitter, 583 Job health and safety, 865–867 markets promoting, 867–868 Job-related fatalities, 756 Job safety regulations, 20 Johnson, Leland, 541 Joint Executive Committee (JEC), 146, 627, 628, 630 Joint Traffic case (1898), 214 Joskow, Paul, 348, 671 Judicial review, of RIAs, 52–53 Jungbunzlauer, 143 JVC, 117 Kahn, Alfred, 7, 449, 644, 653, 659 Kahneman, Daniel, 906, 909, 923 Kaiser Aluminum, 332 Kaufman, Irving, 323, 333 Kefauver-Harris amendments, 827 Kellogg, 185, 209 Kennedy, Edward, 644 Kessler, David, 860 Kevlar, 366, 368–369 Kia, 825 Klevorick, Alvin, 348 Knowledge creation and application of, 375 incomplete, of risks, 874–875 spillovers, 226 Kodacolor II film, 333 Kohlberg, Kravis, Roberts & Co., 217 Kovacs, Ernie, 823 Kraft, 250 Kroszner, Randall, 471, 473 Krupnick, Alan, 55 Label clutter, 927 Labor market job safety and, 865, 868–869 value of statistical life model, 751–756, 755t Labor unions

antitrust law and, 101 strategic use of, 206, 639 wages and, 80 La Guardia Airport, 233, 658 Latency, 583 Lead in gasoline, 40 Lead pollution, 817 League of Conservation Voters, 739 Learning curves, 197–198, 227 in recommendation systems, 400 Leegin case (2007), 316–317 Legislation deregulation, 448t economic regulation, 445t, 448t financial sector, historic, 706–709 regulatory, 449–450 regulatory reform, 32–33 Legislators, regulators and, 467 Lemons problem, 700–701, 853, 853t Lerner index, 324–325, 380 Leveraged buyout (LBO), 217 Leverage theory, 296 Leveraging, modern theories of, 304–310 Liability costs, 9 Libertarian paternalism, 925 Licenses blanket, 155 as entry cost, 180 Liggett, 349 Limit pricing, 193–195, 330 in airline industry, 200, 201f determining limit price, 195f strategic theories of, 196–200 Linear pricing, 516–519 LinkedIn, 377, 382 Liquidity services, 701 Liquid Paper, 856 Liquor price advertising, 609–610 Liquor stores, state, 482 Litigation personal injury, 924 private, 161 regulation through, 858 Littlechild, Stephen, 546 Littlewoods, 152 Live Nation, 260, 279 Load factor, 649, 650f Loblaws, 140 Local exchange and transport areas (LATAs), 576 Local telephone service, 187, 354 cable service and, 503, 506 economies of scope and, 513

Lock-in, 359–360, 364 Logrolling, 739 Long-distance passenger travel, 627 Long-distance telephone service, 437, 470 economies of scope and, 513 price cap regulation on, 438–439 Long Lines, 576 Long-run average cost (LRAC), 436, 437 Long-run efficiency, Coase theorem and, 776 Long-run marginal cost (LRMC), 436, 533–536, 535f Long-term contracts, incomplete, 490 Los Angeles Times, 149 Loss aversion, 906–910 Lost life expectancy (LLE), 725 Lotteries, 482 Lowe’s, 246 Low-probability events informational problems and, 874 overestimation of, 913 Loyalty discounts, 291 Lulling effect, 837–839 Lutter, Randall, 767 Lycos, 378 Lyft, 2, 18, 413, 414, 617 Lysine cartel, 67, 137, 142–143 MacAvoy, Paul, 881 McDonald’s, 851 McDonnell-Douglas (MD), 107, 227, 253–254, 254t McGee, John, 338, 339 Mac OS, 395, 396 Magat, Wesley, 55 Mail rates, 643 Maleic anhydride, 762 Malone, John, 506 Mandatory Oil Import Program (MOIP), 692–694, 693f Mann-Elkins Act of 1910, 443, 570 Mannesman, 139 Manufacturer-retailer relationship, 262 Manufacturer-retailer restraints, 312–317 Marginal benefits, in benefit-cost analysis, 35 Marginal cost pricing, 516–517 Marginal costs, 7, 230 in benefit-cost analysis, 34–35 cost-reducing capital and, 202, 203f in DWL estimation, 82 knowledge creation and, 375 long-run, 436, 533–536, 535f OSHA standard level setting and, 878 in Ramsey pricing, 522–526 short-run, 532–534, 533f, 535f, 536 of software distribution, 380–381

Marginal revenue curve innovation and, 84 in standard monopolies, 410 two-sided platforms and, 411–413 Marginal willingness-to-pay, 69, 230 Margin of safety, 783 Market-clearing price, 435 Market comparison method, 247–248 Market concentration price and, 380 price-cost margin and, 178–179 Market demand, economies of scale and, 76 Market dominance big data and, 402 network effects and, 378, 386, 388, 390, 402 sustained, 386 Market entry barriers to, 188–191 business stealing effect in, 183–184 profitability of, 183 rapid and disruptive innovation and, 420–426 regulatory control of, 439–440 Market equilibrium, 271 Market exit, regulatory control of, 439–440 Market extension merger, 213 Market failure, 15, 466 behavioral economics and, 905 consumer complaints and, 825 externalities and, 455–456 Market forces, 435 Market foreclosure, 265–266 antitrust enforcement and, 277 Marketing alliances, 659 Market makers, 148–151 Market power defining, 89–91 extension of, 266–270 mergers and, 218–223 predatory pricing and, 345 refusal to deal and, 359 restoration of, 270–272 tying and, 359 Market power hypothesis, 178 Markets competing for, 380 contestable, 191 defining, 89–91, 238–241 domination of, 80 inadequacies in, 874–877 with network effects, 383–399 organization of, 75–78 safety promoted through, 867–868

segmented, 875–877 tying to protect, 308–310 Market structure concentration, 173–179, 184–187 entry conditions, 179–184, 188–192 free-entry equilibrium and, 182–184 socially optimal, 182–184 Market trading policies, 792–795 Marlboro Friday, 94 Marshall, Thurgood, 102 Massachusetts Cable Television Commission, 499 cable television franchise bidding in, 499 cable television system post-bid performance, 501 liquor price advertising in, 610 Massachusetts State Commission, 443 MasterCard, exclusive dealing and, 293–294 Matsushita v. Zenith (1986), 348 Maxwell House, 340 MCI WorldCom, 217 MD-11 airplane, 227 Medco, 278 Medicare, 877 Merck, 278 Mergers, 89 in airline industry, 220 antitrust laws and trends in, 214–217 collusion and, 220 competition laws and, 106 conglomerate, 213 coordinated effects and, 250–251 coordination and, 220–221 cost savings from, 224–227, 225f deadweight loss from, 228 DOJ and FTC guidelines for, 176–177, 237–238 efficiencies and, 223–227 entry conditions and, 251–253 evaluation activity and procedures, 232–234 externalities and, 219 HHI and, 177–178 horizontal, 177, 213, 217–232, 382, 415 international issues, 253–254 law and enforcement, 232–254 market definition and, 238–241 for monopoly, 214 multimarket firms and, 223 reasons for, 218–227 settlement of cases, 104 upward pricing pressure and, 241–246 vertical, 213, 259–270, 272–279, 382 waves of, 214–216, 215f welfare analysis for, 227–232

Merger simulation, 248–250 Message-toll service (MTS), 570, 571 Methylene chloride, 882 Metropolitan Transit Authority, 618 Microsoft, 4, 95, 101, 104–105, 259, 324, 332–334, 375–376, 380–381, 383, 391, 419 acquisitions by, 382, 403 antitrust cases and, 392–399 exclusionary contracts, 394 exclusive dealing and, 426 market makers for stock of, 149 network effects and, 386 stock prices, 149, 150f tying by, 284, 296 Yahoo! and, 403 Microsoft Excel, 392 Microsoft I decision, 393, 394, 420 Microsoft II decision, 393, 394–395, 420 Microsoft Windows, 187, 375, 394, 395–399 Microsoft Word, 344, 376, 392 Microwave Communications Incorporated (MCI), 353, 439, 446, 452, 575, 577 Microwave relay stations, 492 Microwave telephone systems, 437 regulatory policy and, 574–576 Microwave transmission, 571 Milgrom, Paul, 261 Mill, John Stuart, 757 MillerCoors, 221 Milyo, Jeffrey, 609 Minimum efficient scale (MES), 186–187, 186t Minimum RPM, 312–317 Minimum wage, 435 Ministry of Commerce, China (MOFCOM), 107, 253 Mobil, 155 Modification of Final Judgment, 579 Modified English auction, franchise bidding via, 483–485 Molson Coors Brewing Company, 221 Monopolies, terminating, 585 Monopolistic industries, 463 Monopolization assessing intent, 327–328 competition laws and, 106 elements of, 328 establishing claims, 324–328 price discrimination and, 323 requirements for claims, 396 Monopoly competition versus, 71–73 deadweight loss from, 78 defining market and, 89–90 innovation and, 83–84, 87, 88 market power and, 90 merger for, 214

multiproduct, 511–516 natural, 75, 76f, 436 replacement effect and, 87 social cost of, 71 waste induced by, 79–80 welfare loss from, 80–83 x-inefficiency and, 78–79 Monopoly power, 1 in aftermarkets, 358–364 defining, 324, 326 measuring, 324–327 substitution and, 326–327 tying extending, 305–308 Monopoly prices, two-sided platforms and, 410–413 Monopoly pricing, competitive constraint on, 90 Monopoly rents unions and, 80 waste and, 79–80 Monsanto, 261–262 Monsanto v. Spray-Rite Service, 315 Montreal Protocol on Substances that Deplete the Ozone Layer, 804 Moody’s, 710 Moore, Thomas Gale, 642 Morgan, J. P., 214 Morgan Envelope, 297 Morrall, John, 767 Mortality cost, 767 Mortality risk, 722–723, 724t, 726t measuring, 724–725 perceived versus actual, 912–913, 912f Mortgage-backed securities, 713–714, 716 Mortgages bundling, 710 default rates on, 712–713, 712f disclosure requirements for, 927 housing prices and, 709 residential, 611–614 subprime, 710 usury laws and, 611–614 Morton Salt case (1948), 371 Mother Jones (magazine), 850 Motivation cost, 261–262 Motor Carrier Act of 1980, 446, 628–629, 632, 635–636, 638–639 Motorola Mobility, 382 Motor vehicle accidents, trends in, 844–846 Movie Channel, The, 278 MS-DOS, 392, 394 MSNBC, 283 Mueller, Dennis, 81–82 Multibank holding companies (MBHCs), 471 Multichannel video programming distributors (MVPDs), 283–284 Multihoming, 414

Multimarket firms, mergers and, 223 Multipart tariffs, 521–522 Multiperson decisions, 804–807 Multiperson Prisoner’s Dilemma, 805, 806f Multiproduct monopoly Ramsey pricing, 523–525 subadditivity and, 511–516 Multiproduct natural monopolies, 568–569 Munn v. Illinois (1877), 442 MyPlate, 927, 929f MyPyramid, 927, 928f Nabisco, 249–250 Nader, Ralph, 823 Nasdaq market makers and, 148–151 price transparency and, 148–151 Nash equilibrium, 119 in Bertrand model, 127 in Cournot model, 122–124, 123n10, 129, 131–132 product differentiation and, 128–129 raising rivals’ costs and, 274 trigger strategies and, 135 Nashville Coal Company, 292 National Academy of Sciences, 782, 848 National Fire Protection Association, 880 National Highway Traffic Safety Administration, 2, 8, 28, 721, 762, 841 energy efficiency standards and, 921 National Pollutant Discharge Elimination System, 20 National regulations, advantages of, 18 National regulatory agencies, 15 National Research Council, 800 National Safety Council, 894 Natural Gas Act (1938), 450 Natural gas prices, 674 electricity prices and, 679 retail electricity prices and, 679 Natural monopolies, 75, 76f basis for regulation of, 561–564 multiproduct, 511, 568–569 normative rationale for regulation and, 454 permanent and temporary, 437–438, 437f price regulation and, 511 regulatory response to changes in, 567 sources of transformation of, 564–567 transformation of, 571–574, 572f Natural monopoly, cable television as, 494–498 Natural monopoly problem, 436, 479 NBC Universal, 213, 260 Comcast and, 283–284 Nebbia v. New York (1934), 443, 444 Negative externalities, 455

Negligence standard, 848–849 Nestlé, 250–251, 251t Netflix, 400, 507, 526, 584, 586 Net metering, 681, 682 Net neutrality, 498, 581–588 Internet structure, 582–583, 583f meaning of, 584–585 rationale for, 585–588 Netscape Navigator, 395, 397, 398 Netting, 793 Network capacity, 587, 587n41 Network effects, 187 in advertiser-funded platforms, 413 consumer expectations and, 385 defining, 376 economics of markets with, 383–392 installed base and, 388, 389f market dominance and, 378, 386, 388, 390, 402 markets with, 383–399 properties of markets with, 385–386 two-sided platforms and, 406 Network externalities, 344–345, 526 New Economy antitrust issues in, 379–383 economic fundamentals of, 375–379 mergers and acquisitions in, 382, 400, 403, 404 speed of, 420 two-sided platforms and, 405 New products, 83n15 News America Marketing, 138 New source bias, 733, 787 New York regulation by, 608 workers’ compensation claims in, 898 New York City housing prices in, 709 surge pricing in, 621–622 taxicabs in, 614, 615 taxi medallion values in, 616–617, 617f, 618 Uber and, 618, 621 New York Stock Exchange (NYSE), 148, 149, 606–607 NIMBY phenomenon, 781, 782 Nintendo, 284, 376, 378 Nissan, 840 Nitrous oxide emission credits, 674 Nixon, Richard, 26–27, 684 Noam, Eli, 496, 497 Noise pollution, 734 Noise standards, 734–735, 734t Noll, Roger, 54 Nominal interest rates, 611–612 Nonconvexities, 780

Nonlinear pricing, 519–522 Nonprice competition in airline industry, 649 excessive, 598–600 Nonprice concessions, 500 Nonprice practices, anticompetitive, 657–659 Nordhaus, William, 799, 802 No-risk society, infeasibility of, 725–726 Normative analysis as a positive theory (NPT), 454, 456–457, 459, 469 branch banking deregulation and, 471–473 North American Industry Classification System (NAICS), 175 Northeast Canyons and Seamounts Marine National Monument, 807 Northern Pacific case (1958), 310 Northern Pacific railroad, 215 Northern Securities case (1904), 215 Northwest Airlines, 350–351, 656, 659 Notice of Proposed Rulemaking (NPRM), 24 Nuclear power plants, 544–545 Nuclear Regulatory Commission, 8, 721 Nuclear wastes, 781–783 Obama, Barack, 30–31, 44, 45, 732, 807, 882 Occupational Safety and Health Act, 32, 865, 877, 878 Occupational Safety and Health Administration (OSHA), 8, 20, 26, 28, 51, 721, 722, 731, 733, 743, 767 accident rate impact of, 899 arsenic regulation, 768, 880 asbestos regulations, 762, 866 changes in standards, 882–883 changes to, 866–867 chemical labeling rules, 883–885 coke oven occupational exposure limits, 768 cost-benefit analysis by, 878, 880 cotton dust standard, 33, 878–879, 881, 900 creation of, 865 criticism of, 865–866 enforcement strategy, 886–890 enforcement targeting, 889–890 grain elevator regulation, 885 handrail regulation, 880 hazard communication regulation, 748, 766, 766t impact on safety, 896–900 injuries and, 893–896 injury tax, 901 innovations in regulation, 885–886 inspection policies, 887–888 nature of standards, 880–882 penalties, 888–889 potential inefficiencies, 867–868 reform of standards, 882 regulatory approach of, 877–883 regulatory reform initiatives, 882 standard level setting, 878–880

trivial violations, 888 worker safety impact of enforcement, 891–900 workplace standards, 899–900 Office Depot, 247–248, 248n39 OfficeMax, 247 Office of Credit Ratings, 716 Office of Electricity Regulation, 554 Office of Fair Trading (OFT), 404 Office of Information and Regulatory Affairs (OIRA), 29, 31, 45, 53 Office of Management and Budget (OMB), 16–17, 21, 27, 29–30, 33, 37, 43, 50–52, 51t, 732, 759, 767, 880 OSHA hazard communication regulation and, 748 regulation costs and, 762–763 reviews by, 24–25 Office superstores (OSS), 247, 248 Offsets, 793 Oftel, 549 O’Hare Airport, 658 Oil sector, 669 economic regulation in, 683–697 import restrictions, 692–694 industry background, 683–684 price ceiling effects on, 684–687 price freezes, 684 production rate decisions, 687–689 production restriction rationale, 687–692 property rights and, 689 prorationing, 690–691 Oil spills, 812–815 Oligopolies, 76–77, 115, 595 Bertrand model, 127, 130 Cournot model, 120–126, 129 defining, 119–120 entry and, 182 interdependence of firms’ decisions in, 117 product differentiation and, 128–130 Stackelberg model, 126–127 theory, 119–130 Olson, Mancur, 459 Olson, Mary, 832 110 Pocket Instamatic system, 333 Online auctions, 187 Online video distributors (OVDs), 507–508 On-the-job experience and, 873–874 Open-wireline system, 571, 572 Operating costs of government enterprises, 481 opportunistic holdup and, 491 Operating systems, 187 Opportunistic holdup, 490–491 cable company size limits and, 498 television programming and, 498 Optimal pricing

linear pricing, 516–519 of multiple services, 522–526 nonlinear pricing, 519–522 policies for, 516–522 price regulation and, 511 of single service, 516 in two-sided markets, 526–527 Oral ascending auction, 483 Organ donation, 926–927 Original equipment manufacturers (OEMs), 391, 394, 395, 399 OS/2, 386 Otter Tail Power Co. case (1973), 353–354 Output Restriction Rule, 347 Overcapitalization, 543, 544, 545 Overinvestment, in solar DG, 682, 682n18 Oversight process criteria applied in, 40–43 impact of, 43–53 Oxaliplatin, 832 Ozark Airlines, 658 Ozone layer, 804–805 PACCAR, 292 Pacific Bell, 550–551 Pacific Gas & Electric, 672 Paddock Laboratories, 356 Page Rank, 378 Pain-and-suffering awards, 852, 852t Panzar, John, 191 Parallelism conscious, 157 coordinated effects and, 250 Parallelism plus standard, 158–159 Parallel pricing behavior, 158 Pareto criterion, 68–69, 73 Pareto efficiency, political equilibrium and, 464 Pareto optimal equilibrium, 68 Pareto optimality, 68 Paris Agreement, 805 Partial deregulation, 567, 568 Patents, 91, 354, 356 Payday loans, 917 Payoff function, 118 Payoff matrix, 116, 116f PayPal, 382 Peaker units, 531, 679n13 Pecuniary efficiencies, 223 PEG channels, 500, 502 Peltzman, Sam, 54, 459, 462–463, 462f, 834, 836 Penetration pricing, 388 Penn Central Railroad, 629 Pentagon, 727

Pepsi, 261, 319 Pepsico, 213 Perfect competition, 77 market power and, 90 Perfectly contestable markets, 191 Perfect price discrimination, 299–300 Performance, 94 Performance based regulation (PBR), 539 in electricity sector, 553–554 Performance-oriented standards, 737 Permanent natural monopolies, 437–438 Perrier, 250, 251, 251t Per se rule, 102, 154 Personal injury litigation, 924 Pet Milk, 370 Pharmaceutical industry, 379 premanufacturing screening, 827–833 Phase IV, 684 Philadelphia, cable television franchise bidding in, 499 Philadelphia National Bank case (1963), 235 Philip Morris, 94 Photocopiers, 323 Physicians’ Desk Reference, 829 Pipelines, 625 Pittsburgh Plate Glass, 214 Pizza Hut, 213 Plastic bags, 934 Plastic water bottles, 934 Platforms, two-sided, 404–419, 405f, 406f Players, 118 rational, 119 PlayStation, 376, 378 Playtex, 185 Plus factor, 158 Pocket Instamatic photographic system, 333 Poisoning, 839 Policy evaluation, 731–739 benefit-cost analysis in, 732–733 heterogeneity and, 733–735 political factors in, 735–739 principles for, 744–746 regulatory standards, 732 survey approaches to valuing effects, 763–765 Policy triggers, for cleanups, 809 Political equilibrium, 464, 465, 466 Political lobbying, 464 in cable franchise bidding, 500 Politically imposed costs, 500, 501t Politicians, 451 Pollution air, 18, 735, 737, 765, 815 Coase theorem and, 774–776, 775t

markets for, 780 optimal policy selection, 783–792 property rights and, 775, 775t sources and enforcement of, 808 water, 735, 774–776, 775t Pollution control, 783, 784f heterogeneity role in policies, 786–787 marginal cost curves for, 787, 788f setting taxes, 784–785 Pollution standards, 733 Pollution taxes, 784–785, 790f prices versus quantities, 790–792 setting, 789–790 Pollution victims, 743 Positive externalities, 455 Positive feedback, 385 Posner, Richard, 158, 189–190, 365, 456, 467 Post-Chicago doctrine, 259 Postmerger prices, estimating, 247–250 Potential compensation principle, 34 Powder River Basin area, 796 Power production average daily load curve, 532f costs of, 531–532 prudence reviews and, 544–545 Power Ventures, 402–403 Prager, Robin, 630 Pre-Chicago doctrine, 259 Predatory pricing, 193, 323, 334–337 in airline industry, 659–661, 662f, 663 antitrust policy and, 345–351 Areeda-Turner and other single-parameter rules, 346–347 efficiency rationales, 344–345 reputational theory of, 340–344 theories of, 338–344 two-tier rule, 348–349 Preemption, 206–209 Premanufacturing screening, 827–833 Premium natural/organic supermarkets (PNOS), 240–241 Prescription Drug User Fee Act, 831, 832 Present value, 37–40 Present value of earnings approach, 747 Price agreements and, 156–158 entry/exit regulation and, 591–603 government enterprises and, 481 market concentration and, 380 New Economy companies and, 380 regulatory role of, 42–43 as two-sided platform, 407–419 Price cap period, 546 Price cap regulation (PCR), 438–439, 539, 546–549, 555–556

earnings variation, 549 in electricity sector, 553–554 ESR and, 550–553 pricing flexibility, 547–548 service quality and, 553 Z factors, 548–549 Price ceilings effects of, 684, 685f, 686f oil sector and, 684–687 Price controls, on oil, 684, 694–697 Price-cost margin, 242–243 in DWL estimation, 81 market concentration and, 178–179 Price discrimination, 299–304 monopolization and, 323 theory of, 365–369 types of, 365–366 Price fixing, 214, 663–664 antitrust law and enforcement and, 153–167 by cartels, 102 damages for, 161–164 electrical equipment, 144–145 enforcement policy, 161–167 fines for, 105, 164–165 legal procedures and, 159–161 by toy stores, 152 Price floors, 567 Price freezes, on oil, 684 Price regulation, 438–439 airline industry, 644–645 competitive model of direct effects, 591–594 cross-subsidization and, 601 excessive nonprice competition and, 598–600 imperfectly competitive model of direct effects, 595–598 indirect effects of entry regulation and, 598–601 indirect effects of exit regulation and, 601–602 innovation and, 602–603 optimal pricing and, 511 productive inefficiency and, 600–601 in railroad industry, 630–631 reduced capital formation and, 601–602 tying and, 298–299 Price signaling, 663–664 Price squeezes, 364 Price structure, 407, 569 Price-taking assumption, 68 Price transparency, Nasdaq market makers and, 148 Price umbrellas, 329 Price wars collusion and, 147 railroads and, 146–147, 147f, 442, 627 Pricing. See also Predatory pricing

anticompetitive practices, 659–665 dynamic, 387 limit, 193–200, 201f, 330 linear, 516–519 marginal cost, 516–517 nonlinear, 519–522 non-Ramsey, of telephone services, 525–526 optimal, 511, 516–527 penetration, 388 promotional, 344 Ramsey, 523–525, 527, 529 reputational models, 339–340 of ride sharing services, 618–619 surge, 619–622, 621f, 622f upward pressure on, 241–246 value of service, 525, 630–631 Pricing flexibility, in PCR, 547–548 Primary-line discrimination, 370 Prince William Sound, 813 Priority Mail, 587 Prisoners’ Dilemma, 117n2 application of, 806–807 defining, 805 multiperson, 805, 806f N-person, 805–806 Private actions, antitrust law enforcement through, 104 Private aircraft, 827 Private-line service (PLS), 570, 571, 575 Private litigation, 161 Procter & Gamble, 213, 241, 252–253, 340 Procter & Gamble case (1967), 252 Producer surplus, 69 Product differentiation, 77 conduct and, 93 market definition and, 240 market power and, 92 oligopolies and, 128–130 Product extension merger, 213 Product groups, 77 Production technology economies of scale and, 184 entry costs and, 180 Productive inefficiency, 600–601, 632 dynamic, 640–641, 654–655 static, 638–639 wages and, 639 Productivity, in surface freight industry, 640–641, 641t Product labeling, 19–20 Product liability, 846–853 escalation of damages, 851–853 limits on damages for, 852, 852t negligence standard, 848–849

strict liability standard, 849 tracing accident costs and causes and, 849 Products, new, 83n15 Product safety government determination of, 854–856 self-certification of, 854 social costs, 847–848, 848t Product safety decision process, 824–827, 824t Product safety regulations alternative to direct command and control, 856–858, 860 behavioral response to, 833–840 changing emphasis of, 826–827 consumer complaints and, 825 costs of, 840–841, 844 emergence of, 823 factors in producer and consumer actions, 825–826 premanufacturing screening, 827–833 product performance, 826 risk information and hazard warnings, 853–856 Product safety standards, 1 Product-specific economies of scale, 568 Professionals, 452 Professional sports teams, 101 Profit maximization in advertiser-funded platforms, 413 in Cournot model, 121–123, 122f Lerner index and, 325 Profit-maximizing firms, 116 Profit-maximizing price, 82, 139 Profit-shifting, 426 Promotional pricing, 344 Property rights, 67, 772 oil reserves and, 689 pollution and, 775, 775t transaction costs and, 776 Proposition 65, 19, 856, 911 Prorationing, 690–691 Prospect theory, 905, 907f irrationality and biases in risk perception, 911–914 loss aversion and reference dependence, 906–910 Prudence reviews, 544–545 PSKS, 316 Public Company Accounting Oversight Board, 708–709 Public enterprise, 479–482 Public information dissemination, 913–914 Public interest, 97–98 Public interest theory, 454 Public misperceptions, 913–914 Public utilities, 75–76 Public Utility Regulatory Policies Act of 1978 (PURPA), 670 Putnam, Howard, 136–137

Qualifying facilities, 670 Quality air, 765, 783 environmental, 34–36, 35f regulatory role of, 42–43 Quality control defense, 297–298 “Quality of life” review process, 26 Quantity control regulations, 439 Quebec, gasoline cartels in, 140 QVC, 281 Radiation exposure, 782 Radio broadcasting, 183–184 Radionuclide regulation, 762 Railroad Revitalization and Regulatory Reform Act of 1976 (4R Act), 629 Railroads, 437, 470, 625, 641 average revenue, 633f deregulation and rates, 633–636 mergers, 215 passenger travel, 627 performance since Staggers, 636f price regulation, 630–631 price wars and, 146–147, 147f, 442, 627 rate setting, 631 reasons for regulation of, 629–630 regulation of, 628 regulation supported by, 456 Standard Oil and, 185 track maintenance investments, 640 truck rates and, 632 unprofitable routes, 638 Raising rivals’ costs effect entry conditions and, 206 vertical mergers and, 272–277 Ralston, 250 Ramsey, Frank, 523 Ramsey pricing, 523–525 FDC pricing and, 529 two-sided markets and, 527 Random shocks, 147 Rapid and disruptive innovation, 378, 419–426 Rate base, 540 Rate case moratorium (RCM), 551 Rate hearings, 540–541 Rate of return regulation (RORR), 539–546, 555 electricity sector and, 670 PCR advantages over, 547 Rate regulations cable television franchises, 502–506 factors in setting, 6–8 Rate structures, 528–531 Ratings agencies, 716

Rational players, 119 Ready-mixed concrete firms, 260 Reagan, Ronald, 29, 278, 446, 732, 880, 882 auto reform by, 841, 842t–843t Reagan National Airport, 233 Real efficiency, 223–224 Real interest rates, 611–612 Recognized interdependence, 120 Recommendation systems, 400 Recurrent short-term contracts, 488–490 Redistribution policies, 74 Reduced capital formation, 601–602 Reed-Bulwinkle Act of 1948, 631 Reference dependence, 906–910 Refusal to deal, 351–352, 355, 359 Regional Bell operating companies (RBOCs), 556, 575–576, 577, 579 Regulation. See also Antitrust regulation; Economic regulation alternative measures of scale of, 47–49 antitrust, 4–5 asymmetric, 568–570 benefits maximized by, 53–56 capture theory of, 54 comprehensive models of objectives, 55–56 costs and benefits of major, 44–47, 45t, 46t counterfactual approach to estimating effects, 610–614 economic theory of, 458–467 estimating effects of, 604–614 federalism debate in, 16–21 in financial sector, 699 financial sector rationale for, 701–703 hazard communication, 748 health, safety, and environmental, 8–9 heterogeneity in, 733–735 incremental modifications in, 51–52 influence pattern theories for, 54–55 through information, 856 innovation and, 602–603 intermarket approach to estimating effects, 607–610 intertemporal approach to estimating effects, 605–607 lessons from surface freight industry on, 641–642 making, 15–16 normative rationale for, 454–456 Peltzman model of optimal, 462–463, 462f political factors, 735–739 price and quality role in, 42–43 rationale for, 3–4 responses to changes in natural monopoly industries, 567 success stories, 40 taxation by, 467 testing theories of, 469–470 theories of, 453–473

trends in major, 43–44 Regulators, legislators and, 467 Regulatory agencies budgets and staff of, 57, 58t–63t influences on benefits maximized by, 53–56 members of, 451–452 powers of, 450 selection of, 449–450 Regulatory Analysis Review Group, 27 Regulatory Calendar, 29 Regulatory commissions, rate hearings held by, 540–541 Regulatory Impact Analysis (RIA), 21, 24, 27, 757, 917 energy efficiency standards and, 920 judicial review of, 52–53 Regulatory lag, 545–546 innovation and, 603 Regulatory options, 555–556 Regulatory oversight process, 25–40 character of actions, 50–52, 51t Regulatory procedures, 452–453 Regulatory proceedings, delay and strategic manipulation of, 453 Regulatory reform initiatives, 882 Regulatory reform legislation, 32–33 Regulatory standards, 732 Remedies, 104 Rent-seeking behavior, 79–80 product liability and, 852 Replacement effect, 87 Representativeness heuristic, 925 Reputational pricing models, 339–340 Reputational theory of predation, 340–344 Resale price maintenance (RPM), 284, 312–314 maximum, 317–318 minimum, 315–317 territorial restraints and, 318–320 Research and development (R&D), 377 antitrust enforcement and, 421 entry barriers and, 421, 422 price and entry/exit regulation and, 602–603 Reserve requirements, 705 Residential mortgages, 611–614 Respirators, 881 Retail electricity prices cap on, 677 natural gas prices and, 679 Reverse-payment settlement agreements, 356–358 Revlon, 185 Reynolds Metals, 332 Rhode Island, 610 Ride sharing, 617–622 pricing of, 618–619 taxi drivers protesting, 618n23

RIIO, 554 Risk accident statistics and, 730 bank investment restrictions and, 705 cancer, 810 cleanup programs and, 810 deposit insurance and, 703–705 FDA approach to balancing, 828, 828f health, 872 incomplete knowledge of, 874–875 infeasibility of risk-free society, 725–726, 871 mortality, 722–725, 723t, 724t, 726t mortgage bundling and, 710 post-9/11 trade-offs, 727–728 public misperceptions of, 913–914 regulatory agencies and, 721 safety, 872 systemic, 715 of terrorism, 729t wage premiums for, 872 wealth and, 730–731 willingness to pay and, 744–746 Risk ambiguity, 915–916 Risk beliefs, 923–925 Risk information, 853–856, 871–873, 930 Risk perceptions, 873 availability heuristic and, 924–925 biases in, 911–914 prospect theory and, 911–914 Risk preferences, 833 Risk-risk analysis, 767–769 Rival firms mergers and decisions of, 230–231 raising costs of, 206, 272–277 R. J. Reynolds, 213 RJR Nabisco, 217 Roberts, John, 261 Robinson-Patman Act of 1936, 96, 100, 323, 365, 370–372 Robson, John, 644 Rockefeller, John D., 95 Rockefeller brothers, 328 Roux & Associates, 311 Rubinovitz, Robert, 503 Rulemaking process, 21, 22f–23f, 24–25, 452–453 Rule of reason, 102, 154 Runaway juries, 851 SABMiller, 221 Saccharin, 855–856 Safe harbors, 237 Safety caps, 837, 839 Safety devices

consumer’s perception of efficacy of, 839–840 consumer’s potential for muting benefits of, 835–837 Safety risks, 872 Safeway Stores, 236 Saltzer Medical Group, 104–105 San Diego Gas & Electric, 672 San Francisco, ride sharing services in, 618 Sarbanes-Oxley Act of 2002, 707, 708–709 Satcom I satellite, 492 Sawyer, Frank, 614 Scalia, Antonin, 115 Schelling, Thomas, 805 Schmalensee, Richard, 379 Schultz, Paul, 149 Schultze, Charles, 28 Schumpeter, Joseph, 88 SCI, 224 Scrubbers, 737 sd, 856 Search engines, 400, 403 market shares of, 401 Seat belts, 833, 834 Secondary-line discrimination, 370 Second-best effects, 594–595 Second Requests, 233 Securities Act Amendments of 1975, 607 Securities Act of 1933, 445 Securities and Exchange Act of 1934, 707–708 Securities and Exchange Commission (SEC), 6, 445, 606–607, 707–708 Dodd-Frank Act and, 716 Office of Credit Ratings, 716 Securities Exchange Act of 1934, 445 Securities rating agencies, 716 Segal-Whinston model, 421–426 Segmented markets, 875–877 Self-certification of safe products, 854 Senior discount for value of statistical life, 815–817 Sensitivity analysis, 765–766 September 11, 2001, terrorist attacks (9/11), 49, 727–729 Service quality franchise bidding and, 486 PCR and, 553 Services aftermarkets and, 358–364 optimal pricing of multiple, 522–526 pricing on two-sided platforms of, 407 Settlements of antitrust cases, 104 of merger cases, 104 reverse-payment agreements, 356–358 Shadow banking sector, 716 Share contracts, 291

Sharp, 316 Sherman Act of 1890, 4, 100, 101, 105, 107, 111, 137, 153, 215, 259 agreements and, 156 price fixing and, 154, 155 Section 1, 355 Section 2, 323, 328, 351, 394 vertical restraints, 285 Visa-MasterCard case and, 293–294 Shopping Bag Food Stores, 236 Short-run marginal cost (SRMC), 532–534, 533f, 535f, 536 Short-term contracts, recurrent, 488–490 Showtime, 278 Side effects, 829 Signaling, 199 predatory pricing and, 339 price, 663–664 Silicone breast implants, 858, 860 Single homing, 414 Single-parameter rules, 346–347 Skin cancer, 804 Skype, 382 Small but significant and nontransitory decrease in quality (SSNDQ), 416 Small but significant and nontransitory increase in price test (SSNIP test), 238, 415–416 Smartphone operating systems, 376 Smith, Adam, 869 Smith, Mac, 145 Smith, Robert S., 901 Smoking, 854 Smoking externalities, 777–780 Smoking restrictions, 778, 931–932 Social cost of carbon (SCC), 800–801, 802, 917 Social cost of monopoly, 71 Socially wasteful advertising, 80 Social networking services, 375 Social norms, 931 Social regulation, 731 cost per expected life saved, 760t–761t heterogeneity in, 733–735 policy evaluation principles, 744–746 political factors, 735–739 survey approaches to valuing effects, 763–765 value of risks to life for, 757, 759, 762–763 Social Security, 779 Social user cost, 689 Social welfare, competitive equilibrium and, 591 Societal income, accident trends and, 730, 731f Socony-Vacuum (1940), 155 Sodium bicarbonate, 884, 884t Soft drink syrup manufacturers, 319 Solar panels, 681 overinvestment in, 682, 682n18 Solow, Robert, 602, 812

Solvay Pharmaceuticals, 356 Sony, 117, 376, 378 Sotheby’s, 407 South Africa, 98 Southern Pacific railroad, 641 Southwest Airlines, 104, 200, 201f, 233, 646, 656–657, 660n54, 664 Specialized Common Carrier Decision (1971), 446, 575 Spillover effects, 839 Spirit Airlines, 350–351 Sports leagues, 101 Sprint, 120, 160, 577 Stability, compliance and, of collusion, 143–146 Stackelberg, Heinrich von, 126 Stackelberg model, 126–127 Staggers Act of 1980, 446, 629, 632, 636f, 638 Stand-alone cost, 530 Stand-alone market, 305–307, 308 Standard Fashion Company v. Magrane-Houston Company (1922), 292, 352 Standard Oil, 215, 324, 328–329, 392, 399 cost advantages of, 185 predatory pricing and, 338–339 Standard Oil Company, 292 Standard & Poor’s, 710 Standards compatibility of, 117 setting under uncertainty, 787–789, 788f Staples, 247–248, 248n39 State liquor stores, 482 State Oil Company v. Khan (1997), 318 State regulation federalism debate and, 16–21 federal regulation overlap with, 20–21 State usury laws, 611–614 Static productive inefficiency, 638–639 Steel industry, 743 Stern Review, 798–799 Stewart, Richard, 54 Stigler, George, 10, 54, 89–90, 189–191, 457, 459, 467 Stigler/Peltzman model, 459–460, 461, 467, 735 St. Luke’s Health System, 105 Stock prices, 606 market makers and, 149 Microsoft, 149, 150f transparency in, 148–151 Strahan, Philip, 471, 473 Strategic capacity expansion, in casino industry, 205–206 Strategic entry deterrence, 193 Strategic form, 118 Strategic withholding, of electricity, 675–676 Strategy, 118, 119 Strict liability standard, 849 Strip-mining, 738, 738t

Structural presumption, 234–237 Structural remedies, 95 Structure-conduct-performance paradigm (SCPP), 88–89, 89f StubHub, 382 Subadditivity, 511–516 Subprime borrowers, 709, 710 Subsidies economic regulation and cross, 467 ethanol, 460–461 problems with, 517–518 Substitution, 89–90 demand-side, 327 market definition and, 240 monopoly power and, 326–327 supply-side, 327 Sulfur dioxide, 737 trading system for, 795–797, 797f Summary judgment, 160 Summers, Larry, 755 Sunk costs, 191–192 Sun Microsystems, 397 Superfund, 52, 739, 767, 809, 810, 925 cost effectiveness, 811, 811t risk assessment for cleanups under, 917 Superior Propane, 97, 98f Supermarkets, 240–241 Supply-side substitution, 327 Surface freight industry airline industry regulation comparison to, 645 innovation in, 640–641 regulation lessons from, 641–642 Surface freight transportation, 627–642 deregulation, 628–629, 633–636 dynamic productive inefficiency in, 640–641 regulation effects, 632–641 regulatory history, 627–629 regulatory practices, 630–632 static productive inefficiency in, 638–639 Surface Transportation Board, 642 Surgeon General, 854 Surge pricing, 619–622, 621f, 622f Survey approaches, to policy effect valuation, 763–765 Sustained dominance, 386 Switching costs, 199, 359 SyFy, 283 Sylos-Labini, Paolo, 193 Sylvania case (1977), 319 Symmetric bargaining, 774 Systemic risk, 715 Systems market, 305–307 Syufy Enterprises case (1990), 252

Tacit collusion, 93, 136
Tampa Electric Company, 292
Tariffs, 188
  declining-block, 521
  multipart, 521–522
  Telpak, 575
  two-part, 520
Task Force on OSHA, 881
Taxation, by regulation, 467
Tax compliance, 931
Taxes
  carbon, 789, 799
  gas-guzzler, 3
  gasoline, 906
  income, 435
  injury, 901
  pollution, 784–785, 789–792, 790f
  product-specific, 3
  value of life and, 747
Taxicabs, 625
  entry restrictions, 615
  regulatory history, 614–615
  ride sharing and, 617–622
Taxi medallions, 188, 615
  value of, 616–618, 617f
TCI, 280–282, 506
Teamsters Union, 639
Technical progress, 94
Technology
  disruptive, 87–88
  product differentiation and, 92
  production, 180, 184
Technology-based standards, 737
Telecommunications Act of 1996, 354, 446, 502, 506, 552, 578–581
Telecommunications companies, 625
Telemundo Television Network, 283
Telephone companies, video programming and, 507
Telephone service, 283
  cable service and, 503, 506
  economies of scope and, 513
  efficiencies in, 187
  ESR and, 550–551
  interstate, 443
  local, 187, 354, 503, 506, 513
  long-distance, 437, 438–439, 470, 513
  non-Ramsey pricing of, 525–526
  price cap regulation on, 438–439
Television programming, 498
Television set manufacturers, 348
Telpak tariff, 575
TELRIC, 580
Temporary natural monopolies, 437–438, 437f
Terminal Railroad case (1912), 352–353
Terminating monopoly problem, 585
Territorial restraints, 285, 318–320
Terrorism, 727–729, 727f, 729t
Test market predation, 340
Thalidomide, 827–828
Theatre Enterprises case (1954), 157
“Theory of Economic Regulation” (Stigler), 459
Thomson, Ian, 152–153
Throttling, 585
Ticketmaster, 260, 279
Time-of-use pricing (TOU pricing), 531–537, 534f, 536f
Time Warner, 1, 217
  Turner Broadcasting and, 280–282
Tirole, Jean, 87
T-Mobile, 120, 139, 160, 161
TNT, 884, 884t
Tobacco Trust, 328
Tom’s of Maine, 241, 245
Toothpaste, 241–243
Tort liability, 825–827
Total surplus, 69
Total welfare or surplus standard, 97
Toxics Release Inventory (TRI), 808
Toy Fair, 152
Toyota, 840
Toys “R” Us, 151–152
Toy stores, hub-and-spoke collusion and, 151–153, 153f
Trade associations, cartels and, 144
Trade-offs, 743
  risk-risk analysis, 767–769
  wage-risk, 751
Transaction costs
  Coase theorem and, 776–777
  vertical mergers and, 261–262
Trans-Missouri Freight case (1897), 154, 214
Transparency
  net neutrality and, 585
  in securities markets, 716
  stock price, 148–151
Transportation Act (1920), 450, 628
Transportation Act (1940), 628
Transportation industry, 625–627
Transportation Security Administration, 2
Treaty on the Functioning of the European Union (TFEU), 107, 112
  agreements and, 156
Treble damages, 161–167
Trenton Potteries case (1927), 154
Trigger strategy, 132, 134–135
Trinko case (2004), 354
Trucking industry, 470
  deregulation and rates, 633–636
  entry restrictions, 638–639
  prices, 632, 633
  rail rates and, 632
  rate setting in, 631
Truck transmissions, 291
Trump, Donald, 31–32
Turner, Donald, 345
Turner Broadcasting, Time Warner and, 280–282
Tversky, Amos, 906, 909, 923
TWA, 658
Twombly case (2007), 159, 160
Two-part tariffs, 520
Two-sided markets
  BIAS suppliers in, 587
  optimal pricing in, 526–527
Two-sided platforms, 402, 404–407, 405f, 406f
  antitrust analysis challenges, 415–416
  equilibrium quantity of users, 408–410
  monopoly prices and, 410–413
  price as, 407–419
  prices for competing, 413–415
Two-tier rule, 348–349
Tying, 259, 284, 296–297
  antitrust law and policy and, 310–312
  extending firm’s monopoly power, 305–308
  market power and, 359
  Microsoft and, 394–395
  price discrimination and, 299–304
  price regulation and, 298–299
  profits and, 297
  protecting firm’s primary market, 308–310
  variable proportions and, 301
Tylenol, 839
Type I error, 99, 345, 830
Type II error, 99, 160, 345, 830
Uber, 2, 18, 413, 414, 617–619, 618n23, 621
UberPop, 618n23
UberX, 618n23
Ultramar, 140
Unbundled network elements (UNEs), 580
Unbundling, vertical, 671, 672
Uncertainty
  conservatism and, 916–917
  global warming policies and, 803–804
  standard setting under, 787–789, 788f
Unconstrained competition, 67
Uniform standards, 735
Unilateral effect, 218
Unilever, 241
Unions
  antitrust exemption for, 101
  higher wages and, 80
  monopoly rents and, 80
  strategic use of, 206
  trucking industry profits and, 639
Unit banking, 471
Unit benefit values, 757
United Airlines, 656–661, 660n54, 663, 664
United Auto Workers, 206
United Nations, chemical labeling standards, 883
United Shoe Machinery, 323, 329, 332
United States
  anticompetitive enforcement and remedies in, 103–105
  anticompetitive offense categories in, 102–103
  cap and trade in, 795–796
  crude oil price controls, 694–696
  ethanol subsidies in, 460–461
  federal antitrust law and enforcement, 100–105
  merger approvals, 107
  oil import restrictions, 692–694
  oil price controls in, 684
  price fixing fines in, 105
United States Steel Corporation, 214, 328–329
United States v. Colgate & Co. (1919), 355
Unitization, 692
Unit transport costs, 437
University of Michigan Survey of Working Conditions, 871
UNIX operating system, 396
Unreasonable restraint of trade, 154
Unsafe at Any Speed (Nader), 823
Upstream competition, 275
Upstream monopolists, 271
Upward pricing pressure (UPP), 241–246
USA, 283
US Airways, 104, 233, 656, 658, 659
USDA food plate, 927
U.S. Department of Agriculture (USDA), 1, 927
User equilibrium, 408–410
U.S. Postal Service, 587, 643
Usury laws, 611–614
Utah Pie case, 370–371
Utility distribution companies (UDCs), 672, 673, 674, 677
U-verse service, 507
Vaccinations, 807
Vail Resorts, 213
Valassis Communications, 138
Valuation
  of air quality, 765
  contingent, 763, 812–815
  survey approaches for, 763–765
Value functions, 906
Value of a statistical life, 745, 746t, 873
  empirical estimates of, 756–757
  labor market model, 751–756, 755t
  regulatory agency values for, 758t–759t
  senior discount for, 815–817
  variations on, 749–751
Value of risks to life, 757, 759, 762–763
Value of service pricing, 525, 630–631
Valuing policy effects, survey approaches to, 763–765
Variable proportions, 268–270, 269f
  tying and, 301
Verizon, 120, 160, 354
  Fios service, 507
Vertical integration, 272, 297
  in electricity sector, 670
  price effects from, 275
Vertical mergers, 213, 259
  anticompetitive effects, 265–266
  antitrust law and policy historical development, 277–279
  benefits, 260–265
  double marginalization and, 262–265
  market power extension and foreclosure, 266–270
  in New Economy, 382
  raising rivals’ costs and, 272–277
  technological economies and transaction costs, 261–262
Vertical monopolization, fixed-proportions production and, 266–268, 267f
Vertical restraints, 259, 284–285
Vertical unbundling, 671, 672
VHS videotape format, 117, 345
Videocassette recorder (VCR), 117
Video game platforms, 376
Video services, competition among suppliers of, 507–508
Vietor, Richard, 443
Visa, exclusive dealing and, 293–294
Visa-MasterCard case, 293–294
Viscusi, W. Kip, 748, 749, 768, 811, 837, 875
Vitamin C cartel, 145
Volcker Rule, 715
Volvic, 250–251, 251t
Von case (1966), 236
von Weizsäcker, Christian, 189
Voting patterns, 737–739
  congressional, 735–736, 736t
  factors affecting, 738, 738t
Wage premiums, 872
Wage-risk trade-offs, 751
Waldfogel, Joel, 609
Walker, Jimmy, 614
Wallsten, Scott, 584
Walmart, 375–376
  economies of scale and, 185–186
Warnings, 927
Warren, Earl, 96
Washington, DC, taxicabs in, 615, 616
Water barge transportation, 628
Water pollution, 735, 774–776, 775t
Waverman, Leonard, 573
Waze Mobile, 404
Wealth
  risk and, 730–731
  value of statistical life and variation in, 749
Wealth inequality, 74–75
Weather forecasts, 923–924
Webb, G. Kent, 495, 497
Webb-Pomerene Act of 1918, 101
Web browsers, 394–395, 398f
Weidenbaum, Murray, 57
Weidenbaum Center, 57
Welfare
  competition and, 68–88
  competitive equilibrium and social, 591
  consumer, standards and, 97, 238
  cost reductions and, 229–230
  estimating loss from monopoly, 80–83
  market entry and, 596
  mergers and analysis of, 227–232
  monopoly-induced waste and, 79–80
  nonprice competition and, 598–600
  price and quality changes in airline industry and, 653–654
  tools for, 68–71
  total, standards based on, 97
Western Airlines, 651, 652f
Western Electric, 576
Westinghouse, 145
Weyerhaeuser case (2007), 351
Whaling, 806–807
WhatsApp, 382, 400
Whirlpool, 246
White, Edward D., 296
Whole Foods, 240
Wholesale electricity prices, 673
Wide-area telephone service (WATS), 570, 571
Wii, 376
Wild Oats, 240
Williamson, Oliver, 347, 488
Williamson model, 227, 231, 235
Willig, Robert, 191, 641
Willingness to accept (WTA), 908–910, 932
Willingness to pay (WTP), 744–746, 908–910, 932
  Exxon Valdez damage and, 813–814
  other approaches versus, 746–748
  senior discount and, 815–817
Wilson, James, 54, 451
Wilson, Neil, 152
Windows 95, 393, 397
Windows 98, 259
WordPerfect, 376
Word processing programs, 376
Worker fatality rates, 749–750, 750t
Worker quit rates, 873–874
Workers’ compensation, 894, 898, 900–902
Workplace deaths, 894–895, 895f, 901
Workplace health and safety
  chemical labeling, 883–885
  compensating wage differential theory, 869–871
  factors affecting injuries, 893–896
  failure of compensating differentials, 875–877
  historical background, 865–867
  inefficiency potential, 867–868
  informational problems and irrationalities, 874–875
  inspection policies, 887–888
  market inadequacies, 874–877
  markets promoting, 867–868
  on-the-job experience and, 873–874
  OSHA enforcement impact on, 891–900
  OSHA impact on, 896–900
  OSHA regulatory approach, 877–883
  risk information and, 871–873
  segmented markets, 875–877
  worker quit rates, 873–874
  workers’ compensation role in, 900–902
Workplace standards, 899–900
Work-related accidents, 730
World Bank, 755
World financial crisis of 2008, 217
World Trade Center, 49, 727
Wrinkle experiment, 799
Xbox, 376
Xerox, 333
X factor, 546, 547, 555, 556
X-inefficiency, 78–79
Yahoo!, 187, 378, 382, 401
  Microsoft and, 403
Yahoo! Auctions, 407
Yandex, 401
Yardstick regulation, 556–557
YouTube, 382, 406
Yucca Mountain site, 781–783
Z factors, 548–549
Zupan, Mark, 502