HANDBOOK OF MICROSIMULATION MODELLING
CATHAL O’DONOGHUE Rural Economy and Development Programme, Teagasc, Athenry, Ireland
United Kingdom · North America · India · Malaysia · China · Japan
Emerald Group Publishing Limited, Howard House, Wagon Lane, Bingley BD16 1WA, UK. First edition 2014. Copyright © 2014 Emerald Group Publishing Limited.
Reprints and permission service. Contact: [email protected]

No part of this book may be reproduced, stored in a retrieval system, transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without either the prior written permission of the publisher or a licence permitting restricted copying issued in the UK by The Copyright Licensing Agency and in the USA by The Copyright Clearance Center. Any opinions expressed in the chapters are those of the authors. Whilst Emerald makes every effort to ensure the quality and accuracy of its content, Emerald makes no representation, implied or otherwise, as to the chapters' suitability and application and disclaims any warranties, express or implied, to their use.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-1-78350-569-2
ISSN: 0573-8555 (Series)
ISOQAR certified Management System, awarded to Emerald for adherence to Environmental standard ISO 14001:2004. Certificate Number 1985 ISO 14001
List of Tables

Chapter 2
Table 2.1  Fields of hypothetical data application
Table 2.2  Geographic scope of hypothetical models
Table 2.3  Unit of analysis
Table 2.4  Period of analysis

Chapter 3
Table 3.1  Model characteristics
Table 3.2  Baseline data source
Table 3.3  Updating tax-benefit rules
Table 3.4  Reweighting
Table 3.5  Use of projections

Chapter 5
Table 5.1  Decomposition of the changes in inequality and poverty indices over time (France and Ireland, late 1990s)
Table 5.2  Decomposition of the changes in inequality and poverty indices over time (UK, 1998–2001)

Chapter 7
Table 7.1  Tax and benefit analyses based on structural microeconometric models
Table 7.2  Estimates of the parameters of the welfare function for individuals 20 to 62 years old, Norway 1994
Table 7.3  Distributional weight profiles of four different social welfare functions

Chapter 8
Table 8.1  Decomposition of the welfare cost into income effect and price change for a decrease in social security contributions and a compensatory increase in the standard VAT rate (€ per year)

Chapter 9
Table 9.1  Applications of CGE microsimulation techniques

Chapter 10
Table 10.1  Uses of dynamic microsimulation models
Table 10.2  An overview of the technical choices made by dynamic microsimulation models
Table 10.3  Base dataset selection of dynamic microsimulation models
Table 10.4  Sample size distribution

Chapter 14
Table 14.1  Microsimulation models of health expenditure, 2000–2013
Table 14.2  Spatial microsimulation models of health and disease
Table 14.3  Microsimulation models of broad economic systems, incorporating health and mortality outcomes
Table 14.4  Microsimulation models specifically developed to model health and mortality outcomes
Table 14.5  Microsimulation models of health workforce, 2000–2013
List of Figures

Chapter 1
Figure 1.1  Sources of complexity in policy design and evaluation
Figure 1.2  Enhanced complexity in inter-temporal and spatial microsimulation models

Chapter 4
Figure 4.1  Marginal effective tax rates across the EU, 2007 (%)
Figure 4.2  Child-contingent payments (as a % of national per capita disposable income) and the share of children by decile group
Figure 4.3  Percentage change in household disposable income due to fiscal consolidation measures 2008–2012 by household income decile group
Figure 4.4  EU child basic income of €50 per month per child aged under six: net flows as a percentage of national GDP

Chapter 5
Figure 5.1  Observed versus policy changes in income shares of different quintiles (US 1979–2007)

Chapter 6
Figure 6.1  Counterfactual decomposition of change in poverty in Bangladesh, 2000–2010

Chapter 8
Figure 8.1  Share of VAT, sales taxes and excise duties as percentage of total tax revenue, OECD average 1976–2011
Figure 8.2  Share of VAT, sales taxes and excise duties as percentage of total tax revenue for different OECD countries in 2011
Figure 8.3  Baseline VAT incidence in the Belgian EUROMOD baseline (2012 policies on 2010 SILC)

Chapter 12
Figure 12.1  Classification of spatial microsimulation models

Chapter 13
Figure 13.1  The UTMS or four-step process
Figure 13.2  Aggregation bias in transit mode choice modelling
Figure 13.3  Time–space prism constraints on feasible shopping episode locations
Figure 13.4  The ARC tour-based model system
Figure 13.5  The TASHA activity scheduling model system
Figure 13.6  Household auto usage conflict resolution in TASHA
Figure 13.7  Household ridesharing in TASHA
Figure 13.8  SimTravel framework for integrated activity/travel and network performance modelling
Figure 13.9  Route choice and network performance model structure
Figure 13.10  Deterministic queuing model for a single intersection approach
Figure 13.11  Instantaneous versus experienced path travel times. (a) Instantaneous travel time calculation. (b) Experienced travel time calculation
Figure 13.12  Synthesizing population attributes using IPF

Chapter 17
Figure 17.1  Example of probability density function and cumulative distribution functions from a simulation of NPV for a farm business
Figure 17.2  Example of a stoplight chart to rank risky alternatives
CHAPTER 1
Introduction Cathal O’Donoghue
1.1. Introduction

Microsimulation modelling is a simulation-based tool with a micro unit of analysis that can be used for ex-ante analysis. It is a micro-based methodology, utilising micro units of analysis such as individuals, households, firms and farms, drawn from surveys or administrative datasets. It is a simulation-based methodology, utilising computer programmes to simulate the impact of public policy, economic or social change on the micro population of interest. The methodology allows one to evaluate and improve the design of public policy on a computer before rolling out often costly programmes to the general population. To some extent, the methodology can be regarded as a computer-based laboratory for running policy experiments, and microsimulation models can thus help to improve the effectiveness and efficiency of public programmes. The field spans users in government, who use the method in the design and evaluation of public policy, as well as mainstream academic researchers. As a field, it has its roots in a proposal by Orcutt (1957, 1960). Although the field has existed since the late 1950s, progress was relatively limited until the 1990s, except for pockets of development in the United Kingdom, Sweden and the United States, owing to challenges in relation to computing power and data. It was the advent of the personal computer in the 1980s and the growing availability of microdata that allowed the field to take off, with model use and development growing rapidly in many countries since. Whether formally called microsimulation modelling or not, micro-based ex-ante simulation analysis is now used extensively around the world for policy analysis and design.
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293001
© 2014 BY EMERALD GROUP PUBLISHING LIMITED ALL RIGHTS RESERVED
The field is multi-disciplinary, reflecting different policy focuses, but its practitioners are bound together in utilising computer-based simulation models to simulate the impact of public policy and/or economic and social change on micro units such as households, firms and farms. Depending upon the policy area, the discipline has different names. For some, particularly those working in public finance, social policy and rural development, the field is called microsimulation; for others in agricultural policy it is farm-level modelling; for others in labour economics it is a branch of applied microeconometrics. Methodologically, however, there is much in common and much that the different fields can learn from one another. It is particularly appropriate in this time of economic crisis to focus on methodologies that can facilitate better policy design. A number of survey articles have been written: Merz (1991, 1994), Mot (1992), Martini and Trivellato (1997), Bourguignon and Spadaro (2006), Dekkers and van Leeuwen (2010) and Anderson and Hicks (2011) generally; Sutherland (1995) for static models; Klevmarken (1997) for behavioural models; O'Donoghue (2001), Zaidi and Rake (2001), Spielauer (2007) and Li and O'Donoghue (2013) for dynamic models; Rahman, Harding, Tanton, and Liu (2010), Hermes and Poulsen (2012), Tanton and Edwards (2013), Tanton (2014) and O'Donoghue, Morrissey, and Lennon (2014) for spatial models; Creedy and Duncan (2002), Creedy and Kalb (2005) and Bargain and Peichl (2013) for labour supply models; Figari and Tasseva, with a special issue on the cross-country model EUROMOD; Brown (2011) for health models; and Ahmed and O'Donoghue (2007), Cockburn, Corong, and Cororaton (2010) and Bourguignon, Bussolo, and Cockburn (2010) for macro-micro models.
Given these developments over the past 20 years, it makes sense to bring them together in a Handbook that describes, in one location, the current best practice in the field.
1.1.1. Target audience

The target audience for this book comprises academics, policy analysts, researchers, students and libraries. As the field is relatively diverse, one currently needs to consult a wide variety of sources in order to brief oneself on current developments. The authors come from both academia and governmental agencies, reflecting this audience. Modelling teams currently exist in most OECD countries, inside and outside of government. As the field broadens into the developing world and into new policy areas, this book can serve as an initial 'one-stop shop' in relation to modelling opportunities, options and choices. It could also serve as preliminary reading for targeted microsimulation modelling courses, for example those given at different universities (National University of Ireland, Maastricht, Turin, etc.), summer schools (Essex) or bespoke training courses (e.g. NATSEM, Australia; CEMMAP, IFS London). Thus, the primary audience for the work comprises active and potential researchers, analysts and model builders in the field of microsimulation modelling, while it will also appeal to a secondary audience of social scientists with an interest in simulation modelling, such as those within the social simulation field.
1.1.2. Modelling complexity

As a modelling framework, microsimulation modelling is a mechanism for abstracting from reality to help us understand complexity better. Figure 1.1 outlines potential sources of complexity in a static, single-time-period microsimulation model. In the context of policy design and evaluation, complexity can take the form of:

• the structure of the population;
• behavioural response to the policy;
• the structure of the policy itself.

These levels of complexity interact with each other, resulting in a degree of complexity that is difficult to disentangle without recourse to a model.

Figure 1.1. Sources of complexity in policy design and evaluation [figure: the three interacting dimensions of Population, Policy and Behaviour]

Consider first the dimension of population complexity. The first question is whether an analysis takes place on a population with limited or extensive heterogeneity. Many analyses focus merely on the impact of policy on typical families, abstracting almost entirely from population complexity, such as the OECD tax-benefit model based upon workers at the average production wage (Immervoll & Pearson, 2009). Another dimension of population complexity is the unit of analysis. Some microsimulation models have taken businesses such as firms (Van Tongeren, 1998) or farms (Hynes, Morrissey, O'Donoghue, & Clarke, 2009) as the micro unit of analysis; however, most have carried out analysis at the level of individuals or households (Bourguignon & Spadaro, 2006).

The next dimension of complexity is policy complexity. This relates to the range of different policy or socio-economic impacts, both in terms of the degree of policy detail (many microsimulation models try to replicate the fine detail of legislation in modelling policy) and in terms of the different types of policy modelled, whether tax and benefit policy, indirect taxation, health policy, pensions policy, rural policy, transport policy or macro-economic change. Different geo-political contexts may influence the nature of the policy complexity. For example, the set of policies simulated in an OECD country may differ from those simulated in a developing country, with the former typically relying more heavily on income-related systems such as income taxation and means-tested benefits, and the latter more reliant on consumption-based taxes and in-kind instruments.

The third dimension of complexity is behaviour. Many policies are explicitly aimed at influencing behaviour, as in the case of attempts to improve work incentives through in-work benefits, or of environmental policy. Models that abstract from behavioural response are known as static microsimulation models. Types of behaviour often incorporated in microsimulation models include labour participation and supply, consumption decisions, benefit take-up, tax evasion, transport decisions, and farm- and firm-level investment decisions.

In the case of models that incorporate either spatial or inter-temporal dimensions, the level of complexity is increased further (Figure 1.2). Land use, spatially targeted policy and spatially targeted socio-economic effects require spatial models.
Policies that depend upon long-term contribution histories, such as pensions and long-term care policy, or that require long-term repayments, as in the case of education financing, utilise inter-temporal or dynamic models.
Figure 1.2. Enhanced complexity in inter-temporal and spatial microsimulation models [figure: inter-temporal microsimulation models combine Population, Behaviour and Policy with Time; spatial microsimulation models combine them with Place]
Finally, across all dimensions, there may be an interest in understanding the performance of policy in different country contexts. Multi-country models and comparative analyses have been developed to analyse questions of differential complexity.

1.2. Overview of the handbook
The objectives of this handbook are to provide an overview of existing applications of microsimulation modelling and associated methodologies, and to provide guidance in relation to potential future research directions in the field. The book is divided into chapters that build upon the different dimensions of complexity, outlined above, which microsimulation models attempt to capture.

1.2.1. Population complexity

The first dimension of complexity that we consider in the Handbook relates primarily to population complexity. The models concerned are typically defined as static models, focusing on the impact of policy change before there is an opportunity for a behavioural reaction: the 'day after' effect. The simplest type of static microsimulation abstracts from population complexity entirely, analysing the static impact of policy and policy change on hypothetical families. Burlacu et al. in Chapter 2 describe the main uses of these models, which include:

• Illustrative purposes;
• Validation;
• Cross-national comparisons;
• Replacement of insufficient or absent microdata;
• Communication with the public.
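The 'day after' effect on a hypothetical family can be illustrated with a minimal sketch. This is not any specific model described in the Handbook: the flat tax rate, child benefit amount and family type below are invented purely for illustration.

```python
# A minimal sketch of a hypothetical-family tax-benefit calculation.
# All rates, thresholds and the family itself are invented for illustration.

def disposable_income(gross, n_children, tax_rate=0.25, child_benefit=100.0):
    """Disposable income under a stylised flat income tax plus a child benefit."""
    tax = tax_rate * gross
    benefits = child_benefit * n_children
    return gross - tax + benefits

# "Day after" effect of a reform: the same family under two parameterisations,
# with no behavioural response.
family = {"gross": 3000.0, "n_children": 2}
baseline = disposable_income(**family)
reform = disposable_income(**family, tax_rate=0.22, child_benefit=90.0)
print(f"baseline {baseline:.2f}, reform {reform:.2f}, gain {reform - baseline:.2f}")
```

A hypothetical model simply sweeps such calculations over a grid of stylised family types and income levels, rather than over survey records.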
Hypothetical models have been applied to many fields, but the dominant policy areas are social security and taxation. Given their relative simplicity, their geographic spread has been wide, with the United Kingdom, the United States, Australia and Ireland having the highest share of such models. Methodologically, the chapter focuses on a variety of choices, including the unit and period of analysis, updating and analytical output measures, which are common to other types of model, as well as modelling choices specific to hypothetical models, such as the unit of variation by which heterogeneity is introduced.

Static microsimulation models involve the interaction of population and policy complexity, but abstract from behavioural response or other endogenous change, and as such they model the 'day after' effect of a policy change. Li et al. in Chapter 3 consider the uses and methodological choices of static models. Like other types of microsimulation model, they model the policy system in detail with the aim of understanding how policy impacts at the individual and household level, thus modelling the interaction of population and policy complexity. Given the complexity of tax and benefit legislation, these models, even ignoring behavioural responses, can be highly complicated. In terms of policy focus, while historically tax and social security policy has been the main concern, the chapter notes the increasing importance of other policy areas, particularly health and social care. The geographical spread of these models has increased as the availability of micro-data has increased. Given their relative policy complexity and data requirements, static microsimulation models first proliferated in OECD countries; with improved data availability, however, the field has expanded beyond the OECD in recent years. In terms of analytical scope, static models focus on distributional and redistributive incidence as well as on the drivers of behaviour, even if behaviour is not modelled endogenously. The chapter utilises a study funded by EUROSTAT to consider a number of methodological choices made by a subset of models globally. A particular focus is placed upon updating and static ageing assumptions, the degree of parameterisation in the model, the choice of baseline data, reweighting and general model maintenance.

The development of microsimulation models in different countries has inevitably brought with it the demand for comparative research to learn about the relative functioning of policy in different countries. Sutherland in Chapter 4 considers the development and use of cross-country models, highlighting that considering several countries within the same analysis provides a kind of 'laboratory' in which to analyse the effects of similar policies in different contexts, or of different policies with common objectives.
In principle, cross-country comparisons could be undertaken across any of the areas considered in this handbook. However, given the methodological and data challenges in relation to comparability, examples are relatively rare, apart from the significant work programmes within the EUROMOD project and smaller initiatives in Latin America and Africa for static modelling, the OECD hypothetical modelling system, and the IFCN farm-level modelling system described elsewhere in this book. The MIDAS project compared a number of countries using a dynamic model. Cross-country models can be used for a number of purposes, including:

• Understanding the detailed operation of existing policy systems
• Policy swapping, to consider the impact of utilising a policy from one country in another country
• Considering the impact of similar policy reforms on different countries (Levy, Matsaganis, & Sutherland, 2013).

The key methodological choice in undertaking comparative research of this kind is whether to utilise individual national models or to develop a consistent comparative framework. While the former may reduce the need to develop an additional model, specific modelling choices made in individual national models may reduce the capacity for comparative research, limiting users to a least-common-denominator approach. Methodological choices faced by cross-country models include software, the harmonisation of data, and the comparable policy scope.

Given the capacity to model the incidence of policy on a population, microsimulation models have been used to understand in greater detail the way in which policy impacts upon the income distribution. Bargain in Chapter 5 considers the methodology used to decompose the drivers of the income distribution. Traditionally, models have simulated inequality with and without particular policies, or modelled the impact of an actual policy change. This chapter highlights challenges in relation to the decomposability of inequality and poverty indices and the decomposition of changes into factor and income components. It considers a methodology developed in recent years that uses microsimulation to construct counterfactual situations and to disentangle the pure effect of a policy change from changes in the environment in which the policy operates. The methodology decomposes inequality change into policy, income growth and other effects, although linearity in tax-benefit systems sees the income growth component eliminated.
Methodological choices made within the framework include:

• Choice of Nominal Adjustments
• Choice of Definition of the Policy Effect

The methodology has been applied to study the effect of policy changes in France and Ireland; policies implemented under the first New Labour government (1997–2001) in the UK; the effect of tax-benefit policies on non-welfarist aggregated measures that value leisure in addition to disposable income; the role of policy developments during 2008–2010, the first 'dip' of the Great Recession, in four European countries; and long-term change in the United Kingdom and the United States.

Increasing availability of data in developing countries has allowed the field to be extended beyond OECD countries over the past two decades. Nssah in Chapter 6 considers microsimulation techniques commonly used to assess variations in individual and social outcomes associated with the process of development, attempting to 'explain' distributional change by decomposing it into its various determining factors. The chapter describes five basic elements that characterise a decomposition method: (i) domain, (ii) outcome model, (iii) identification, (iv) scope and (v) estimation. It also identifies three broad channels through which the development process affects the size distribution of economic welfare: (i) the distribution of factor endowments and socio-economic characteristics among the population, (ii) the returns to these assets and (iii) the behaviour of socio-economic agents with respect to resource allocation, subject to institutional constraints. Methodologically, the chapter describes the following approaches:

• Statistical
• Non-Parametric
• Regression Model
• Reduced-Form Model
• Structural

The chapter describes a number of different applications of the methodology to:

• Change in the Distribution of Earnings
• Change in the Distribution of Household Income
• Growth Incidence Analysis
• Analysis of the Impact of Shocks and Policy Interventions.
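The counterfactual logic behind these decompositions can be sketched in a few lines. This is a deliberately stylised illustration, not the estimator of any particular chapter: the 'policy' is reduced to a single flat tax rate, the outcome is mean disposable income, and the income data are invented.

```python
# Sketch of counterfactual decomposition: the change in an outcome between
# two periods is split into a policy effect (policy changes, data held fixed)
# and an "other" effect (data changes, policy held fixed).
# Stylised one-parameter policy and toy incomes, invented for illustration.

def simulate(tax_rate, incomes):
    """Mean disposable income under a stylised flat tax 'policy'."""
    return sum(y * (1.0 - tax_rate) for y in incomes) / len(incomes)

data0, data1 = [100.0, 200.0, 300.0], [120.0, 210.0, 330.0]  # two periods
policy0, policy1 = 0.30, 0.25                                 # two tax systems

total = simulate(policy1, data1) - simulate(policy0, data0)
policy_effect = simulate(policy1, data0) - simulate(policy0, data0)
other_effect = simulate(policy1, data1) - simulate(policy1, data0)
assert abs(total - (policy_effect + other_effect)) < 1e-9
```

Note that the split is path-dependent: evaluating the policy change on end-period rather than start-period data generally gives a different answer, which is exactly the 'choice of definition of the policy effect' listed above.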
1.2.2. Behavioural complexity

Many public policies have the objective of changing behaviour in addition to financing or distributional objectives. Behavioural objectives include the reduction of work disincentives and the reduction of pollution. Behavioural responses may also have indirect effects via changes in the wider macro-economy, or direct effects in terms of responses to price changes. It is unsurprising, therefore, that part of the microsimulation field has focused on behavioural response.

Labour supply is one of the most important areas for behavioural analysis in the microsimulation field. Aaberge and Colombino in Chapter 7 provide a detailed discussion of the development of labour supply-focused microsimulation models and the associated methodological choices. The chapter identifies three methodologies for modelling labour supply:

• The Reduced Form Approach
• The Structural 'Marginalist' Approach
• The Random Utility Maximisation Approach

The chapter considers issues associated with the reliability of random utility maximisation relative to (ex-post) experimental or quasi-experimental analysis. Recognising the need to undertake ex-ante analysis, it asks whether there are alternatives to structural models, and how structural models can be evaluated and compared with other approaches. The chapter then describes approaches to utilising these models for policy simulation, in terms of producing and interpreting simulation outcomes, outlining an extensive literature of policy analyses utilising the approach. Labour supply is central not only to modelling behavioural response but also to modelling optimal tax-benefit systems, with a focus on a computational approach given some of the challenges of the theoretical approach. Combining labour supply results with welfare functions enables the social evaluation of policy simulations; combining welfare functions and labour supply functions, the chapter then identifies how to model socially optimal income taxation.

While most of the models in the chapters described thus far focus on policy measures that depend upon current income and characteristics, such as direct taxation and social transfers, indirect taxation, that is, taxation that is a function of expenditure, is also quite important. Capéau et al. in Chapter 8 review models of consumption and indirect tax. Given the choice between changing consumption or savings rates when the prices or taxes of goods change, behavioural assumptions or models, whether explicitly modelled or not, are intrinsic to all indirect tax models. Indirect taxation is one of the more important sources of tax revenue in OECD countries and frequently the most important in non-OECD countries, and there have been substantial reforms over the past 15 years. Methodologically, most current indirect tax models take the household as the unit of analysis, with some extensions to firm-level units. The chief coverage-related methodological choice is which taxes are to be modelled and which goods and services are to be included. Another choice relates to whether to incorporate the impact on intermediate prices via the use of input-output models. Further modelling choices relate to the level of aggregation modelled and how durable goods and housing are treated. The chapter describes four alternative approaches to modelling the impact of a price or tax change:

• Fixed quantities
• Fixed expenditure shares
• Using estimated Engel curves
• Using estimated demand systems
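The contrast between the first two assumptions can be made concrete with a small sketch. The budget, price and quantity figures are invented for illustration; a real model would work from observed survey expenditures.

```python
# Sketch of two behavioural assumptions for an indirect tax change:
# a VAT increase raises the consumer price of one good from 2.0 to 2.2.
# All numbers are invented for illustration.

budget = 100.0
price0, price1 = 2.0, 2.2   # consumer price before/after the tax change
quantity0 = 30.0            # observed baseline quantity

# (1) Fixed quantities: the household buys the same amount, so spending rises.
spend_fixed_q = quantity0 * price1

# (2) Fixed expenditure shares: the budget share is held constant, so spending
#     on the good is unchanged and quantity falls with the price.
share0 = quantity0 * price0 / budget
spend_fixed_share = share0 * budget
quantity_fixed_share = spend_fixed_share / price1

print(spend_fixed_q, spend_fixed_share, quantity_fixed_share)
```

The two assumptions yield different simulated tax revenues and welfare effects; estimated Engel curves and demand systems generalise this by letting budget shares respond to total expenditure and relative prices.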
The chapter outlines a variety of welfare analyses utilised in indirect taxation modelling, including modelling winners and losers, the progressivity of a reform, and distributional analyses involving both direct and indirect tax reform. The chapter finishes by providing some examples of models used to model indirect tax changes, contrasting a Mexican model, METAX, with EUROMOD.

Another chapter relating to behavioural complexity focuses on the micro impact of responses within the wider macro-economy. Cockburn et al. in Chapter 9 describe the use and development of models that incorporate this macro-economic context utilising a CGE (Computable General Equilibrium) framework. Combining the microsimulation and CGE frameworks pairs the advantage of the CGE model, which can capture the sectoral, macro and price effects of policy reforms, with the capacity of microsimulation to incorporate distributional impact. These linked models require a Social Accounting Matrix mapped to the underlying micro survey data. The chapter identifies a number of approaches to linking macro and micro models:
• The representative household approach
• The fully integrated approach
• The top-down micro-accounting approach
• The top-down with behaviour approach
• The bottom-up approach
• The iterative approach
Factors that influence the choice include data availability, the research question and available development time. Many of the applications relate to the micro impacts of macro-economic changes such as:

• Structural Adjustment Programmes
• Trade Liberalisation
• Poverty-Reduction Policies
• Fiscal Reform
• Agricultural Policies
• Labour Market Policies
• Environmental Policies
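The simplest of the linking approaches listed above, top-down micro-accounting, can be sketched as passing macro-level price or wage changes down to matching income components in household records. The sector names, shock sizes and household records below are invented for illustration.

```python
# Sketch of the top-down micro-accounting approach: sector-level wage changes
# produced by a macro (CGE) model are applied to household survey records,
# with no behavioural response at the micro level.
# Sectors, shocks and households are invented for illustration.

cge_wage_change = {"agriculture": -0.02, "manufacturing": 0.04}  # from the macro model

households = [
    {"id": 1, "sector": "agriculture", "wage_income": 1000.0, "other_income": 200.0},
    {"id": 2, "sector": "manufacturing", "wage_income": 1500.0, "other_income": 0.0},
]

for hh in households:
    # Scale the matching micro income component by the macro shock,
    # leaving the rest of the household record untouched.
    hh["post_shock_income"] = (
        hh["wage_income"] * (1.0 + cge_wage_change[hh["sector"]]) + hh["other_income"]
    )
```

The 'with behaviour' and iterative variants extend this by letting households respond (e.g. labour supply) and, in the iterative case, feeding the micro responses back to the macro model until the two converge.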
1.2.3. Temporal and spatial complexity

Thus far, we have focused on models that utilise datasets for a specific point in time and a specific place. However, policies often have a temporal dimension, as in the case of the accumulation of pension entitlements over time or the impact of long-term demographic change, or a spatial dimension, as in the case of location-specific policy and behaviour such as urban renewal, rural development or migration. Dynamic microsimulation models simulate inter-temporal transitions in the population, either for use in policies that require this information, such as pensions and student loans, or to provide an inter-temporal analytical dimension such as life-cycle redistribution. Li et al. in Chapter 10 describe the methodological choices faced by the builders of these models and summarise their uses and applications. There is now quite a significant literature on dynamic microsimulation modelling. The main applications are, unsurprisingly, in the areas of pensions, life-course redistribution and inter-generational redistribution. However, the methodology is broadening to include health-, spatial- and education-focused models.
The chapter describes a number of methodological choices, including:

• Choice of base data, in terms of cross-section or cohort
• Discrete versus continuous time
• Open or closed models
• Use of alignment
• Use of behaviour

Most models are aligned, closed, cross-sectional models using discrete time. In relation to the last choice, about a third of the models incorporate endogenous behaviour, the dimension of complexity discussed in the previous sub-section. The chapter also discusses methodological linkages and similarities with macro-models and agent-based models, as well as modelling complexity and model validation.

Underlying any dynamic microsimulation model is a model of demographic, typically inter-temporal, behaviour. There is a parallel field focusing on the microsimulation of demographic phenomena, described in Chapter 11 (Mason). The primary or direct focus of that field is to understand demographic transitions, with policy impact an indirect objective. Rather, similarly to agent-based models, although pre-dating the term, demographic microsimulation is an alternative to scientific reasoning using statistical models. Demographic microsimulation models, like other microsimulation models, can accommodate greater heterogeneity than macro or cohort-based models. The chapter defines the main uses of demographic microsimulation models:

• quantifying the implications for kin availability of long-term changes in fertility and mortality;
• assessing rules, policies and preferences, such as incest taboos or pro- or anti-natalist policies;
• modelling HIV/AIDS;
• developing indirect methods of estimating demographic quantities in the absence of data.
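The inter-temporal transitions at the heart of both dynamic and demographic microsimulation are commonly simulated as discrete-time random events, often aligned to an external benchmark total. The sketch below shows one simple alignment variant (ranking by probability minus a uniform draw); the probabilities and target are invented for illustration, and real models use a range of alignment algorithms.

```python
# Sketch of a discrete-time transition with alignment: individual event
# probabilities are drawn, then the number of simulated events is constrained
# to an external target while preserving the ordering of risk.
# Probabilities and the target are invented for illustration.
import random

random.seed(42)
probs = {i: random.uniform(0.0, 0.2) for i in range(10)}  # e.g. hazard of an event
target_events = 2                                          # external benchmark total

# Rank individuals by p - u (probability minus a uniform draw) and let the
# top `target_events` individuals experience the event this period.
scores = {i: p - random.random() for i, p in probs.items()}
selected = sorted(scores, key=scores.get, reverse=True)[:target_events]
events = {i: (i in selected) for i in probs}
assert sum(events.values()) == target_events
```

Without the alignment step, the simulated event count would fluctuate around the sum of the probabilities; alignment pins it to the benchmark while higher-risk individuals remain more likely to be selected.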
In terms of methodology, some of the same issues apply as in the case of dynamic microsimulation models, such as:
• Open or closed models, in relation to how new people, such as births or partners, are simulated in a household
• Discrete or Continuous Time
• Changing behaviour over time
• The choice of events to be simulated
• Specific simulation methodologies for specific event types

Location also adds a dimension of complexity over which the population varies, such as different labour markets, on which policy can vary as
Cathal O’Donoghue
in the case of localised development policy, where distance may have an effect in terms of commuting behaviour, or where behaviour may vary, with spatial clustering of preferences. This spatial dimension is discussed in Chapter 12 (Tanton and Clarke). One of the main methodologies used in the field relates to spatial micro-data generation. Typically there are gaps in relation to data that are both representative at the unit of analysis, such as the individual or household, and also representative at the spatial scale. Data that are rich in terms of individual contextual data may have poor locational information or, due to sample size, may not be representative at a fine spatial scale. On the other hand, data with a fine spatial resolution may have limited contextual data. Occasionally, as in the case of administrative data in the Nordic countries, both may coincide. The chapter characterises models in terms of:
• Static versus Dynamic Ageing
• Reweighting-based data generation versus Synthetic Reconstruction
• If Reweighting, Deterministic or Probabilistic

The choice of method will depend upon data availability and model objectives. In terms of external data requirements, the data generation approach requires spatial benchmark aggregates. As in the case of other models, given the degree of complexity involved, validation is critical. Model applications are divided into the simulation of the spatial incidence of socio-economic phenomena, such as income and poverty, crime, housing stress, obesity, water demand, smoking rates, well-being, trust and disability, on the one hand, and policy reform on the other. Spatial models are sometimes linked to other models, such as static tax-benefit microsimulation models, CGE models, location-allocation models and spatial interaction models. Another use for spatial microsimulation models is for projections.
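One widely used data-generation technique, deterministic reweighting by iterative proportional fitting, can be sketched as follows. The survey records and small-area benchmark totals are invented for illustration; a real application would use census counts for each small area:

```python
def ipf_reweight(records, benchmarks, n_sweeps=50):
    """Deterministic reweighting by iterative proportional fitting: scale the
    survey weights so weighted category totals match each benchmark in turn."""
    weights = [1.0] * len(records)
    for _ in range(n_sweeps):
        for var, targets in benchmarks.items():
            for category, target in targets.items():
                idx = [i for i, r in enumerate(records) if r[var] == category]
                current = sum(weights[i] for i in idx)
                if current > 0:
                    factor = target / current
                    for i in idx:
                        weights[i] *= factor
    return weights

# Invented survey records and small-area benchmark totals (e.g. census counts).
survey = [
    {"sex": "m", "tenure": "own"}, {"sex": "m", "tenure": "rent"},
    {"sex": "f", "tenure": "own"}, {"sex": "f", "tenure": "rent"},
]
area_benchmarks = {
    "sex": {"m": 60, "f": 40},
    "tenure": {"own": 70, "rent": 30},
}
w = ipf_reweight(survey, area_benchmarks)
```

After fitting, the weighted totals reproduce the area benchmarks, so the reweighted survey acts as a synthetic population for that small area.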
1.2.4. Policy complexity

While most of the chapters thus far incorporate policy complexity interacting with the dimensions of complexity described, often with a focus on cash taxes and transfers, the variety of policies for which microsimulation models are used has expanded. This Handbook contains chapters with a focus on Transportation, Health and Environmental policy.1
1 Other policy areas that have extensive literatures include land use (Miller & Salvini, 2001; Salvini & Miller, 2005; Waddell, 2002; Wegener & Spiekermann, 1996) and education (Courtioux, 2012; Ferreira & Leite, 2004; Flannery & O’Donoghue, 2011, 2013; Grimm, 2004; Nelissen, 1994; O’Donoghue, 1999). There is a smaller literature in relation to the modelling of financial markets (Iori, 2002; Lux & Marchesi, 2000).
Given the heterogeneity of transport choices, this field was one of the earlier policy areas to utilise microsimulation models. Miller in Chapter 13 outlines different dimensions of heterogeneity in relation to transportation:
• Space
• Mode
• Time
Most of the literature focuses on person movement rather than freight movement, with much of the focus on intra-city rather than inter-city travel, albeit larger scale models have been developed recently. The chapter divides the objectives of the model into different sub-objectives:
• Travel Demand
• Route Choice and Network Performance

Travel demand is typically modelled using a random utility maximisation model similar to those used in the labour market chapter. The chapter highlights that route and network simulation involves three basic components:
• Path choice (routing)
• Simulating vehicle (person) movements through chosen paths
• A procedure for determining algorithm convergence

Methodological choices for each component are described in turn.

Health care is a relatively large area of public expenditure with many complex systems and outcomes (Schofield et al., Chapter 14). However, it is only in recent times that a microsimulation field has developed. The largest area of use has been in modelling health expenditures and financing, such as health insurance and expenditure on specific health-related programmes like pharmaceutical benefits. To some extent these are a variant of the types of policies found in a static tax-benefit microsimulation model. Others model, in more detail, health and disease attributes, sometimes incorporating the attributes of dynamic microsimulation models in an inter-temporal setting. Recognising the spatial incidence of ill-health and of particular conditions such as obesity and diabetes, models are increasingly incorporating a spatial dimension also. They face similar methodological challenges to the spatial microsimulation models described above in terms of data generation. The chapter also focuses on the simulation of demographic processes related to health, such as mortality. A more recent dimension within the field is the development of workforce modelling.
Given the size of health institutions, the complexity of the professions involved in the delivery of services and the differential demand for specific services as populations age, microsimulation
modelling can usefully be applied to identify demand-supply gaps in the health service industry.

The environment as a policy issue has increased dramatically in prominence over the past four decades. Research in this area extends from global challenges, such as climate change, access to water and soils, ozone emissions and biodiversity loss, to issues with a smaller geographical scope, such as water quality and congestion, to the impact of the environment on health. Hynes and O’Donoghue in Chapter 15 describe the use and development of environmental microsimulation models. The use of microsimulation modelling in the realm of the environment overlaps with many traditional areas of microsimulation modelling, such as the distributional incidence of public policies or the impact on behaviour of the incidence of these policies. Within the environmental and natural resource economics literature, the interaction between human activity and the environment has also been shown to be strongly influenced by spatial location, and the use of spatial microsimulation models has proven a useful tool for modelling socio-economic-environmental interactions and policies in this regard. Given the impact of the environment on agriculture and vice versa, there has also been an increased focus on the modelling of agriculture in this context, in particular in relation to environmental regulation. Another area where simulation has been increasingly employed within the environmental and natural resource economics literature is the quantification of the non-market value of environmental and natural resources, via the estimation of valuation models (in particular discrete choice models) and the subsequent calculation of related welfare measures for environmental goods that are not generally traded in an established market. Methodological choices made in environmental microsimulation models include:
• Static versus behavioural response
• Modelling Pollution
• Macro-Micro Linkages
• Units of Analysis
• Scope and Spatial disaggregation
• The valuation of non-market public goods and benefit transfer
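The discrete-choice machinery behind such valuation exercises can be sketched with a two-site conditional logit. The coefficients and site attributes below are invented; the welfare change is the standard logsum difference converted to money via the negative of the cost coefficient:

```python
import math

def logit_probs(utilities):
    """Conditional logit choice probabilities over the alternatives."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def logsum(utilities):
    """Expected maximum utility (the 'logsum'), the basis of logit welfare measures."""
    return math.log(sum(math.exp(u) for u in utilities))

# Invented coefficients and recreation-site attributes.
beta_cost, beta_quality = -0.1, 0.5

def utility(site):
    return beta_cost * site["cost"] + beta_quality * site["quality"]

sites = [{"cost": 10, "quality": 3}, {"cost": 20, "quality": 6}]
base = logsum([utility(s) for s in sites])

# Scenario: environmental quality at the second site improves from 6 to 8.
sites[1]["quality"] = 8
reform = logsum([utility(s) for s in sites])

# Per-choice-occasion welfare change in money terms: the logsum difference
# divided by the marginal utility of income (here, -beta_cost).
welfare_change = (reform - base) / (-beta_cost)
```

Summing such per-occasion welfare changes over a simulated population of users is what turns the estimated choice model into a non-market valuation of the environmental improvement.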
1.2.5. Unit of analysis

Thus far the unit of analysis that most chapters have focused on has been the individual or the household. However, there is policy interest in other units of analysis, such as the firm or the farm, as they are legal entities in their own right, with specific policies and related incidence, and as decision-making entities with the capacity to make decisions.
Bach et al. in Chapter 16 highlight that there has been an interest in modelling at the firm level since the outset of the field. Decisions that can be modelled include location, factor demand (including investment in fixed assets), the production process, the supply of goods at certain prices and on different markets, portfolio investment, and the financing of corporations. This chapter focuses specifically on models with significant firm-level heterogeneity, ignoring hypothetical firm models or CGE models that incorporate a firm dimension. They also ignore analyses focused, for example, on self-employed people contained within household datasets and modelled in, for example, static microsimulation models. Most models are developed by government, given their interest in firm behaviour and firm-level taxation. This is also driven by their access to large-scale data that is often off limits to researchers. Microsimulation models are developed for firms when there is an interest in issues, such as distributional incidence, individual behaviour or the environment in which a firm operates, that cannot be addressed using more aggregated models. Methodological choices made by firm-level models include:
• Static versus dynamic modelling
• With or without behavioural response
• Narrow vs. broad set of decision types
• Closed/open (national/international)
• With/without modelling macro repercussions
The choices are thus similar to those made in other sub-fields within the field of microsimulation modelling. The chapter also discusses data requirements, and characterises three main uses:
• Basic Government models
• Advanced approaches for forecasting and policy analysis
• Microsimulation as a component of broader academic studies

A specific type of firm-level model is the farm-level model. There is separate interest in firms of this type, as the sector is quite specific and policy is a very important driver of the sector. Data availability is often quite good, particularly in OECD countries, as data are collected to support the scale of policy intervention, where micro-level analysis may be a mandatory part of policy formation. Richardson et al. in Chapter 17 describe the use of, and methodological choices made by, models of this type. The nature of farm-level models is similar to models in other sub-fields, with farm-level versions of Hypothetical, Static, Cross-Country, Dynamic, Behavioural, Expenditure, Environmental, and Spatial Models. Methodological choices made in the development of farm-level models relate to the nature of the farm-level model. These include:
• Income Generation processes or Production Functions
• National scale and spatial scale analyses
• Modelling Environmental Impacts
• Data Choice
• Static versus Dynamic
• With or Without Behaviour
• Hypothetical and Heterogeneous populations
Unlike most other areas, except to an extent some labour supply models, optimisation in relation to an objective function with constraints is more common.
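A minimal farm-level optimisation can be written as a two-enterprise linear programme, solved here by enumerating corner points rather than calling an LP solver; the margins and per-unit input requirements are invented for illustration:

```python
def solve_two_crop_lp(m1, m2, a1, a2, b1, b2, A, B):
    """Maximise m1*x1 + m2*x2 subject to a1*x1 + a2*x2 <= A (land),
    b1*x1 + b2*x2 <= B (labour) and x >= 0, by enumerating LP corner points."""
    candidates = [(0.0, 0.0), (A / a1, 0.0), (0.0, A / a2),
                  (B / b1, 0.0), (0.0, B / b2)]
    det = a1 * b2 - a2 * b1
    if det != 0:  # intersection of the two constraint lines
        candidates.append(((A * b2 - a2 * B) / det, (a1 * B - A * b1) / det))
    feasible = [(x1, x2) for x1, x2 in candidates
                if x1 >= -1e-9 and x2 >= -1e-9
                and a1 * x1 + a2 * x2 <= A + 1e-9
                and b1 * x1 + b2 * x2 <= B + 1e-9]
    # The optimum of a linear programme lies at a feasible corner point.
    return max(feasible, key=lambda p: m1 * p[0] + m2 * p[1])

# Invented two-enterprise farm: 100 ha of land, 600 hours of labour.
plan = solve_two_crop_lp(m1=500, m2=800, a1=1, a2=2, b1=10, b2=5, A=100, B=600)
```

A real farm-level model would have many activities and constraints and would use a proper LP or mixed-integer solver, but the structure, a gross-margin objective maximised subject to resource constraints, is the same.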
1.3. Future directions

The field of microsimulation has progressed significantly since the early days of Orcutt. The field has expanded in geographical scope in terms of the range of countries, in breadth in terms of the range of policies and issues simulated, and in analytical depth. In this chapter we have attempted to summarise the outline of the book in relation to the different dimensions of complexity that microsimulation models attempt to address: population, behaviour, policy, and temporal and spatial complexity. Despite the different objectives of the different sub-fields, there are very many overlaps in relation to data and methodological decisions, particularly in relation to how these dimensions of complexity are dealt with. One dimension of advancement in the field is the adaptation of existing techniques to new countries, for example recent extensions to Serbia (Randelovic & Rakic, 2013), Russia (Popova, 2013) and China (Linping, Weidong, & Hong, 2011). In addition to the use of microsimulation models in new countries, there is also the potential for more cross-country comparative research in different sub-fields of microsimulation. Another quick win is the application of microsimulation methods to other areas of policy where microsimulation models’ capacity to model complexity on heterogeneous agents is useful. Examples where the literatures are relatively small but which, given the large-scale public expenditures involved, could benefit from the use of microsimulation for ex-ante policy simulation include Education, Social Care and Local Government policy. There are more opportunities for linking models of different kinds. One of the early objectives of these models was to develop micro-based models of the entire economy, as in the case of Basu, Pryor, and Quint (1998).
However, the challenge of doing this is that the level of complexity created by the model itself may become intractable, effectively reducing the advantage of modelling, namely abstracting from the full complexity of reality in order to say something useful.
Much of the field has focused on static incidence or non-behavioural reduced-form simulations, with exceptions in areas such as labour supply, retirement choice, expenditure and environmental modelling. There are opportunities both to deepen the use of existing tools and to incorporate behavioural responses to different dimensions of human activity such as commuting, migration, education participation, demographic-related decision making, health-improving activities, etc. Each of the individual chapters highlights specific methodological and data improvements that could be made, which we will not repeat here. However, a common methodological challenge is that most microsimulation models only report outputs as point estimates. Despite Pudney and Sutherland’s (1994) work on the use of confidence intervals in microsimulation models, their use is still low, with some exceptions (Creedy, Kalb, & Kew, 2007; Pudney, Hancock, & Sutherland, 2006); Goedemé, Van den Bosch, Salanauskaite, and Verbist (2013) model the statistical significance of estimates. Given the complexity of model outcomes, more research would be valuable in relation to the visualisation of model outputs (Mueller, 2011). A challenge highlighted in O’Donoghue (forthcoming), which has hindered development in the field, has been the paucity of chronicling of methodological choices and evaluations in the development of models, with an over-emphasis on application and on verbal and short-term communication mechanisms such as conferences and internal team documentation. The advent of the International Journal of Microsimulation in recent years has helped the field to record these choices and thus provides a methodological foundation on which we can build for the future. The field of microsimulation has had a relatively ad hoc organisation, which to some extent has hindered development, with occasional conferences and published books.
However, the development of the International Microsimulation Association over the past decade, together with the International Journal of Microsimulation, has improved matters. More gains can be had through greater organisation within sub-fields (Dekkers & Zaidi, 2011). Scientific advance can likewise be quickened through greater interaction with parallel, relevant fields such as agent-based modelling (Niazi & Hussain, 2011) and social simulation (Troitzsch, 1996).
Acknowledgement

I gratefully appreciate the administrative support of my colleague Ursula Colohan, who coordinated the delivery of the individual chapters in such an efficient way.
References

Ahmed, V., & O’Donoghue, C. (2007). CGE-microsimulation modelling: A survey. MPRA Working Paper No. 9307.
Anderson, R. E., & Hicks, C. (2011). Highlights of contemporary microsimulation. Social Science Computer Review, 29(1), 3–8.
Bargain, O., & Peichl, A. (2013). Steady-state labor supply elasticities: A survey. ZEW Discussion Paper No. 13-084.
Basu, N., Pryor, R., & Quint, T. (1998). ASPEN: A microsimulation model of the economy. Computational Economics, 12(3), 223–241.
Bourguignon, F., Bussolo, M., & Cockburn, J. (2010). Guest editorial – macro-micro analytics: Background, motivation, advantages and remaining challenges. International Journal of Microsimulation, 3(1), 1–7.
Bourguignon, F., & Spadaro, A. (2006). Microsimulation as a tool for evaluating redistribution policies. The Journal of Economic Inequality, 4(1), 77–106.
Brown, L. (2011). Editorial – special issue on ‘health and microsimulation’. International Journal of Microsimulation, 4, 1–2.
Cockburn, J., Corong, E., & Cororaton, C. (2010). Integrated computable general equilibrium (CGE) micro-simulation approach. International Journal of Microsimulation, 3(1), 60–71.
Courtioux, P. (2012). How income contingent loans could affect the returns to higher education: A microsimulation of the French case. Education Economics, 20(4), 402–429.
Creedy, J., & Duncan, A. (2002). Behavioural microsimulation with labour supply responses. Journal of Economic Surveys, 16(1), 1–39.
Creedy, J., & Kalb, G. (2005). Discrete hours labour supply modelling: Specification, estimation and simulation. Journal of Economic Surveys, 19(5), 697–734.
Creedy, J., Kalb, G., & Kew, H. (2007). Confidence intervals for policy reforms in behavioural tax microsimulation modelling. Bulletin of Economic Research, 59(1), 37–65.
Dekkers, G., & van Leeuwen, E. (2010). Guest editorial – special issue on ‘methodological issues in microsimulation’. International Journal of Microsimulation, 3(2), 1–2.
Dekkers, G., & Zaidi, A. (2011). The European network for dynamic microsimulation (EURODYM) – A vision and the state of affairs. International Journal of Microsimulation, 4(1), 100–105.
Ferreira, F. H., & Leite, P. G. (2004). Educational expansion and income distribution: A microsimulation for Ceará. In van der Hoeven & Shorrocks (Eds.), Growth, inequality and poverty (pp. 222–250). London: Oxford University Press.
Flannery, D., & O’Donoghue, C. (2011). The life-cycle impact of alternative higher education finance systems in Ireland. Economic & Social Review, 42(3), 237–270.
Flannery, D., & O’Donoghue, C. (2013). The demand for higher education: A static structural approach accounting for individual heterogeneity and nesting patterns. Economics of Education Review, 34, 243–257.
Goedemé, T., Van den Bosch, K., Salanauskaite, L., & Verbist, G. (2013). Testing the statistical significance of microsimulation results: A plea. International Journal of Microsimulation, 6(3), 50–77.
Grimm, M. (2004). The medium- and long-term effects of an expansion of education on poverty in Côte d’Ivoire: A dynamic microsimulation study (No. 2004/32). Research Paper, UNU-WIDER, United Nations University (UNU).
Hermes, K., & Poulsen, M. (2012). A review of current methods to generate synthetic spatial micro data using reweighting and future directions. Computers, Environment and Urban Systems, 36(4), 281–290.
Hynes, S., Morrissey, K., O’Donoghue, C., & Clarke, G. (2009). Building a static farm level spatial microsimulation model for rural development and agricultural policy analysis in Ireland. International Journal of Agricultural Resources, Governance and Ecology, 8(3), 282–299.
Immervoll, H., & Pearson, M. (2009). A good time for making work pay? Taking stock of in-work benefits and related measures across the OECD (No. 3). IZA Policy Paper.
Iori, G. (2002). A microsimulation of traders’ activity in the stock market: The role of heterogeneity, agents’ interactions and trade frictions. Journal of Economic Behavior & Organization, 49(2), 269–285.
Klevmarken, N. A. (1997). Behavioral modeling in micro simulation models: A survey. Working Paper No. 1997:31, Department of Economics, Uppsala University.
Levy, H., Matsaganis, M., & Sutherland, H. (2013). Towards a European Union child basic income? Within and between country effects. International Journal of Microsimulation, 6(1), 63–85.
Li, J., & O’Donoghue, C. (2012). A methodological survey of dynamic microsimulation models (No. 002). United Nations University, Maastricht Economic and social Research and training centre on Innovation and Technology.
Li, J., & O’Donoghue, C. (2013). A survey of dynamic microsimulation models: Uses, model structure and methodology. International Journal of Microsimulation, 6(2), 3–55.
Linping, X., Weidong, T., & Hong, L. (2011). Constructing a basefile for simulating Kunming’s medical insurance scheme of urban employees. International Journal of Microsimulation, 4(3), 3–16.
Lux, T., & Marchesi, M. (2000). Volatility clustering in financial markets: A microsimulation of interacting agents. International Journal of Theoretical and Applied Finance, 3(4), 675–702.
Martini, A., & Trivellato, U. (1997). The role of survey data in microsimulation models for social policy analysis. Labour, 11(1), 83–112.
Merz, J. (1991). Microsimulation – A survey of principles, developments and applications. International Journal of Forecasting, 7(1), 77–104.
Merz, J. (1994). Microsimulation – A survey of methods and applications for analyzing economic and social policy (No. 9). FFB Discussion Paper.
Miller, E. J., & Salvini, P. A. (2001). The Integrated Land Use, Transportation, Environment (ILUTE) microsimulation modelling system: Description & current status. In D. Hensher & J. King (Eds.), Travel behaviour research: The leading edge (pp. 711–724). Amsterdam: Elsevier.
Mot, E. S. (1992). Survey of microsimulation models. Social Security Research Committee. The Hague: VUGA.
Mueller, G. P. (2011). Microsimulation of virtual encounters: A new methodology for the analysis of socio-cultural cleavages. International Journal of Microsimulation, 4(1), 21–34.
Niazi, M., & Hussain, A. (2011). Agent-based computing from multi-agent systems to agent-based models: A visual survey. Scientometrics, 89(2), 479–499.
O’Donoghue, C. (1999). Estimating the rate of return to education using microsimulation. Economic and Social Review, 30(3), 249–266.
O’Donoghue, C. (2001). Dynamic microsimulation: A methodological survey. Brazilian Electronic Journal of Economics, 4(2), 77.
O’Donoghue, C. (2014). Increasing the impact of microsimulation modelling: Presidential address, World Congress of the International Microsimulation Association, Canberra. International Journal of Microsimulation (forthcoming).
O’Donoghue, C., Morrissey, K., & Lennon, J. (2014). Spatial microsimulation modelling: A review of applications and methodological choices. International Journal of Microsimulation, 7(1), 26–75.
Orcutt, G. H. (1957). A new type of socio-economic system. The Review of Economics and Statistics, 39(2), 116–123.
Orcutt, G. H. (1960). Simulation of economic systems. The American Economic Review, 50(December), 894–907.
Popova, D. (2013). Impact assessment of alternative reforms of child allowances using RUSMOD – the static tax-benefit microsimulation model for Russia. International Journal of Microsimulation, 6(1), 122–156.
Pudney, S., Hancock, R., & Sutherland, H. (2006). Simulating the reform of means-tested benefits with endogenous take-up and claim costs. Oxford Bulletin of Economics and Statistics, 68(2), 135–166.
Pudney, S., & Sutherland, H. (1994). How reliable are microsimulation results? An analysis of the role of sampling error in a UK tax-benefit model. Journal of Public Economics, 53(3), 327–365.
Rahman, A., Harding, A., Tanton, R., & Liu, S. (2010). Methodological issues in spatial microsimulation modelling for small area estimation. International Journal of Microsimulation, 3(2), 3–22.
Randelovic, S., & Rakic, J. Z. (2013). Improving work incentives in Serbia: Evaluation of a tax policy reform using SRMOD. International Journal of Microsimulation, 6(1), 157–176.
Salvini, P., & Miller, E. J. (2005). ILUTE: An operational prototype of a comprehensive microsimulation model of urban systems. Networks and Spatial Economics, 5(2), 217–234.
Spielauer, M. (2007). Dynamic microsimulation of health care demand, health care finance and the economic impact of health behaviours: Survey and review. International Journal of Microsimulation, 1(1), 35–53.
Sutherland, H. (1995). Static microsimulation models in Europe: A survey (No. 9523). Faculty of Economics, University of Cambridge Microsimulation Unit Discussion Paper.
Tanton, R. (2014). A review of spatial microsimulation methods. International Journal of Microsimulation, 7(1), 4–25.
Tanton, R., & Edwards, K. (Eds.). (2013). Spatial microsimulation: A reference guide for users. Netherlands: Springer.
Troitzsch, K. G. (Ed.). (1996). Social science microsimulation. Springer.
Van Tongeren, F. W. (1998). Microsimulation of corporate response to investment subsidies. Journal of Policy Modeling, 20(1), 55–75.
Waddell, P. (2002). UrbanSim: Modeling urban development for land use, transportation, and environmental planning. Journal of the American Planning Association, 68(3), 297–314.
Wegener, M., & Spiekermann, K. (1996). The potential of microsimulation for urban models. In Microsimulation for urban and regional policy analysis. European Research in Regional Science, 6, 149–163.
Zaidi, A., & Rake, K. (2001). Dynamic microsimulation models: A review and some lessons for SAGE. Simulating Social Policy in an Ageing Society (SAGE) Discussion Paper No. 2.
CHAPTER 2
Hypothetical Models

Irina Burlacu, Cathal O’Donoghue and Denisa Maria Sologon
2.1. Introduction

Microsimulation models can be used with various types of datasets, including synthetic data constructed on the basis of a number of simple assumptions. Models based on hypothetical data offer useful insights despite their simplicity. In fact, their usefulness stems largely from the simplicity of the approach, as it avoids the complexity of the population. In this chapter we review the development of microsimulation models using hypothetical data. Modern policy problems increasingly require methodological approaches that are able to capture the complexity of interactions between policies and socio-economic realities, and between policies and the institutional framework. These are further complicated when one takes into account the behavioural, temporal and/or spatial dimension. Microsimulation models are increasingly used for analysing these problems as they are able to handle a high degree of complexity. Microsimulation models use data on micro-units (e.g. individuals, households, firms, farms, etc.) to simulate the effect of policy or other socio-economic changes on the population of micro-units (Mitton, Sutherland, & Weeks, 2000). The need for microsimulation arises from the impossibility of observing simultaneously the outcomes for the same micro-unit under the treatment and in the absence of the treatment (e.g. policy change), and also crucially as a tool to understand the complexity of a policy problem. Microsimulation models are thus ex-ante evaluation tools that generate synthetic micro-level data which illustrate counterfactual situations that would prevail under alternative situations (e.g. policy reform scenarios), ceteris paribus.
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293000
© 2014 BY EMERALD GROUP PUBLISHING LIMITED ALL RIGHTS RESERVED
Microsimulation models attempting to capture the complexity of a problem can sometimes make the problem more complex by virtue of their own intricacy. Therefore there is a need to abstract from complexity to better understand an issue. A model based upon hypothetical data by definition abstracts from the complexity stemming from population and behaviour heterogeneity, which allows the user to focus on a single dimension of complexity: policy. Such models are thus particularly useful in helping us to understand specific effects on particular micro-unit types. For example, national tax-benefit systems incorporate income tax rules, social security contribution and benefit rules, and means-tested benefit rules which interact with one another, resulting in serious kinks in terms of work incentives and in interactions between various benefits, and between benefits and taxes. Hypothetical data are particularly useful in this context by enabling a deeper understanding of the functioning of complex policies.

Hypothetical data are applied in many fields, ranging from social policy to engineering1 and medicine.2 In the field of social science, they are the workhorse of the OECD jobs strategy, where policies are evaluated in terms of unemployment replacement rates (Pearson & Scarpetta, 2000), and of studies of the European Commission exploring the unemployment crisis in Europe (see Buti, Sestito, & Wijkander, 2001; OECD, 1996). The use of hypothetical data is particularly valuable in the ex-ante evaluation of a policy reform that is aimed at a certain type of micro-unit (e.g. households in particular situations). Microsimulation models based on hypothetical data can be used as a communication tool regarding the likely impact of the policy reform on the target micro-units (e.g. households). In times of careful allocation of economic resources, rapid provisional exploration of policy implementation can serve as an important means of saving time and money.
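Such kinks can be made visible with a stylized tax-benefit system applied to a hypothetical single earner; every rate and threshold below is invented for illustration:

```python
def net_income(gross):
    """Net income under an invented stylized system: a flat 20% income tax above
    a 10,000 allowance, plus a means-tested benefit withdrawn at 60% of gross."""
    allowance = 10_000
    tax = 0.20 * max(gross - allowance, 0.0)
    benefit = max(6_000 - 0.60 * gross, 0.0)
    return gross - tax + benefit

def metr(gross, delta=1.0):
    """Marginal effective tax rate: the share of one extra unit of earnings
    lost to tax plus benefit withdrawal."""
    return 1.0 - (net_income(gross + delta) - net_income(gross)) / delta

# On the benefit taper the hypothetical earner faces a 60% METR; above the
# allowance, once the benefit is exhausted, only the 20% tax rate applies.
low, high = metr(5_000), metr(20_000)
```

Sweeping `gross` over a range and plotting `metr(gross)` traces the full budget constraint for this hypothetical household, which is precisely the kind of output used to communicate work-incentive effects of a reform.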
Ex-ante policy analysis encompasses the possibility of simulating modifications in multiple policy parameters in a fast and efficient manner by using synthetic or hypothetical data. This chapter aims to illustrate the increased applicability of such data in policy analysis, and to explain its considerable advantages and main drawbacks. In the literature, hypothetical or synthetic data are used in a wide variety of settings, ranging from hypothetical family calculations, as in the case of the OECD Jobs Study (OECD, 1996), to situations where the complexity of
1 One example is the study investigating the performance of the Washington Metropolitan Area Transit Authority’s Metrorail for large-scale evacuations in the event of a hypothetical attack on the Pentagon (VanLandegen & Chen, 2012). Another example is a hypothetical transit network consisting of 22 routes and 194 stops which has been developed within a microsimulation platform (Paramics) (Wahba & Shalaby, 2005).
2 An exploratory microsimulation model was developed that follows hypothetical individuals and simulates the course of HIV/AIDS, treatment with highly active antiretroviral therapy, and transmissions (Cragin et al., 2012).
the hypothetical data reaches that of a sample of the population. Examples include dynamic cohort models, which take hypothetical samples of the population at age zero and simulate hypothetical age profiles of the population (Harding, 1993; O’Donoghue, 2002a, 2002b). Spatial microsimulation modelling deals almost entirely with synthetic data, generating synthetic localized distributions of the population; albeit often based upon actual micro-data, the resulting distributions are synthetic. There is also an intermediate level, with models that contain some heterogeneity, for example taking typical family types and varying one or other marginal distribution, such as hours worked, the wage rate or the number of children. In this study we differentiate between models which are synthetic or use hypothetical data in general, and those which specifically concentrate on abstracting from the complexity of the population. We focus primarily on the latter and refer in passing to models which incorporate a greater degree of heterogeneity. Despite their wide application, the literature lacks, to the best of our knowledge, a thorough survey of the various uses of hypothetical data. This chapter attempts to fill this gap in the literature, thereby complementing the existing surveys on dynamic, static and behavioural applications of microsimulation techniques. The aim is to summarize and describe the multiple areas where hypothetical data are used, to synthesize their methodological dimensions and to discuss some of their limitations. We then review the progress made by various disciplines, starting with the earliest models, and suggest some directions for future development. The remainder of the chapter is organized as follows. The next two sections cover the contextualization of microsimulation using hypothetical data as a research and policy tool, followed by a detailed discussion of the methodology.
The last two parts discuss several applications, advantages and disadvantages, and future trends and perspectives.

2.2. Context and uses

This section provides an overview of the existing types of microsimulation utilizing hypothetical data,3,4 and investigates various areas in which this
3. This section relies on two methods of data collection and revision: our own literature review and the literature synthesized by the Lithoscope v1.review tool. In total, 140 items have been identified as applying hypothetical data, of which 129 are studies that involve actual analysis applying a hypothetical model and 11 are either tax-benefit calculators or newspaper articles about the use of hypothetical families. We therefore treat these 129 studies as constituting 100% of our literature review database.
4. Alternative names for hypothetical data are projected, synthetic, family-based model and model family technique. The contrast between real and hypothetical data can be found under the following names: ‘actual vs. hypothetical data’ (Berger, 2001); ‘empirical data vs. stylized households’ (Immervoll & O’Donoghue, 2002); ‘empirical data vs. model family approach’ (Cantillon, Van Mechelen, Marx, & Van den Bosch, 2004); or ‘representative vs. synthetic data’ (cite).
Irina Burlacu, Cathal O’Donoghue and Denisa Maria Sologon
type of data is applied. We divide our analysis taking into account, first, the policy scope, that is, the types of policy or discipline areas which apply these models; second, the geographic scope, both in terms of the range of countries and spatial dimensions; and third, the analytical scope, which refers to the type of analysis undertaken.

2.2.1. Context of the hypothetical models
Hypothetical microsimulation models are often used in a different context from traditional static or dynamic microsimulation models, where the focus is largely on inferring the policy impact at the population level. Hypothetical microsimulation models, on the other hand, tend to be applied in areas that other methods cannot reach easily. Their use is often driven by one or more of the following factors:

(a) illustrative purposes;
(b) validation;
(c) cross-national comparisons;
(d) replacement of insufficient or missing microdata;
(e) communication with the public.
While microsimulation models can incorporate complex policies and population dynamics into the analysis, they can sometimes be difficult to understand due to the overlapping complexities of population, policy and behavioural processes. The results of microsimulation models can be affected by many factors, which makes it difficult to illustrate the net effect of a single factor. Hypothetical models, on the other hand, often focus on a particular scenario under certain predefined assumptions. This allows the model developer to examine a simplified version of the simulated observations. Synthetic data are used to examine the practical significance of hypothetical policy reforms (Creedy & Scutella, 2004). Hypothetical families are applied to illustrate different policy scenarios. In the case of Modgen models, such as LifePaths, this is often used to create a narrative of the storyline of the simulated individuals (Spielauer, 2006). This also helps to highlight the basic features of the simulation model. STINMOD also uses this method to identify potential defects within the model and to illustrate model changes both within and outside of the team. The results obtained using hypothetical data are not solely presented in the form of academic papers. There are also policy notes and briefs (e.g. Nichols, Clingman, Burkhalter, Wade, & Chaplain, 2009), policy papers (e.g. Rake, Falkingham, & Evans, 1999) and reform papers, such as “Making work pay” (OECD, 1995).

Besides illustrative purposes, an important use of hypothetical models is validation. Regular static and dynamic models can be difficult to debug due to their internal complexity (Dekkers, 2010). A simplified stylized method based on hypothetical data can also be used to validate the results
obtained from simulations of other microsimulation models that use observed data. The main difference between the two approaches is that, whereas the approach based on observed data captures population and labour market diversity, the stylized approach emphasizes the mechanisms, interactions and outcomes of the tax-benefit system for typical cases. In EUROMOD, for example, stylized analysis was frequently used for validation purposes (Sutherland et al., 2008).

Another reason to use hypothetical models is to compare results across countries. However, cross-country comparison is difficult given that the prevalence of a particular household situation may vary: certain family situations (divorce, lone parenthood, etc.) are more common in some countries than in others.5 Similarly, the earnings distribution and the number of household recipients of social benefits vary among countries (Immervoll, Marianna, & d’Ercole, 2004). To compensate for the lack of cross-country data exchange in cases of migrants or cross-border workers, hypothetical models can be used to simulate policy rules in a two-, three- or more-country context, to assess the simultaneous impact of two or more fiscal and social security systems (Burlacu & O’Donoghue, 2014).

Given the nature of the data generation process, hypothetical models are also widely used to reconstruct a data set when data is missing. The first identified study that mentions hypothetical cases dates back to 1983 (Atkinson, King, & Sutherland, 1983); however, informal evidence can be found in earlier years. The method has been applied to reconstruct or construct data series from the early, mid and late 20th century (e.g. since 1913 in Bakija (2009), from 1953 in Cannon and Tonks (2004) and from 1979 in Evans and Williams (2009)). The usefulness of hypothetical data has been demonstrated in the context of policy evaluations when data does not exist or is scarce.
For example, in the field of migration studies, there is no comparable international data on cross-border commuting rates (Bonin et al., 2008). Mobility for work in the European Union is widely analysed using the European Labour Force Survey; however, the “small number of annual cross-border moves makes it problematic to use them for showing detailed and statistically reliable breakdowns by country cross-border mobility” (Bonin et al., 2008). Moreover, datasets merging two social security and fiscal systems to assess their impact on income are rare. Burlacu and O’Donoghue (2014), for instance, use hypothetical data to cover this gap.

Lastly, hypothetical models are an effective way of calculating the impact of policies and communicating their results. They can be applied in almost any situation, quickly and inexpensively, for example using Excel (e.g. work incentives, family benefits, housing benefits, corporate tax, farms, unemployment, pensions, taxation, education grants, rates of return). A good example is
5. See further discussions in Chapter 3: Cross-country Models.
provided by the American Tax Policy Center, which explains how tax cuts in particular years affect the income of middle-income families; a one-page sheet is provided for a wide audience. One of the most illustrative examples of the practicability of synthetic data is the tax-benefit calculator (Bakija, 2009). Atkinson et al. (1983) mention that specialists in the field of tax-benefit calculations, when considering tax and benefit changes in a country, in the first instance reach for their pocket calculators and work out the effect of the changes on a series of hypothetical families. Another quick and straightforward calculation tool is the so-called ‘Ready Reckoner’. For example, the Australian government offers the Weekly Tax Table,6 a calculation platform for citizens. It answers specific questions posed by employers: what if your employee is a foreigner? What if there are 53 pay periods in a financial year? Some county councils in the United Kingdom offer a similar service, addressing questions regarding housing benefits, education benefits and council tax reductions.7,8,9

2.2.2. Policy scope

Table 2.1. Fields of hypothetical data application

Area                                                       Number of studies
Social Security and Taxation                                      70
Media                                                             18
Health Economics                                                  26
Transportation, Land Use and Engineering                          23
Geography, Spatial Planning & Development                         17
Political Science and Public Administration, Education             5
Computational Social Science                                       5
Agriculture                                                       17
Total                                                            181

Source: Burlacu et al. (2014).

Hypothetical models are applied in many types of policy analysis. Table 2.1 outlines the number of papers cited by Burlacu, O’Donoghue, and Sologon (2014) by field of application, which, although not fully
6. http://www.ato.gov.au/uploadedFiles/Content/MEI/downloads/BUS00319019N10050512.pdf
7. http://www.conwy.gov.uk/doc.asp?cat=10048&doc=31354
8. http://www.conwy.gov.uk/upload/public/attachments/559/Conwy_Ready_Reckoner_2013_eng.pdf
9. http://www.conwy.gov.uk/doc.asp?cat=10049&doc=31356
referencing all papers, is reasonably comprehensive. The most significant application is in the area of social security and taxation, largely through limited-heterogeneity typical-family-type simulations. Health economics is the next largest field of application, followed by transportation and land use, spatial analysis and agriculture. There is therefore significant variety in the fields using microsimulation models based on hypothetical data.

We consider first the models used for social security and taxation-based analyses. Hypothetical cases are widely used in applied social policy to investigate changes in disposable income due to taxation and social transfers for typical families. These models have a tax-benefit model at their core. Tax-benefit models are computer programs that represent the rules governing a country’s tax and benefit system; by applying these rules to a set of households that is representative of the population, it is possible to evaluate the impact of existing tax-benefit systems on income changes (Immervoll & O’Donoghue, 2002). Most applications are in the area of social policy or social security, such as taxation and social insurance contributions, pension systems, family benefits and unemployment. As indicated in Table 2.1, the field of social security and taxation,10 followed by pension systems, is where hypothetical cases are applied the most.

At their most basic, tax-benefit hypothetical models are used to simulate tax-benefit policies for different typical families. At their simplest, these models are calculations undertaken by newspapers (Shetty, 2012), economic papers of financial affairs departments (Carone & Salomaki, 2004; Harding, Payne, Vu, & Percival, 2006; Irish Department of Finance, 2014; Starsky, 2006) or local governments (Conwy County Council, 2014) to explain policy changes to their readers, customers or citizens. They can thus serve as a communication device with the public.
Tax-benefit calculators and case studies are fast and effective ways of highlighting policy impacts to a wide audience (e.g. the Ready Reckoner). For example, in one Research Note (Phillips & Toohey, 2013), the authors report the fiscal position of low- and middle-income Australian families after receiving benefits and paying taxes. They develop 16 ‘hypothetical’ families using STINMOD, NATSEM’s model of the Australian tax and transfer system, to determine the cash benefits received and personal income taxation paid by such families and their position within the income distribution. After attributing household characteristics and income components to each type of family, they summarize the impact of public policy.
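As a minimal illustration of how such a hypothetical-family calculation works, the sketch below applies invented tax and benefit rules (not STINMOD’s or any actual system’s) to a stylized family:

```python
# Illustrative hypothetical-family tax-benefit calculation.
# All rates, thresholds and benefit amounts are invented for exposition;
# they do not describe any actual tax-benefit system.

def income_tax(gross):
    """Simple two-band income tax: 20% up to 30,000; 40% above."""
    band = 30000.0
    if gross <= band:
        return 0.20 * gross
    return 0.20 * band + 0.40 * (gross - band)

def child_benefit(n_children):
    """Flat payment per child."""
    return 1500.0 * n_children

def disposable_income(gross, n_children):
    """Disposable income = gross earnings - taxes + benefits."""
    return gross - income_tax(gross) + child_benefit(n_children)

# A hypothetical one-earner couple with two children:
print(disposable_income(40000.0, 2))  # 40000 - (6000 + 4000) + 3000 = 33000.0
```

Varying the gross earnings or the number of children in such a sketch is all that is needed to produce the typical-family comparisons described above.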
10. Taxes in this section refer only to personal income taxes; a few other tax-related works are identified in the field of business.
These models are also used by researchers to understand the impact of policy changes. For example, O’Donoghue (2002a, 2002b) describes long-run changes in the Irish tax-benefit system over 50 years, enabling analysis of the distributional implications of policy change over a period when microdata did not exist. Another very common use of these models is in international comparisons. The hypothetical microsimulation models used by the OECD (Immervoll et al., 2004; OECD, 1995) are frequently used to understand comparative work incentives and unemployment replacement rates across OECD countries. In cross-national comparisons of tax-benefit systems, the stylized method based on hypothetical data has the advantage of allowing the analyst to consider the functioning of the tax-benefit system in a comparable and consistent manner (see e.g. Buti et al., 2001). Many other studies, such as Evans and Lewis (1999), compare the performance of different tax-benefit systems across countries. Some papers focus on subsets of the tax-benefit system, such as family taxation (Pechman & Engelhardt, 1990) or minimum income systems (Cantillon et al., 2004). At the corporate level, Gordon and Tchilinguirian (1998) analyse the incentives associated with tax systems for firms across OECD countries.

Hypothetical data is particularly useful when no data is available. One example is the lack of internationally comparable data on cross-border workers, a context in which the use of hypothetical data offers valuable insight (Bonin et al., 2008). Taking an alternative perspective on comparative research, Burlacu and O’Donoghue (2014) analyse, using a hypothetical microsimulation model, the welfare implications of cross-border working. These models are used not just to consider working-age policy impacts and incentives, but also pension-age impacts. Rake et al. (1999) consider the distributional implications of pension policy reform over the life-cycle.
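The replacement-rate comparisons mentioned above reduce to a ratio of out-of-work to in-work net income for a hypothetical worker. A minimal sketch, with invented tax and benefit parameters rather than any country’s actual rules:

```python
# Illustrative net replacement rate for a hypothetical worker.
# The flat tax rate, benefit rate and cap are invented for exposition only.

def net_income_in_work(gross, tax_rate=0.25):
    """Net earnings under a flat income tax."""
    return gross * (1.0 - tax_rate)

def net_income_out_of_work(gross, benefit_rate=0.6, benefit_cap=20000.0):
    """Unemployment benefit: 60% of previous gross earnings, capped,
    assumed untaxed in this stylized example."""
    return min(benefit_rate * gross, benefit_cap)

def net_replacement_rate(gross):
    """Ratio of out-of-work to in-work disposable income."""
    return net_income_out_of_work(gross) / net_income_in_work(gross)

print(round(net_replacement_rate(30000.0), 2))  # 18000 / 22500 = 0.8
```

Re-running the same calculation with each country’s rules for the same hypothetical worker is precisely what makes such cross-country incentive comparisons consistent.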
Thorburn, Rocha, and Morales (2007) consider the rate of return to pensions in Chile. Another good example is the study of Wittenburg, Stapleton, and Scrivner (2000), which examines how raising both the normal retirement age and the Medicare eligibility age would affect Social Security Disability Insurance (DI) eligibility, Medicare eligibility and Medicare expenditures under two hypothetical policy scenarios.

Medicine and health economics is the second largest field where synthetic data is applied. Models are used to consider, for example, the cost-effectiveness of different interventions (Andrews et al., 2012; Beukelman, Saag, Curtis, Kilgore, & Pisu, 2010; Hiligsmann, McGowan, Bennett, Barry, & Reginster, 2012; Schousboe et al., 2007). Several health problems are addressed, such as tuberculosis (Andrews et al., 2012), cancer (McMahon et al., 2012), osteoporosis (Beukelman et al., 2010) and coronary artery disease (Drescher et al., 2012), as well as demographic phenomena such as mortality (Akushevich, Kravchenko, & Manton, 2007).
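Cost-effectiveness comparisons of this kind typically reduce to an incremental cost-effectiveness ratio (ICER): the extra cost of one intervention over another, divided by the extra health gain (e.g. in quality-adjusted life years). A minimal sketch with invented figures:

```python
# Illustrative incremental cost-effectiveness ratio (ICER) between two
# interventions; all cost and QALY figures are invented for exposition.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Extra cost per extra QALY of the new intervention over the old."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical screening programme vs. no screening, per simulated patient:
value = icer(cost_new=12000.0, qaly_new=8.2, cost_old=4000.0, qaly_old=7.8)
print(round(value))  # 8000 extra cost / 0.4 extra QALYs = 20000 per QALY
```

The microsimulation step in the cited studies supplies the cost and QALY inputs by following hypothetical patients through a modelled disease course.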
Microsimulation models with synthetic data are used to evaluate the clinical impact and cost-effectiveness of tuberculosis and cancer screening (e.g. mammography), and to estimate the effects and costs of food and cancer programmes. Spatial models using synthetically generated data have been used to map environments that may influence health outcomes and conditions such as obesity (Edwards & Clarke, 2009), while Morrissey, Hynes, Clarke, Ballas, and O’Donoghue (2008) study the likelihood that individuals in different jurisdictions will attend a GP surgery, an application spanning both spatial microsimulation and health care, albeit with greater heterogeneity than is considered in this study.

Transport planning is a very significant part of the microsimulation literature. As data may not always be available, hypothetical scenarios are often simulated. For example, in order to evaluate and analyse the level of reliability, one can test the performance of a road network or system by simulating a hypothetical distribution of traffic flows or travel times (Hollander & Liu, 2008). The authors argue that transport analysis is in serious need of tools to estimate this distribution in hypothetical scenarios, but there are currently few such tools.

Geography, mobility, spatial planning and development form another core area where synthetic data is applied; work here also relies primarily on generating synthetic or hypothetical data, given the typically low spatial resolution of survey data (Hermes & Poulsen, 2012). The generation of synthetic population estimates through spatial microsimulation has been a popular technique in recent years, with applications to research and policy problems in many areas of the social sciences (Birkin & Clarke, 2012). Spatial microsimulation models typically match census of population data with survey data in order to simulate synthetic populations of individuals and households within small-scale geographic areas (O’Donoghue, 2012).
Estimation techniques typically involve cloning, sampling or matching households in surveys with small-area census data (Birkin & Clarke, 2012). When model estimates are benchmarked against real-world data, the models are typically well behaved and very robust, but they can struggle to capture the diversity of spatial variations shown by observed data (Birkin & Clarke, 2012). Nevertheless, this typically involves more heterogeneity than the majority of the models considered here.

Simulation with synthetic cases has also been identified in the area of geographical mobility of the labour force. Migration and workforce mobility face data scarcity due to the difficulty of measurement (International Organization for Migration, 2011). Synthetic cases of active wage earners working across borders thus enable researchers to quantify what would otherwise be unmeasurable; we refer to cross-border work (e.g. Burlacu & O’Donoghue, 2014; Collins, 2008).

Another significant area utilizing hypothetical microsimulation models is agriculture. Models such as those developed by the International Farm Competitiveness network have been used to compare farming systems
across countries (Hemme, Deblitz, Isermeyer, Knutson, & Anderson, 2000). Like hypothetical microsimulation models used for tax-benefit analysis, they can be used for comparative research where microdata is not comparable. They are particularly used to compare the relative competitiveness of different farming systems, such as dairy (Manjunatha, Shruthy, & Ramachandra, 2013; Thorne & Fingleton, 2006) and oilseed crops (Prochorowicz & Rusielik, 2007). They are well suited to farming systems where there is a paucity of microdata, such as organic farming (Zander, Thobe, & Nieberg, 2007). They can also be used for policy analysis (Doucha & Vaněk, 2006).
2.2.3. Geographical scope

The geographical scope refers to the geographic spread of the use of hypothetical models. Much of their early development took place in describing the impact of policy changes within and between European and OECD countries (Atkinson & Sutherland, 1983; Buti et al., 2001; OECD, 1996). As policies become more complicated, microsimulation models help to explain their differential impact.

Table 2.2. Geographic scope of hypothetical models

Australia                   16
Belgium                      3
Belgium and Luxembourg       5
Canada                       2
Chile                        1
Czech Republic               1
EU countries                10
Germany                      2
Hungary                      2
India                        2
Ireland                     15
Italy                        1
Japan                        1
New Zealand                  1
OECD countries              17
The Netherlands              1
Spain                        1
Ukraine                      1
UK                          17
USA                         18

Source: Burlacu et al. (2014). Note: For models that dealt with theoretical issues, it was not possible to identify the geographical scope of the models.

Table 2.2 describes the geographical scope or spread of the models used in the analysis undertaken by Burlacu et al. (2014). The number of papers
referred to is fewer than in Table 2.1, as some papers are theoretical or are not country-specific, particularly in the medical and transport spheres. The distribution corresponds to what one would expect to find in other areas of microsimulation, with the highest shares in the United Kingdom, the United States, Australia and Ireland. Given their extensive use for international comparisons, multicountry analyses across the OECD and the EU are also prominent.

One of the primary advantages of hypothetical microsimulation models is their simplicity and relatively low data requirements. This adds both to their understandability and to the possibility of using them in countries where data availability, or access to the human capital required for developing a data-based model, is more limited. There is thus a wide variety of analyses across Eastern European countries, often from a comparative perspective. There are also examples from India.
2.2.4. Analytical scope

While the policy scope refers to the area of application, the analytical scope refers to the type of analysis undertaken. The types of analysis mirror those of the other types of microsimulation models reported in this handbook. The breadth of analysis is thus extremely broad. Here, we focus on the main types of analysis and their rationale.

At its simplest, particularly in the case of tax-benefit analysis, a hypothetical model simulates the impact of taxes and benefits to produce disposable incomes, or the impact of a particular policy on a particular family type (Phillips & Toohey, 2013). Some models generate net incomes, while others, where the focus is on a particular measure, model just a single instrument or a subset of instruments (Immervoll & Barber, 2006). As a result, they are good communication tools, abstracting from the complexity of the population. The flipside, however, is the main criticism of hypothetical models. Even where a range of typical families is considered, they are in fact “typical” of a very small part of the income distribution and as a result can be misleading (Atkinson et al., 1983). For example, Atkinson and Sutherland (1983) highlight that the typical families used by the government in explaining policy change cover only about 4% of all families.

A parallel type of analysis is to consider incentives associated with entering or leaving work (Immervoll & O’Donoghue, 2002), incentives to vary hours (Carone, Immervoll, Paturot, & Salomäki, 2004) or investment incentives (Gordon & Tchilinguirian, 1998). In addition to considering the effect of existing policy, hypothetical models are also used to evaluate the ex-ante effects of a policy. For instance, in one of the most widespread areas of application, social security, one could simulate the effect of introducing new
pension regulation and how that would affect the income of pensioners (Evans & Lewis, 1999). Synthetic data has high relevance:
• in ex-ante analysis, namely the application of such data when a policy has not yet been implemented and its effects on actual populations are difficult to estimate;
• in applications in various policy areas with missing or insufficient data;
• for academic or policy debates about the validity and reliability of such data when discussing the effects of policies on the population.

Most of our examples of analytical scope thus far have focused on tax-benefit policies for working-age people. Different analyses are undertaken for pension-age-related policies. Rake et al. (1999) model the level of benefits received under alternative pension rules at different stages over the life-cycle post-pension age; akin to the budget constraint analysis above, but where age is varied rather than hours worked. Keenay and Whitehouse (2003) compare the average and marginal tax rates for older workers across OECD countries, again analogous to working-age analyses. Many analyses of pension-age-related policy focus on the rate of return to pensions. For example, O’Donoghue (2002a, 2002b) considers the lifetime return to a social insurance pension system depending upon the number of contributions made. There are also many examples of what are called ‘Money’s Worth’ analyses (Cannon & Tonks, 2004, 2009; James & Villas, 2000; Thorburn et al., 2007). These models play a prominent role in social security debates.

For medical and health economic analyses, instead of looking at disposable income returns to policy measures, the focus is on measuring the cost-effectiveness of different medical interventions (Beukelman et al., 2010; Hiligsmann et al., 2012; Schousboe et al., 2007). Within the field of agriculture, the primary focus is on quantifying the cost associated with a given level of production, reflecting different agronomic, market and skill conditions in different countries (Hemme et al., 2000).
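‘Money’s Worth’ and rate-of-return measures of the kind cited above can be sketched as an internal rate of return on a stream of contributions (negative) and benefits (positive): the discount rate at which the stream’s net present value is zero. The cash flows and the bisection solver below are illustrative assumptions, not any study’s actual figures:

```python
# Illustrative internal rate of return for a hypothetical pension career;
# the contribution and benefit amounts are invented for exposition.

def npv(rate, cashflows):
    """Net present value of yearly cash flows at a given discount rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.5, hi=1.0):
    """Bisection search for the rate at which NPV crosses zero
    (assumes a single sign change in the cash flows)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0.0:
            lo = mid  # NPV still positive: the rate can go higher
        else:
            hi = mid
    return (lo + hi) / 2.0

# Ten years of contributions of 1,000, then ten years of benefits of 1,800:
flows = [-1000.0] * 10 + [1800.0] * 10
print(round(irr(flows), 3))
```

Comparing this rate of return across cohorts or reform scenarios is the essence of a ‘Money’s Worth’ analysis; the benefit-to-tax ratio and net present value measures use the same cash-flow stream.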
As in the case of tax-benefit policy, where hypothetical models are often used as part of model validation, the successful use of traffic microsimulation software depends crucially on the verification and validation (V&V) procedures for testing the core behavioural algorithms (Reinke, Dowling, Hranac, & Alexiadis, 2004). These must be sufficient to convince researchers, software developers and practitioners that the algorithms are accurate and robust (Reinke et al., 2004).
2.3. Methodological characteristics and choices

In this section we discuss the methodological choices facing hypothetical microsimulation model developers. While hypothetical models may seem simpler, they involve many of the same methodological
choices as a full microsimulation model may face, although simplifying assumptions are used to reduce the complexity of a hypothetical model. A wide range of dimensions and types of parameters have been identified throughout the literature. These dimensions include:
• the type of microsimulation (what);
• the unit of analysis and variation (who);
• measurements (how and how much).

In this study, we focus solely on the choices that are most relevant or important for hypothetical models, identifying the parameters that have been selected and the context of their application. Further, the unit of analysis and unit of variation are discussed. After deciding upon the type of analysis, the choice of the target group or the unit to be investigated (a farm, a company, a household) is the next step in carrying out the analysis. These choices include:

• interaction with another model;
• the unit of analysis;
• period of analysis;
• unit of variation;
• analytical measures;
• methods to update the underlying attributes.
2.3.1. Interaction with another model

Within the context of using hypothetical family microsimulation models to facilitate validation, or to exemplify analytical results without recourse to the full variability of the population, there are quite a few examples (approximately 30 papers11) in the literature where population microsimulation models are utilized to simulate a subset of hypothetical families. Examples of models used for both population and hypothetical family calculations include EUROMOD (Berger et al., 2001), STINMOD (Harding et al., 2006) and SWITCH (Callan, Nolan, & O’Donoghue, 1996; Callan, O’Donoghue, & O’Neill, 1994; Callan, O’Neill, & O’Donoghue, 1995). Most analyses use bespoke models, such as the OECD models (Immervoll et al., 2004; Martin, 1996), or models built for specific analyses such as typical family benefits (Bradshaw & Finch, 2002), pension calculations (Rake et al., 1999), comparative social policy (Hansen, 2002), tax policy (Pechman & Engelhardt, 1990), pensions policy (Johnson, 1998) or cross-border analysis (Burlacu & O’Donoghue, 2014; Burlacu et al., 2014). In all of the health care or agriculture focused models, bespoke models were used.
11. As outlined in Burlacu et al. (2014).
2.3.2. Unit of analysis
As in the case of most microsimulation models, the unit of analysis is a key decision for a model builder. Across the hypothetical microsimulation models considered by Burlacu et al. (2014), the choice for person-level models varies between single individuals and multi-individual units. Other units of analysis include the firm or the farm. Table 2.3 outlines the share of studies that utilized different units of analysis. Nearly half of all studies considered utilized a multi-individual unit of analysis. This is because the primary focus of these studies was on tax-benefit policy, whether in single-country studies (O’Donoghue, 2002a, 2002b) or multicountry studies (Berger et al., 2001; Immervoll et al., 2004; Martin, 1996).
2.3.3. Period of analysis

The next methodological choice is the period of analysis. This refers to the time period over which an analysis applies. In Table 2.4, we divide the studies into two types: the first focuses on the current period, which can relate to the current week, month or year; the second focuses on longer periods such as the life-cycle or the work history. About three quarters of analyses focus on the current period. These are typical of studies that explore the effect of tax-benefit policies (Evans, 1996a, 1996b, 1996c) or model work incentives (Immervoll et al., 2004; Martin, 1996). Policy instruments that require longer-term information, such as pensions
Table 2.3. Unit of analysis

Unit of analysis                 Number of studies   Percentage of the total number of studies
Individual                              35                          30.4
Household/family/benefit unit           55                          47.8
Company                                  8                           7.0
Farm                                    17                          14.8

Source: Burlacu et al. (2014).

Table 2.4. Period of analysis

Period of analysis               Number of studies   Share of studies
Current period                          94                          75.2
Life-cycle                              31                          24.8

Source: Burlacu et al. (2014).
(Cannon & Tonks, 2004, 2009), or that look at returns in terms of health improvement over the life-cycle (Beukelman et al., 2010; Schousboe et al., 2007), utilize a life-cycle period of analysis.
2.3.4. Unit of variation

While many hypothetical models report a single measure in relation to a specific policy (Bradshaw & Finch, 2002) or undertake an analysis for a particular family type (OECD, 1995), some models vary the characteristics of the unit of analysis to assess the impact on the measure of concern; this is the unit of variation. The marginal effective tax rate is the simplest example, where there is a marginal change to a specific income type, as is a replacement rate calculation. A variant of this type of analysis is the budget constraint diagram, where disposable income is calculated as some dimension, such as hours worked (O’Donoghue, 2002a, 2002b) or the wage rate (Burlacu & O’Donoghue, 2014), varies. Alternatively, a family characteristic such as the presence of children (O’Donoghue, 2002a, 2002b) or marital status (Dickert, Houser, & Scholz, 1995) is varied, so that one can model the marginal impact of having a child on tax liabilities and benefit entitlements. These types of analysis plot how disposable income varies with gross income and thus can be used to highlight the redistributive nature of public policy for particular family types. Another dimension of variation is used in comparative research, whether across countries (Bradshaw & Finch, 2002; OECD, 1995; Thorne & Fingleton, 2006) or inter-temporal (O’Donoghue, 2002a, 2002b; Redmond, 1999). About 37% of the papers surveyed by Burlacu et al. (2014) compared across countries, while 22% compared over time.

2.3.5. Analytical measure

Fundamental to the analytical choice is the specific analytical measure used. In the simplest of models, the level of a policy (Bradshaw & Finch, 2002) or the relevant income concept is used (Phillips & Toohey, 2013).
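The budget constraint and marginal effective tax rate calculations described above can be sketched as follows; every policy parameter here (wage, tax rate, benefit level, withdrawal rate) is invented for exposition, and the flat benefit withdrawal deliberately produces the kind of poverty trap such analyses are designed to reveal:

```python
# Illustrative budget constraint for a hypothetical single earner:
# disposable income as hours worked vary, under invented policy rules.

WAGE = 15.0         # hourly wage (invented)
TAX_RATE = 0.30     # flat income tax (invented)
BENEFIT = 120.0     # weekly out-of-work benefit (invented)
WITHDRAWAL = 0.70   # benefit withdrawn at 70 cents per unit earned (invented)

def disposable(hours):
    """Net earnings plus any remaining means-tested benefit."""
    gross = WAGE * hours
    benefit = max(0.0, BENEFIT - WITHDRAWAL * gross)
    return gross * (1.0 - TAX_RATE) + benefit

def metr(hours, delta=1.0):
    """Marginal effective tax rate: share of an extra unit of gross
    earnings lost to taxes and benefit withdrawal."""
    d_gross = WAGE * delta
    d_net = disposable(hours + delta) - disposable(hours)
    return 1.0 - d_net / d_gross

# Vary the unit of variation (hours) and trace the budget constraint:
for h in (0, 10, 20, 40):
    print(h, round(disposable(h), 2), round(metr(h), 2))
```

Under these invented parameters, disposable income is flat at low hours (a 100% marginal effective tax rate while the benefit is withdrawn) and only starts rising once the benefit is exhausted, illustrating how such plots expose poverty traps.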
A variant of this type of analysis is where a single dimension is varied, as in the case of a budget constraint diagram where disposable income is calculated as a characteristic such as hours worked varies (Berger et al., 2001) or as the wage rate changes (Burlacu & O’Donoghue, 2014). These types of analysis plot how disposable income varies with gross income and thus can be used to highlight the redistributive nature of public policy for particular family types. Once one knows how disposable income varies with gross income it is relatively straightforward to calculate marginal effective tax rates, which can provide information in relation to incentives such as poverty traps (Carone et al., 2004). Marginal effective tax rates can be calculated for
other units of analysis, such as firms, in terms of the incentives associated with different types of investment (Gordon & Tchilinguirian, 1998). A parallel type of analysis is to consider the incentives associated with entering or leaving work, known as the replacement rate (Immervoll & O’Donoghue, 2002), which measures the ratio of out-of-work to in-work income. Sometimes gross replacement rates (OECD, 1995), which provide the ratio of unemployment benefits to work income, are used in international comparisons. However, increasingly, given the importance of income taxation, social insurance contributions and other benefits both in work and out of work, net replacement rates are used in international comparisons (Martin, 1996). Amongst pension age models, Keenay and Whitehouse (2003) compare the average and marginal tax rate for older workers across OECD countries, paralleling the working-age type of analysis. ‘Money’s worth’ type analyses (Cannon & Tonks, 2004, 2009; James & Villas, 2000; Thorburn et al., 2007) utilize a variety of different methods, such as the benefit-to-tax ratio, the internal rate of return (Burlacu & O’Donoghue, 2014) and the net present value (Geanakoplos, Mitchell, & Zeldes, 1998), to model the rate of return to pension contributions. The medical and health economic analyses mostly measure the cost-effectiveness of different medical interventions, as well as the impact of alternative treatment interventions on Quality Adjusted Life Years (Beukelman et al., 2010; Hiligsmann et al., 2012; Schousboe et al., 2007). Within the field of agriculture, the primary focus is on quantifying the cost associated with a given level of production, the relative profit margin, the total cost and the opportunity costs of different production systems (Thorne & Fingleton, 2006).
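The replacement rate and ‘money’s worth’ measures discussed above can be sketched in a few lines. In the example below, the income levels, contribution and benefit streams, and discount rate are all invented purely for illustration; a real analysis would use actual policy rules and worker histories.

```python
# Sketch of two incentive measures for a hypothetical worker:
# a net replacement rate, and a "money's worth" benefit-to-tax ratio
# computed in present-value terms. All figures are invented.

def net_replacement_rate(net_out_of_work, net_in_work):
    """Ratio of out-of-work to in-work disposable income."""
    return net_out_of_work / net_in_work

def present_value(flows, rate):
    """Discounted value of a stream of annual cash flows starting at t = 0."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Hypothetical worker: contributes 2,000 a year for 40 years,
# then draws a pension of 6,000 a year for 20 years.
contribs = [2000.0] * 40
benefits = [0.0] * 40 + [6000.0] * 20
rate = 0.03  # assumed real discount rate

pv_contribs = present_value(contribs, rate)
pv_benefits = present_value(benefits, rate)
moneys_worth = pv_benefits / pv_contribs  # benefit-to-tax ratio in PV terms

print(round(net_replacement_rate(900.0, 1500.0), 2))  # 0.6
print(round(moneys_worth, 2))
```

A money’s worth ratio below one indicates that, at the assumed discount rate, the hypothetical worker’s contributions exceed the present value of the benefits received; an internal rate of return calculation would instead solve for the rate at which the two streams balance.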
2.3.6. Methods to update the underlying attributes

As one of the main features of hypothetical models is to simulate changes, the methods underlying the numeric changes also form a core part of the decisions one needs to make in developing models with synthetic data. Similar to a full-fledged microsimulation model, the data update mechanism underlying the process can share many of the methodological challenges of a static model (see Chapter 3) or a dynamic model (see Chapter 8). Based on how the new values are generated, a hypothetical model can use simple arithmetical calculations, probabilistic transitions and/or structural behavioural models. A simple hypothetical tax-benefit model may only need to update variables related to income and tax, which is often done through an arithmetical tax calculator. This type of model is often used for analyses where the policy consequence is clearly understood and behaviours do not form the core part of the analysis. Compared with population dynamic microsimulation models, hypothetical models
tend to have a lean design in both the data set and the underlying numeric models. However, some complex hypothetical models may use methodologies commonly adopted in population models, where transitions over time and behavioural responses are modelled. This is often the case when hypothetical models are derived from a population model.
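The simple arithmetical update described in this section can be sketched as follows: a hypothetical family record is uprated by an assumed growth index and its tax liability recomputed by arithmetic alone, with no behavioural response. All figures (incomes, the allowance, the tax rate and the index) are invented for illustration.

```python
# Minimal sketch of an arithmetical attribute update for a hypothetical
# family record. All parameters are invented for illustration only.

family = {"wage_income": 30000.0, "benefit_income": 2000.0}
uprating_index = 1.04  # assumed nominal wage growth, data year -> policy year

def update_attributes(record, index):
    """Return a copy of the record with market income uprated."""
    updated = dict(record)
    updated["wage_income"] = record["wage_income"] * index
    return updated

def income_tax(record, allowance=10000.0, tax_rate=0.25):
    """Arithmetical tax calculator: a flat rate above a personal allowance."""
    taxable = max(0.0, record["wage_income"] - allowance)
    return tax_rate * taxable

updated = update_attributes(family, uprating_index)
print(round(income_tax(family), 2))   # liability on the base-year record
print(round(income_tax(updated), 2))  # liability after uprating
```

Because the update is purely arithmetical, the model answers only the mechanical question of how the liability changes with the uprated income; any labour supply or other behavioural response would require the structural models discussed above.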
2.3.7. Limitations

It is worthwhile to note that the use of hypothetical models has some theoretical drawbacks as well. First, the approach does not cover a wide heterogeneity of cases, failing to take into account the wide variation of family circumstances, or differences in details (e.g. source of income, existence of tax relief on expenditure, etc.) which might have a significant effect in some situations (O’Donoghue & Sutherland, 1999). Second, this approach estimates one single point at a time (Berger et al., 2001). Similar to the study of Berger et al. (2001), we argue that the short-term perspective may limit the scope of some types of analysis. The interaction between two tax-benefit systems happens at one point in time, and national rules do not change often. An evaluation of this relationship in the long run would be useful, but extremely complex, given the differences and multidimensional problems that already exist. Third, this is a static type of analysis, which usually does not incorporate behavioural responses when evaluating the impact of tax-benefit systems and reforms (Berger et al., 2001).

2.4. Summary and future directions

In this chapter we have highlighted the wide variety of uses, users, analytical choices and methodological choices associated with the development of hypothetical microsimulation models. Their use is quite broad, spanning tax-benefit, pension, medical, transport and agricultural analyses. Their chief advantage is their simplicity, helping users to understand complexity while abstracting from population-related complexity, which makes them quite useful communication devices and methods for understanding inter-country differences. Their main limitation, however, is also their simplicity, with typical families and other units of analysis not in fact being ‘typical’, accounting for a relatively small share of variability in the population.
In terms of future directions for research, we ignore the possibilities of adding greater complexity, given that simplicity is their main strength. However, development opportunities arise in extending the methodology to other policy areas. There are relatively few analyses undertaken in the context of education policy, and there are further potential opportunities within
housing or health policy. There are also opportunities to incorporate behavioural impacts or probabilities within hypothetical calculations. There are many opportunities to improve their usefulness in terms of communication. It is now feasible for most rules-based policies to have a hypothetical policy calculator which, if implemented on the web, allows individual citizens to estimate their own entitlement or liability. Such calculators can also be used more interactively by media across different platforms, adding value relative to the typical family calculations presented in newspapers or on television.
References

Akushevich, I., Kravchenko, J. S., & Manton, K. G. (2007). Health-based population forecasting: Effects of smoking on mortality and fertility. Risk Analysis, 27(2), 467–482.
Andrews, J. R., Lawn, S. D., Rusu, C., Wood, R., Noubary, F., Bender, M. A., … Walensky, R. P. (2012). The cost-effectiveness of routine tuberculosis screening with Xpert MTB/RIF prior to initiation of antiretroviral therapy in South Africa: A model-based analysis. AIDS (London, England), 26(8), 987.
Atkinson, A. B., King, M. A., & Sutherland, H. (1983). The analysis of personal taxation and social security. National Institute Economic Review, 106(1), 63–74.
Atkinson, A. B., & Sutherland, H. (1983). Hypothetical families in the DHSS tax/benefit model and families in the family expenditure survey 1980. London School of Economics and Political Science, London.
Bakija, J. (2009). Documentation for a comprehensive historical U.S. federal and state income tax calculator program. USA: Heritage Foundation.
Berger, F., Borsenberger, M., Immervoll, H., Lumen, J., Scholtus, B., & De Vos, K. (2001). The impact of tax-benefit systems on low income households in the Benelux countries. A simulation approach using synthetic datasets. EUROMOD Working Paper Series No. EM3/01.
Beukelman, T., Saag, K. G., Curtis, J. R., Kilgore, M. L., & Pisu, M. (2010). Cost-effectiveness of multifaceted evidence implementation programs for the prevention of glucocorticoid-induced osteoporosis. Osteoporosis International, 21(9), 1573–1584.
Birkin, M., & Clarke, G. (2012). The enhancement of spatial microsimulation models using geodemographics. Annals of Regional Science, 49, 515–532.
Bonin, H., Eichhorst, W., Florman, C., Hansen, M. O., Sköld, L., Stuhler, J. L., … Zimmermann, K. F. (2008). Geographic mobility in the European Union: Optimising its economic and social benefits. Report No. 19. Institute for the Study of Labour (IZA).
Bradshaw, J., & Finch, N. (2002). A comparison of child benefit packages in 22 countries. Department for Work and Pensions, Research Report No. 174.
Burlacu, I., & O’Donoghue, C. (2014). The impact of differential social security systems and taxation on the welfare of frontier workers in the EU. Social Europe, European Commission, Journal on Free Movement of Workers, 7, 27–40.
Burlacu, I., O’Donoghue, C., & Sologon, D. (2014b). An evaluation of the differential impact of tax-benefit policy on the welfare of frontier workers: The case of Luxembourg and Belgium. In the PhD thesis of I. Burlacu. Maastricht: Boeken Plan (to be published late 2014).
Buti, M., Sestito, P., & Wijkander, H. (Eds.). (2001). Taxation, welfare, and the crisis of unemployment in Europe. Northampton: Edward Elgar Publishing.
Callan, T., Nolan, B., & O’Donoghue, C. (1996). What has happened to replacement rates? Economic and Social Review, 27(5), 439–456.
Callan, T., O’Donoghue, C., & O’Neill, C. (1994). Analysis of basic income schemes for Ireland. ESRI Policy Research Series Paper No. 21, Dublin.
Callan, T., O’Neill, C., & O’Donoghue, C. (1995). Supplementing family income. ESRI Policy Research Series Paper No. 23, Dublin.
Cannon, E., & Tonks, I. (2004). UK annuity rates, money’s worth and pension replacement ratios 1957–2002. Geneva Papers on Risk and Insurance. Issues and Practice, 371–393.
Cannon, E., & Tonks, I. (2009). Money’s worth of pension annuities. Department for Work and Pensions, Research Report No. 563.
Cantillon, B., Van Mechelen, N., Marx, I., & Van den Bosch, K. (2004). The evolution of minimum income protection in 15 European countries, 1992–2001. Antwerp: Centre for Social Policy Herman Deleeck.
Carone, G., Immervoll, H., Paturot, D., & Salomäki, A. (2004). Indicators of unemployment and low-wage traps (marginal effective tax rates on employment incomes). DELSA/ELSA/WD/SEM(2004)3.
Carone, G., & Salomäki, A. (2004). Reforms in tax-benefit systems in order to increase employment incentives in the EU. Labor and Demography, EconWPA.
Collins, K. A. (2008). The ‘taxing’ issue of interprovincial and cross-border migration. Canadian Public Policy / Analyse de Politiques, XXXIV(4), 481–499.
Conwy County Council. (2014). Online benefit calculator: Calculate your entitlement for housing benefit and council tax reduction. Retrieved from http://www.conwy.gov.uk/doc.asp?cat=10049&doc=31356. Accessed on February 28, 2014.
Cragin, L., Pan, F., Peng, S., Zenilman, J. M., Green, J., Doucet, C., … De Lissovoy, G. (2012). Cost-effectiveness of a fourth-generation combination immunoassay for human immunodeficiency
virus (HIV) antibody and p24 antigen for the detection of HIV infections in the United States. HIV Clinical Trials, 13(1), 11–22.
Creedy, J., & Scutella, R. (2004). The role of the unit of analysis in tax policy reform evaluations of inequality and social welfare. Australian Journal of Labour Economics, 7(1), 89–108.
Dekkers, G. (2010). On the impact of indexation and demographic ageing on inequality among pensioners. European Workshop on Dynamic Microsimulation Modelling.
Dickert, S., Houser, S., & Scholz, J. K. (1995). The earned income tax credit and transfer programs: A study of labour market and program participation. In Tax policy and the economy (Vol. 9, pp. 1–50). MA: MIT Press.
Drescher, C. W., Hawley, S., Thorpe, J. D., Marticke, S., McIntosh, M., Gambhir, S. S., & Urban, N. (2012). Impact of screening test performance and cost on mortality reduction and cost-effectiveness of multimodal ovarian cancer screening. Cancer Prevention Research.
Doucha, T., & Vaněk, D. (2006). Interactions between agricultural policy and multifunctionality in Czech agriculture. In D. Diakossavas (Ed.), Coherence of agricultural and rural development policies. Paris: OECD.
Edwards, K. L., & Clarke, G. P. (2009). The design and validation of a spatial microsimulation model of obesogenic environments for children in Leeds, UK: SimObesity. Social Science & Medicine, 69(7), 1127–1134.
Evans, M. (1996a). Means-testing the unemployed in Britain, France and Germany. Welfare State Programme, Suntory and Toyota International Centres for Economics and Related Disciplines, WSP 117.
Evans, M. (1996b). Housing benefit problems and dilemmas: What can we learn from France and Germany? Welfare State Programme, Suntory and Toyota International Centres for Economics and Related Disciplines, WSP 119.
Evans, M. (1996c). Families on the dole in Britain, France and Germany. Welfare State Programme, Suntory and Toyota International Centres for Economics and Related Disciplines, WSP 118.
Evans, M., & Lewis, W. (1999). A generation of change, a lifetime of differences? Model lifetime analysis of changes in the British welfare state since 1979. Oxford: Policy Press.
Evans, M., & Williams, L. (2009). A generation of change, a lifetime of difference? Model lifetime analysis of changes in the British welfare state since 1979. ESRC award number RES-000-27-0180-A.
Geanakoplos, G., Mitchell, S. O., & Zeldes, S. P. (1998). Social security money’s worth. NBER Working Paper Series No. 6722.
Gordon, K., & Tchilinguirian, H. (1998). Marginal effective tax rates on physical, human and R&D capital. OECD Economics Department Working Paper No. 199. OECD Publishing.
Hansen, H. (2002). Elements of social security. Copenhagen: The Danish National Institute of Social Research.
Harding, A. (1993). Lifetime income distribution and redistribution: Applications of a microsimulation model. Contributions to Economic Analysis (Vol. 221). Amsterdam: North Holland.
Harding, A., Payne, A., Vu, Q. N., & Percival, R. (2006). Interactions between wages and the tax-transfer system. National Centre for Social and Economic Modelling. Report commissioned by the Australian Fair Pay Commission, No. 6/06.
Hemme, T., Deblitz, C., Isermeyer, F., Knutson, R., & Anderson, D. (2000). The International Farm Comparison Network (IFCN): Objectives, organisation and first results on international competitiveness of dairy production. Züchtungskunde, 72(6), 428–439.
Hermes, K., & Poulsen, M. (2012). A review of current methods to generate synthetic spatial micro data using reweighting and future directions. Computers, Environment and Urban Systems, 36(4), 281–290.
Hiligsmann, M., McGowan, B., Bennett, K., Barry, M., & Reginster, J. Y. (2012). The clinical and economic burden of poor adherence and persistence with osteoporosis medications in Ireland. Value in Health, 15(5), 604–612.
Hollander, Y., & Liu, R. (2008). Estimation of the distribution of travel times by repeated simulation. Transportation Research Part C: Emerging Technologies, 16(2), 212–231.
Immervoll, H., & Barber, D. (2006). Can parents afford to work? Childcare costs, tax-benefit policies and work incentives. IZA Discussion Paper No. 1932.
Immervoll, H., Marianna, P., & d’Ercole, M. M. (2004). Benefit coverage rates and household typologies: Scope and limitations of tax-benefit indicators (No. 20). Paris: OECD Publishing.
Immervoll, H., & O’Donoghue, C. (2002). Welfare benefits and work incentives: An analysis of the distribution of net replacement rates in Europe using EUROMOD, a multi-country microsimulation model. EUROMOD Working Paper No. EM4/01.
International Organization for Migration. (2011). World migration report: Communicating effectively about migration. Geneva: Imprimerie Courand et Associés.
Irish Department of Finance. (2014). How does the Budget affect me? Retrieved from http://budget.gov.ie/Budgets/2014/Documents/Fairness%20infographic.pdf. Accessed on February 28, 2014.
James, E., & Villas, D. (2000). Annuity markets in comparative perspective: Do consumers get their money’s worth? Policy Research Working Paper No. 2493 (World Bank).
Johnson, P. (1998). The measurement of social security convergence: The case of European public pension systems since 1950. Florence: European University Institute, Robert Schuman Centre.
Keenay, G., & Whitehouse, E. (2002). The role of the personal tax system in old-age support: A survey of 15 countries. Discussion Paper 02/07, Centre for Pensions and Superannuation.
Manjunatha, A. V., Shruthy, M. G., & Ramachandra, V. A. (2013). Global marketing systems in the dairy sector: A comparison of selected countries. Indian Journal of Marketing, 43(10), 5–15.
Martin, J. P. (1996). Measures of replacement rates for the purpose of international comparisons: A note. OECD Economic Studies, 26(1), 99–115.
McMahon, P. M., Kong, C. Y., Johnson, B. E., Weinstein, M. C., Weeks, J. C., Tramontano, A. C., … Gazelle, G. S. (2012). The MGH-HMS lung cancer policy model: Tobacco control versus screening. Risk Analysis, 32(s1), S117–S124.
Mitton, L., Sutherland, H., & Weeks, M. (2000). Microsimulation modelling for policy analysis: Challenges and innovations. Cambridge: Cambridge University Press.
Morrissey, K., Hynes, S., Clarke, G., Ballas, D., & O’Donoghue, C. (2008). Analysing access to GP services in rural Ireland using micro-level analysis. Area, 40(3), 354–364.
Nichols, O., Clingman, M., Burkhalter, K., Wade, A., & Chaplain, C. (2009). Internal real rates of return under the OASDI program for hypothetical workers. Actuarial Note No. 2008.5. Social Security Administration, Maryland.
O’Donoghue, C. (2002a). Redistribution over the lifetime in the Irish tax-benefit system: An application of a prototype dynamic microsimulation model for Ireland. Economic and Social Review, 32(3), 191–216.
O’Donoghue, C. (2002b). Redistributive forces of the Irish tax-benefit system. Journal of the Statistical and Social Inquiry Society of Ireland, XXXII, 33–69.
O’Donoghue, C. (2012). Spatial microsimulation for rural policy analysis. Springer-Verlag Berlin and Heidelberg GmbH & Co. KG.
O’Donoghue, C., & Sutherland, H. (1999). Accounting for the family in European income tax systems. Cambridge Journal of Economics, 23, 565–598.
OECD. (1995). Employment outlook. Paris: Organisation for Economic Cooperation and Development.
OECD. (1996). Taxation, employment and unemployment: The OECD jobs study. Paris: Organisation for Economic Cooperation and Development.
Pearson, M., & Scarpetta, S. (2000). An overview: What do we know about policies to make work pay? OECD Economic Studies, 31(1), 12–24.
Pechman, J. A., & Engelhardt, G. V. (1990). The income tax treatment of the family: An international perspective. National Tax Journal, 43(1), 1–22.
Phillips, B., & Toohey, M. (2013). Working Australia: What the government gives and takes away. NATSEM Research Note R13/1.
Prochorowicz, J., & Rusielik, R. (2007). Relative efficiency of oilseed crops production in the selected farms in Europe and the world in 2005. Acta Scientiarum Polonorum, 6(4), 57–62.
Rake, J., Falkingham, R., & Evans, M. (1999). Tightropes and tripwires: New Labour’s proposals and means-testing in old age. Case Paper 23. London: STICERD.
Redmond, G. (1999). Tax-benefit policies and parents’ incentives to work: The case of Australia 1980–1997. Discussion Paper 00104, University of New South Wales, Social Policy Research Centre.
Reinke, D., Dowling, R., Hranac, R., & Alexiadis, V. (2004). Development of a high-level algorithm verification and validation procedure for traffic microsimulation models. Transportation Research Record: Journal of the Transportation Research Board, 1876(1), 151–158.
Schousboe, J. T., Taylor, B. C., Fink, H. A., Kane, R. L., Cummings, S. R., Orwoll, E. S., … Ensrud, K. E. (2007). Cost-effectiveness of bone densitometry followed by treatment of osteoporosis in older men. JAMA, 298(6), 629–637.
Shetty, A. (2012). Income tax on property: A ready reckoner. The Indian Express.
Spielauer, M. (2006). The lifecourse model, a competing risk cohort microsimulation model: Source code and basic concepts of the generic microsimulation programming language Modgen. MPIDR Working Paper No. 2006-046.
Starsky, S. (2006). Tax freedom day: A cause for celebration or consternation? Canadian Parliamentary Information and Research Service, Economics Division.
Sutherland, H., Figari, F., Lelkes, O., Levy, H., Lietz, C., Mantovani, D., & Paulus, A. (2008). Improving the capacity and usability of EUROMOD. EUROMOD Working Paper No. EM4/08.
Thorburn, C., Rocha, R., & Morales, M. (2007, November). An analysis of money’s worth ratios in Chile. Journal of Pension Economics and Finance, 6(3), 287–312.
Thorne, F. S., & Fingleton, W. (2006). Examining the relative competitiveness of milk production: An Irish case study (1996–2004). Journal of International Farm Management, 3(4), 49–61.
VanLandegen, L. D., & Chen, X. (2012). Microsimulation of large-scale evacuations utilizing Metrorail transit. Applied Geography, 32(2), 787–797.
Wahba, M., & Shalaby, A. (2005). Multiagent learning-based approach to transit assignment problem: A prototype. Transportation Research Record, 1926, 96–105.
Wittenburg, D. C., Stapleton, D. C., & Scrivner, S. B. (2000). How raising the age of eligibility for social security and medicare might affect the disability insurance and medicare programs. Social Security Bulletin, 63(4), 17–26.
Zander, K., Thobe, P., & Nieberg, H. (2007). Economic impacts of the adoption of the common agricultural policy on typical organic farms in selected new member states. Jahrbuch der Österreichischen Gesellschaft für Agrarökonomie, 16, 85–96.
CHAPTER 3
Static Models
Jinjing Li, Cathal O’Donoghue, Jason Loughrey and Ann Harding
3.1. Introduction

Social science microsimulation models are usually categorised as ‘static’ or ‘dynamic’. Static models take individual characteristics and behaviours as exogenous. These models are commonly used to evaluate the immediate distributional impact upon individuals/households of possible policy changes, for example EUROMOD (Mantovani, Papadopoulos, Sutherland, & Tsakloglou, 2007). Static models are thus commonly referred to as models that estimate the day-after impact of a policy reform, ignoring the behavioural response to policy. The literature describing the design of static microsimulation models is large and has existed for a relatively long time (Citro & Hanushek, 1991a, 1991b; Hoschka, 1986; Merz, 1991; Sutherland, 1995). Dynamic models, on the other hand, for example DESTINIE, PENSIM and SESIM (Bardaji, Sédillot, & Walraet, 2003; Curry, 1996; Flood, 2007), estimate the evolution of characteristics for individual units within a population based on endogenous factors within the models (O’Donoghue, 2001). In contrast, static models are those that exclude a temporal dimension and a behavioural response component. In essence, most dynamic models could potentially be regarded as static in this respect, as their behaviours rest on statistical or reduced-form models rather than models that explicitly incorporate behavioural response (Li & O’Donoghue, 2013). For the purposes of this chapter, we shall therefore define static models as those that neither incorporate time nor behaviour. Across the world, static microsimulation models are more widely used than dynamic microsimulation models. Static models have clear advantages and disadvantages. Compared with dynamic models, static
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293002
© 2014 BY EMERALD GROUP PUBLISHING LIMITED ALL RIGHTS RESERVED
models focus mainly on the day-after effect and provide a range of distributional analyses. This is highly sought after in government departments worldwide. While a static model lacks behavioural refinement in its results, it avoids the added complexity of continuous population change and policy settings that accompany the time dimension. Instead, it focuses on the internal complexity of population and policies. Additionally, static models generally have a limited capacity for simulating social benefits that depend on employment histories, as static microsimulation models are mostly based on cross-sectional data. Contribution-based private pensions, for instance, are difficult to simulate due to the lack of data. While static models can sometimes be perceived as the ‘simpler’ microsimulation model, the simulation can still be complex, as the modelling process tries to represent the heterogeneous patterns of interacting population structures and policy complexity. Compared with dynamic models, static models tend to have the advantage of being relatively straightforward to develop and maintain. Even without the time dimension, static models can capture patterns that are better understood than the complex dynamic evolution of both population and policies. With some assumptions, the insights from a static microsimulation model can also be useful for projections. In addition, the relatively simple structure often reduces the time demands of running the model. Sutherland (1995) reviewed the static microsimulation models developed in Europe at that time. Microsimulation models have since become more popular, especially in the field of tax and benefit analyses. A large number of new models have been developed and EUROMOD has expanded to 27 countries (Sutherland & Figari, 2013). A collection of methods is applied during the development of static microsimulation models.
For instance, there is a frequent need to adjust the distributional characteristics of a household income survey data set. This is particularly useful in tax-benefit microsimulation modelling, where the data required for analysis is always historic (Callan, Keane, Walsh, & Lane, 2010; Immervoll, Levy, Nogueira, et al., 2006; Immervoll, Lindström, Mustonen, Riihelä, & Viitamäki, 2005; Sutherland, 2001). This is also the case where one wishes to make projections of the income distribution from one point to another (see Brewer et al., 2012). This may also be required where one is assessing the impact of macro-economic shocks or policy changes on the distribution of income. A number of technical terms are used to describe this process and are sometimes used in different ways, for instance uprating, reweighting, etc. Uprating typically refers to issues associated with indexing market income for wage or price growth. Reweighting or static ageing may apply to changing weights to account for a changed population structure, while dynamic ageing refers to simulating changes to the population structure (O’Donoghue & Loughrey, 2014).
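A minimal one-dimensional reweighting (static ageing) step of the kind just described can be sketched as follows: survey weights are scaled so that weighted counts in each age group match external population control totals. The records, weights and targets below are invented for illustration; matching several margins simultaneously would call for iterative proportional fitting or a similar calibration method.

```python
# One-dimensional reweighting sketch: scale survey weights so that weighted
# group totals match external control totals. All figures are invented.

records = [
    {"age_group": "young", "weight": 100.0},
    {"age_group": "young", "weight": 120.0},
    {"age_group": "old",   "weight": 80.0},
    {"age_group": "old",   "weight": 60.0},
]
# Projected population totals for the target year (external control totals).
targets = {"young": 200.0, "old": 210.0}

def reweight(recs, targets):
    """Return copies of the records with weights scaled group by group."""
    current = {}
    for r in recs:
        current[r["age_group"]] = current.get(r["age_group"], 0.0) + r["weight"]
    factors = {g: targets[g] / current[g] for g in targets}
    return [dict(r, weight=r["weight"] * factors[r["age_group"]]) for r in recs]

new = reweight(records, targets)
print([round(r["weight"], 1) for r in new])
```

After reweighting, the weighted size of each age group equals its projected total, so an unchanged cross-sectional data set can stand in for a future population structure without simulating individual transitions.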
This chapter aims to discuss the typical uses and common issues in the field of static microsimulation and surveys current practices from a number of leading static microsimulation models. The chapter is structured as follows: the next section focuses on the uses of static microsimulation, Section 3.3 provides detailed discussions of various methodological issues among leading static models, and Section 3.4 concludes.
3.2. The use of static microsimulation models

In discussing the use of static microsimulation models, we differentiate between the policy use area, the geographical scope of these models and the analytical scope.

3.2.1. Policy scope

Static microsimulation models in social science traditionally focus on modelling tax-benefit policy instruments based upon current income and characteristics (Sutherland, 1995). These instruments include social insurance contributions, income taxation, family benefits, social assistance benefits and, less frequently, unemployment benefits or housing benefits. Instruments that depend upon historical information such as social insurance pensions are typically not modelled, albeit technically possible in a static microsimulation model when historical data is available, as in the case of administrative data. Consumption tax-based models are covered in another chapter in this handbook. While many of these models incorporate behaviour, some models are static in nature, such as Decoster, Loughrey, O’Donoghue and Verwerft (2010, 2011) and O’Donoghue (2004), where indirect taxes were modelled in EUROMOD, and Casler and Rafiqui (1993) for environmental taxes. Another field with a rapid adoption of microsimulation methods is public health, where the analysis of disease progression can now be linked with individual social and economic profiles due to the increasing availability of data. While many health-focused microsimulation models are dynamic (see Li & O’Donoghue, 2013), some models are developed on a static framework with the potential capacity for projection through static ageing. An example of this is the Type 2 Diabetes simulation model in Australia (Thurecht, Brown, & Yap, 2011). Static health microsimulation models can, for example, offer policy-makers the opportunity to assess the contribution of health status towards estimates of future work participation, for example Schofield et al.
(2011), who have examined the relationship between population ageing, trends in health status and future work participation. The authors clarify that the results can be sensitive to the assumptions made with respect to the trends in health status, but that important policy implications can be indicated.
With the ageing of the population in many developed countries, it is likely that social benefits and services will play an ever-more important role in public policy. Brown (2011) concludes that the recent development of health care microsimulation models in part ‘reflects the urgent need by governments worldwide to have effective policy tools to manage the rising prevalence of chronic long term illness in their populations and the escalation in health expenditures’. Other health-focused models have modelled health-related policies such as pharmaceutical-related benefits (Brown, Abello, Phillips, & Harding, 2004; Harding, Abello, Brown, & Phillips, 2004; Walker, Fischer, & Percival, 1998) or medical insurance (Xiong & Ma, 2007; Xiong, Weidong, & Hong, 2011). However, more detail is presented in relation to these models in another chapter in this handbook.

3.2.2. Geographic scope

In North America, the TRIM (version 3) model is one of the oldest static microsimulation models, having been first developed in 1972 (Giannarelli, 1992). It has been actively used since then to examine tax and spending policy in the United States (Giannarelli, Morton, & Wheaton, 2007). The Canadian government has used static microsimulation models to evaluate the distributional impact of the sales tax reform introduced in 1991 (Gupta, Kapur, & McGirr, 2000). Statistics Canada has used static microsimulation models to analyse income and health distributions (see Morisson, 1990; Will, Berthelot, Nobrega, Flanagan, & Evans, 2001). Légaré and Décarie (2011) have used this data to produce 20-year projections for Canada of the number of elderly in poor health, for those aged 75 and over. In Europe, EUROMOD, a cross-country static tax-benefit model, is used by a number of users, both academics and government analysts, across many European countries. It has now been extended to all EU member states, including the new member states, as the EU expanded over the past decade.
The Danish Ministry of Finance and the Ministry of Economic Affairs have both analysed the distributional outcomes of the Danish tax-benefit system (Hansen, 2000; Pedersen, 2000). Foxman (2000) of the Danish tax office used a static tax-benefit model to evaluate the tax reform act. Ericson and Flood (2012) in Sweden have also used a microsimulation model for tax-benefit analysis. Recently, the European Commission has used EUROMOD results in published reports, including those of DG-EMPL (European Commission, 2012, 2013a) and DG-ECFIN (European Commission, 2013b). Beyond the flourishing of microsimulation models in Western Europe and the United States, a number of Eastern European countries have also built national microsimulation models in recent years: the Czech Republic, Hungary, Estonia and Slovenia have all built their own models for various types of benefit analysis (Lelkes & Sutherland, 2009).
Static Models
The Australian Treasury uses STINMOD, a static microsimulation model, to evaluate the budget impact of potential reforms. The Department of Education in New South Wales, Australia, uses a static microsimulation model to analyse the impact of education policy reform. A number of models have been developed in New Zealand (Broad, 1982; Creedy & Tuckwell, 2004). China and Korea are also starting to develop their own microsimulation models for policy analysis purposes (Na & Hyun, 1993; Sung & Song, 2011; Zhang & Wan, 2008). The use of static microsimulation models outside the OECD was limited until recently. Atkinson and Bourguignon (1990), in evaluating the possibility of developing static tax-benefit microsimulation models outside OECD countries, found that, although often more difficult to implement, simulating tax-benefit systems for these countries should ‘lead to a comprehensive, powerful and yet simple instrument for the design of an efficient redistribution system adapted to the specificity of developing countries’. Focusing on Brazil as a case study, they found that much of the redistribution in the Brazilian system of the 1980s relied on instruments that were less important in OECD countries: indirect taxes, subsidies and the provision of targeted non-cash benefits such as public education and subsidised school meals. The instruments more important in OECD systems, and often the main instruments in tax-benefit models (personal income taxes, social insurance contributions and pensions), were largely confined to the modern sector in Brazil and thus of less importance to policy-makers. Nevertheless, they argued that sufficient data existed at the time to simulate many of the Brazil-specific instruments in addition to the ‘classic’ ones. They stressed, however, that merging data from different data sets may be necessary for this purpose.
As a consequence of advances in the analysis of related data sets (see Deaton, 1997), as well as improvements in the availability of data for less-developed countries, the use of tax-benefit modelling techniques need no longer be limited to countries where such models have long been in use. Over time, many of these obstacles have been eroded with the development of models in Latin America (Immervoll, Levy, Nogueira, O’Donoghue, & de Siqueira, 2008; Levy, Immervoll, Nogueira, O’Donoghue, & de Siqueira, 2010; Urzúa, 2012), Africa (Wilkinson, 2009) and South Asia (Ahmed & O’Donoghue, 2009). As another chapter in this handbook deals with microsimulation modelling in developing countries, we do not discuss these models in greater detail here.

3.2.3. Analytical scope

The final use dimension we consider is the analytical scope of static microsimulation models. By this we mean the type of analysis that
Jinjing Li, Cathal O’Donoghue, Jason Loughrey and Ann Harding
the models are used for. The most basic analysis is to simulate the distributional incidence of particular policies. The main added value of a static microsimulation model is its ability to simulate the specific detail of a particular policy on a cross-section of the population. Modelling the distributional incidence of actual policy is a prerequisite to other analyses.1 Example analyses include the distributional or redistributional impact of tax-transfer systems (Callan, Coleman, & Walsh, 2006; Immervoll, Levy, Nogueira, et al., 2006; Mercader-Prats & Levy, 2004; Paulus et al., 2009; Verbist, 2004, 2006; Wagenhals, 2011), family or household taxes (Bach, Haan, & Ochmann, 2013; O’Donoghue & Sutherland, 1999; Pellegrino, Piacenza, & Turati, 2011), indirect taxes (Decoster et al., 2010; O’Donoghue, 2004), family transfers (Evans, O’Donoghue, & Vizard, 2000; Matsaganis et al., 2006a, 2006b), composite social indicators (Mantovani & Sutherland, 2003) and other non-monetary policies (Paulus & Peichl, 2008). The next stage within the analytical scope is the simulation of the distributional incidence of alternative reforms. These include the analysis of actual reforms, as in the case of tax reform (Benedek & Lelkes, 2008; Fuest, Niehues, & Peichl, 2010; Haan & Steiner, 2005; Palme, 1996; Paulus & Peichl, 2008), or of hypothetical reforms such as a minimum pension (Atkinson, Bourguignon, O’Donoghue, Sutherland, & Utili, 2000, 2002) or potential child benefit reforms (Levy, 2003). In a related fashion, static models are also used to examine the impact of non-policy changes such as economic change (Dolls, Fuest, & Peichl, 2011; Figari, Salvatori, & Sutherland, 2011). The final stage within the analytical scope is to convert the distributional incidence into measures of the potential drivers of incentives to change behaviour (Immervoll, Kleven, Kreiner, & Saez, 2007).
While this stage does not involve the behavioural econometrics described in other chapters, it can be used to make inferences about behaviour (Immervoll, 2002; Jara & Tumino, 2013). Example measures include replacement rates (Callan, Nolan, & O’Donoghue, 1996; Immervoll & O’Donoghue, 2004; O’Donoghue, 2011), marginal effective tax rates (Beer, 1998, 2003; Creedy, Kalb, & Kew, 2003; Dickert, Houser, & Scholz, 1994; Dolls, Fuest, & Peichl, 2012; Harding & Polette, 1995; Scholz, 1996) and the rate of return to education (O’Donoghue, 1999).
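The mechanics behind such measures can be illustrated with a stylised sketch. The flat tax rate, benefit amount and withdrawal rate below are invented for illustration and do not describe any model discussed in this chapter; the point is only that the marginal effective tax rate (METR) and the replacement rate are derived by re-running the disposable-income calculation under perturbed inputs.

```python
def disposable_income(gross, tax_rate=0.2, benefit_max=100.0, withdrawal=0.6):
    """Stylised tax-benefit system (hypothetical rates): a flat income tax
    plus a means-tested benefit withdrawn at 60 cents per unit of gross income."""
    tax = tax_rate * gross
    benefit = max(0.0, benefit_max - withdrawal * gross)
    return gross - tax + benefit

def metr(gross, delta=1.0):
    """Marginal effective tax rate: the share of a small earnings increase
    lost to extra tax and withdrawn benefits."""
    gain = disposable_income(gross + delta) - disposable_income(gross)
    return 1.0 - gain / delta

def replacement_rate(gross_in_work):
    """Out-of-work disposable income as a share of in-work disposable income."""
    return disposable_income(0.0) / disposable_income(gross_in_work)

# A worker earning 150 faces both the 20% tax and the 60% benefit withdrawal:
print(metr(150.0))             # 0.8 while the benefit is being withdrawn
print(replacement_rate(150.0)) # out-of-work income relative to in-work income
```

The overlap of tax and benefit withdrawal is what produces the high METRs that such analyses are designed to expose: here the worker keeps only 20 cents of each extra unit earned.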
1. While pure distributional incidence analysis does not require simulation if the policies are contained in the data, micro-data very often do not contain all policies and so simulation may be necessary.
3.3. Methodological characteristics and choices

This section surveys the practices adopted by some of the leading modelling teams, reporting on a study funded by Eurostat. Table 3.1 provides a summary of the characteristics of selected static microsimulation models. The majority of the models are updated to the present, the exceptions being the German IZA model and the TRIM3 model. For the purposes of this discussion of methodological choices, the non-updating of a model is not a serious limitation. As shown in Table 3.1, the software used for development varies across teams, ranging from general-purpose programming languages (e.g. C++ in the case of TRIM3) to statistical packages (e.g. SAS in the case of STINMOD). The EUROMOD framework is sometimes used to build static models, although it is highly specific to tax-benefit simulations in its current form. In contrast to developments in dynamic microsimulation modelling tools, there seems to have been little effort, other than the EUROMOD platform, to create a general-purpose platform that encapsulates the typical routines used in a static microsimulation model, for example reweighting, rule adjustment, alignment and Monte Carlo simulation.

Table 3.1. Model characteristics

| Country    | Model name                                          | Update the data to the present | Future projections | Software                                                             |
|------------|-----------------------------------------------------|--------------------------------|--------------------|----------------------------------------------------------------------|
| Australia  | STINMOD                                             | Yes                            | Yes                | SAS                                                                  |
| Belgium    | EUROMOD; Mefisto                                    | Yes                            | Yes                | EUROMOD; Stata                                                       |
| Finland    | TUJA                                                | Yes                            | Yes                | APL (A Programming Language)                                         |
| Germany    | IZAΨMOD                                             | No                             | No                 | Stata                                                                |
| Germany    | MIKMOD                                              | Yes                            | Yes                | Java                                                                 |
| Hungary    | TARSZIM                                             | No                             | Yes                | MSQL                                                                 |
| Ireland    | SWITCH                                              | Yes                            | Yes                | C++                                                                  |
| Ireland    | Simulation Model of the Irish Local Economy (SMILE) | Yes                            | Yes                | Stata                                                                |
| Luxembourg | EUROMOD                                             | Yes                            | No                 | C++ & EXCEL                                                          |
| Spain      |                                                     | Yes                            | Yes                | SAS/Stata                                                            |
| Sweden     | FASIT                                               | Yes                            | Yes                | SAS                                                                  |
| UK (Essex) | EUROMOD                                             | Yes                            | Yes                | .NET                                                                 |
| UK (IFS)   | TAXBEN                                              | Yes                            | Yes                | Mostly Delphi, with some Stata components                            |
| USA        | Transfer Income Model, version 3 (TRIM3)            | No, but in progress            | Yes                | C++, with other software used for the model’s databases and interface |

Since microsimulation modelling is a data-driven process, many methodological choices centre on data processing. This chapter reviews the major steps and choices in a typical static microsimulation model, ranging from data set selection to the methods used to adjust the income distribution of the population over time:

• Parameterisation in static microsimulation models
• Baseline data in static microsimulation models
• Indexation and uprating
• Updating of tax-benefit rules
• Reweighting
• Static ageing and projections
• Maintenance and other issues
3.3.1. Parameterisation in static microsimulation models

A static microsimulation framework that aims to accommodate many requirements will necessarily need to be generalised; this relates to the degree to which a model is ‘parameterised’, so that model code can be used for different purposes without re-coding. The ‘parameterisation’ of calculations ensures that their operation remains transparent and adaptable. Generalisation makes a model more flexible, but can also make it more costly and difficult to develop, and potentially less transparent and conceptually and computationally more complex (and, hence, slower) than a similar model built for a narrow set of applications. Immervoll and O’Donoghue (2009) outline the desirable characteristics of tax-benefit microsimulation models. These depend upon the purpose of the analysis, involving requirements such as revenue neutrality, improving work incentives or reducing poverty; the design of tax-benefit models should enable the user to specify a wide range of different policy scenarios and make it easy to switch between them. Using tax-benefit microsimulation models as an example, Immervoll and O’Donoghue (2009) describe the elements of the model which can be generalised, including:

• the data handling component,
• the model ‘manager’, which manages the order of the tax-benefit routines,
• the tax-benefit algorithms,
• the output component.
Elements of the tax-benefit component that should be parameterised include:
• the definition of aggregate income concepts used by an instrument (e.g. ‘taxable income’) or as an output of the model (e.g. ‘disposable income’);
• the definition of the fiscal units relevant for an instrument (e.g. who belongs to a ‘family’ receiving family allowances);
• the definition of sharing rules within the unit (e.g. which family member receives the family allowance; who is responsible for paying a tax).
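This kind of parameterisation can be illustrated with a small sketch in which the definition of taxable income and the tax band schedule are read from a parameter structure rather than hard-coded; all parameter names, income sources and rates here are hypothetical and do not reflect any particular model's format.

```python
# Hypothetical parameter sheet: everything the calculation needs is data,
# so a policy reform becomes a change to this structure, not to the code.
PARAMS = {
    "taxable_income": ["employment", "self_employment"],   # income concept
    "unit": "family",                                      # fiscal unit
    "bands": [(10_000, 0.0), (40_000, 0.2), (float("inf"), 0.4)],
}

def taxable_income(person, params):
    """Aggregate only the income sources named in the parameter sheet."""
    return sum(person.get(src, 0.0) for src in params["taxable_income"])

def tax_due(income, params):
    """Apply a piecewise band schedule read from the parameter sheet."""
    tax, lower = 0.0, 0.0
    for upper, rate in params["bands"]:
        tax += rate * max(0.0, min(income, upper) - lower)
        lower = upper
    return tax

person = {"employment": 30_000.0, "capital": 5_000.0}
base = taxable_income(person, PARAMS)   # capital income excluded: 30,000
print(tax_due(base, PARAMS))            # 0% on the first 10k, 20% on the next 20k = 4,000
```

Changing the income concept (say, adding "capital" to the `taxable_income` list) or the band schedule then requires no re-coding, which is the transparency and adaptability the parameterisation is meant to deliver.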
3.3.2. Baseline data in static microsimulation models

Most static microsimulation models use data from either government sources, such as administrative data sets, or representative surveys, such as the European Union Statistics on Income and Living Conditions (EU-SILC). The choice of data is very important (Ceriani, Fiorio, & Gigliarano, 2013). Table 3.2 lists some of the data sources used by existing static microsimulation models. Administrative records have the advantages of large numbers of observations and complete records on tax-related information (e.g. Flory & Stöwhase, 2012). However, as tax records are often collected only for those who paid tax, there can be a potential selection bias when analysing reforms with potential labour market behaviour consequences. Flory and Stöwhase (2012) conclude that a focus on current taxpayers and the absence of key variables in administrative data can make such models
Table 3.2. Baseline data source

| Country    | Model name                               | Base data source                                                                                                                                                  |
|------------|------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Australia  | STINMOD                                  | Survey of Income and Housing (SIH)                                                                                                                                  |
| Belgium    | EUROMOD; Mefisto                         | EUROMOD: BE-SILC; MEFISTO: BE-SILC; Belgian Household Budget Survey                                                                                                 |
| Finland    | TUJA                                     | Income Distribution Survey (IDS)                                                                                                                                    |
| Germany    | IZAΨMOD                                  | German Socio-Economic Panel (GSOEP)                                                                                                                                 |
| Ireland    | SWITCH                                   | SILC (national)                                                                                                                                                     |
| Luxembourg | EUROMOD                                  | EU-SILC/Spell 3                                                                                                                                                     |
| UK (IFS)   | TAXBEN                                   | Family Resources Survey; Family Expenditure Survey/Expenditure and Food Survey/Living Costs and Food Survey; British Household Panel Survey; Labour Force Survey    |
| USA        | Transfer Income Model, version 3 (TRIM3) | Current Population Survey Annual Social and Economic Supplement (CPS-ASEC); American Community Survey (ACS)                                                         |
unsuitable for the analysis of major tax reforms, particularly where taxation is expanded to ‘previously unaffected persons’. Additionally, the information collected through administrative data is usually not comparable across countries due to differing definitions and tax evasion. Household survey data sets, on the other hand, are usually representative of the population, although the number of observations is much lower and the recording of different income components may not be as detailed or accurate as in administrative data sets, for example the under-reporting of market incomes at the top of the income distribution (Ehling & Rendtel, 2004). Figari, Iacovou, Skew, and Sutherland (2012) have identified other difficulties with household survey data in the case of the EU-SILC, where the data is output-harmonised rather than input-harmonised; they explain that ‘the only requirement is for countries to generate a set of variables to be included in the data set, without specifying the means by which these data are gathered’. In addition, household surveys do not typically include individuals living in institutions. Sometimes indexation is required to correct for differences in the period of analysis. For example, in New Zealand the income reference period depends upon the date of interview, covering the year prior to it. Accordingly, the first task of data preparation is to ‘synchronise’ the data (see Ota & Stott, 2007), adjusting it through indexation to ‘align all of the income and spell information reported by the respondents to a common period’. Indexation to later years can then take place. Another dimension involved in the preparation of data is the general need to have data gross of taxes and benefits.
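Gross incomes are needed because the tax-benefit calculation starts from gross amounts; where a survey records only net incomes, the gross value can be recovered by numerically inverting the net-income function. A minimal sketch under assumed conditions: the stand-in tax schedule below is hypothetical, and bisection is used purely for illustration, since published net-to-gross algorithms are considerably more elaborate.

```python
def net_income(gross):
    """Stand-in for the model's tax-benefit calculation (hypothetical:
    a 20% flat tax on gross income above an allowance of 10,000)."""
    return gross - 0.2 * max(0.0, gross - 10_000.0)

def gross_from_net(net, hi=1e9, tol=1e-6):
    """Invert net_income by bisection: valid because net income is
    monotonically increasing in gross income."""
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if net_income(mid) < net:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

g = gross_from_net(26_000.0)   # a net income of 26,000 implies a gross of 30,000 here
print(round(g))                # 30000
```

This illustrates why the conversion is described as essentially the inverse of what a typical model does: the same tax rules are applied, but solved backwards for the unknown gross amount.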
Unfortunately, many data sets used as input into microsimulation models record incomes on a net basis, requiring the use of a net-to-gross algorithm to convert the data back from net to gross; essentially the inverse of what a typical model does (Betti, Donatiello, & Verma, 2010; Immervoll & O’Donoghue, 2001).

3.3.3. Indexation and updating

Due to the time involved in collecting and constructing the base data set, it is common to see a gap between the time when the data were collected and the current policy year. This means that the income profile could be underestimated, which affects budgetary analyses. To address this issue, indexing the key variables has become common practice. Redmond, Sutherland, and Wilson (1998) describe this method as a process whereby they ‘pick up our sample in the survey year and parachute it into the modelled year with most survey year demographic and economic characteristics intact, but with incomes and expenditures that reflect changes between the survey year and the policy year’. This issue of price and income updating was listed by Sutherland (2001) as being among six technical problems in the formation of a
microsimulation model such as EUROMOD, and it remains an important issue for the future development of such models. Users face two choices in updating or indexing market income data, namely:
• calibrating levels of income to aggregate national accounts information, or
• updating existing levels of income to account for changes (via indices) in external control totals.

O’Donoghue, Sutherland, and Utili (2000) find that calibrating to national accounts in a simplistic manner can distort the distribution of income, because there is often substantial under-estimation of incomes relative to national accounts, particularly amongst capital and self-employment income. Sutherland (2001) also argues that the former approach risks ‘distorting the micro-level updated data and hence the impact of policy’. There is thus a debate (see Atkinson, Rainwater, & Smeeding, 1995) on whether it is appropriate to adjust micro-data to account for income under-estimation. Correcting the distortion is, however, a non-trivial task. Income can be under-estimated because of the non-reporting of income sources and the lack of reliable income data for top income earners. As ‘correcting’ for the income under-statement risks further biasing the income distribution, it may sometimes be more appropriate to update the subcomponents of income rather than the gross income variable. In this case, the choice often depends on the availability of appropriate data to estimate indices of average growth of incomes or expenditures. Aggregate statistics by income source (such as employee earnings, self-employment earnings, capital income, etc.) and demographic category (sex, industry, etc.) can sometimes be used to index each individual income source, which helps the microsimulation model to reproduce a realistic income distribution. Sutherland (2001) provides an overview of the choices made during EUROMOD’s development; a mixture of approaches was used owing to the absence of suitable data. The first version of EUROMOD applied micro-level index numbers according to the type of income.
They highlight, however, that the same index was often used for both farm and non-farm self-employment. Some taxes and benefits are not simulated for the policy simulation year, but rather updated in the same way as market incomes. Additionally, several countries provided separate indices for the employment income of civil servants and other employees, which makes indexation difficult. The authors explained that ‘in many countries benefits are uprated by a price index but this is by no means universal practice’.
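The component-wise approach described above can be sketched as applying a source-specific growth index to each income component and recomputing the total, rather than scaling gross income by a single factor. The index values and source names below are invented for illustration.

```python
# Hypothetical growth indices from the survey year to the policy year,
# one per income source rather than one for total income.
UPRATING = {
    "employment": 1.06,        # average earnings growth
    "self_employment": 1.03,
    "capital": 1.10,
    "benefits": 1.02,          # price indexation of benefits
}

def uprate(record, indices):
    """Return a copy of a household record with each income source scaled
    by its own index; sources without an index are left unchanged."""
    return {src: amount * indices.get(src, 1.0) for src, amount in record.items()}

household = {"employment": 40_000.0, "capital": 2_000.0, "benefits": 5_000.0}
updated = uprate(household, UPRATING)
print(sum(updated.values()))   # component-wise uprated total income
```

Because each source grows at its own rate, the relative shares of earnings, capital income and benefits shift over time, which is precisely the distributional detail a single gross-income index would wash out.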
For most income variables, the Australian model STINMOD relies upon average growth rates from external official statistics; housing values and imputed rents also rely upon this source. The model uses welfare agency reports for the updating of benefits and family transfers, while the Consumer Price Index is applied for rent and other housing costs. The average mortgage payment is aligned with statistics bureau figures as part of the reweighting procedure. In the case of the SWITCH model in Ireland, indices of weekly salary are applied, while benefits are simulated according to the tax-benefit rules. The data sources include the Quarterly Economic Commentary for employee earnings, and official forecasts for self-employment income. A variety of sources are used to update housing asset values and imputed rents, but no updating is applied to mortgage interest. For both the Irish SWITCH model and STINMOD, the updating of earnings is not disaggregated according to fine-grained population subgroups. The SMILE model uses disaggregated occupational and industry-specific indices for earnings, and GNP per capita for other income sources; projections are undertaken using linear extrapolation for a year. In the case of Belgium, the National Bank of Belgium provides the required updating factors for incomes, and the Consumer Price Index is used for rental income. Benefits are usually simulated, but a government website provides index numbers for those variables that are not simulated. Employee earnings are updated based upon separate indices for blue- and white-collar workers, while benefit values are updated separately for pensions, minimum income, child benefits, health insurance, education and disability-related payments. The Swedish FASIT model does not update self-employment incomes.
The National Institute of Economic Research provides most of the required data (including mortgage interest), while benefits and family transfers rely upon the rules and calculations in the FASIT model. The IFS model in the United Kingdom uses national statistics to uprate employee, self-employment and farm income. For rental and investment income, the IFS uses the base interest rate to impute the capital stock, uprates the capital stock in line with GDP, and then uses the current interest rate to impute investment income; the figures for GDP and the interest rate are taken from National Statistics. The Luxembourg EUROMOD model uses data from the Ministry of Social Security and the Consumer Price Index to update employee and self-employment income. Mortgage interest is updated using an indexation factor. Some benefits, such as the family allowance, are not updated in the Luxembourg model. Despite all efforts to correct the base data sets, simulation results can still deviate from actual observations for many reasons. One common problem is benefit take-up, where individuals simulated to receive a benefit do not receive it in the actual data. Take-up is particularly difficult to simulate when the benefit information is poor, there is a perception of stigma, or the benefit values are small (see Hancock, Pudney, Sutherland, & Unit, 2003; Pudney, Hancock, & Sutherland, 2006). Individuals may also be observed to receive a benefit in the data set but not be simulated to receive it. This can arise through (a) a lack of information in relation to eligibility criteria, such as historical contributions or illness-related conditionality, (b) issues associated with the time period, where benefit entitlement may depend upon income from a different period to that available in the data, or (c) possible fraud. On the revenue side, income taxes may be over-simulated due to differences in the period of analysis, as in the case where we simulate income tax based upon weekly or monthly income. Taxes may also be under-simulated due to simplifications in the simulation of income taxation, as in the case of the non-simulation of some deductions (Callan, 1991), or due to tax evasion (Fiorio & D’Amuri, 2005; Matsaganis & Flevotomou, 2010). While measurement errors affect the level of inequality measured, it is often assumed that the measurement error is not significantly correlated with marginal policy reform or economic change (Immervoll, O’Donoghue, & Sutherland, 1999). Unfortunately, this assumption may sometimes lead to biased results. One potential solution is to model the process that drives the bias in measured variables, such as benefit take-up, where the data are available. This is, however, uncommon in existing static models because of the high data requirements involved and the complexity of human behaviour.

3.3.4. Updating tax-benefit rules

Tax-benefit rules are a core part of many social science microsimulation models, as the tax system forms an important part of any modern society. As shown in Table 3.3, the development of microsimulation in social science was driven primarily by the need to estimate the impact of tax-benefit policy changes upon the distribution of income.
While the uprating of pre-tax incomes and population reweighting have been incorporated in many cases, the primary goal remains the distributional analysis of tax-benefit policy changes. Table 3.3 lists the usual practices of tax-benefit rule updating for a number of modelling teams. As can be seen, most institutions update the rules as policy changes occur, which is more frequent than annually. There is quite significant heterogeneity in the sources of information in relation to these rules and parameters. The amount of time required to update the rules also varies significantly across teams, ranging from two to four days in the case of the German IZA model to 80 days in the case of MIKMOD. The majority of the models take between 4 and 20 person-days to update the tax-benefit rules. The response from the IFS in relation to the time taken is perhaps representative of the situation elsewhere, in the sense that the updating of tax-benefit rules can be significantly complicated by major
Table 3.3. Updating tax-benefit rules

| Country           | How often do you update the tax-benefit rules? | Source of updating information                                                                                                                        | Time taken to do this task                   |
|-------------------|------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------|
| Australia         | More frequent than annual                      | Government budget and legislation information from various government departments                                                                        | Varies from days to weeks                    |
| Belgium           | As policy changes, and annually                | Mostly the fiscal memento, but also other publications by the administration and government departments/administration website                           | 25 days                                      |
| Finland           | As policy changes                              | Legal or government texts                                                                                                                                | 4-5 days                                     |
| Germany (IZAΨMOD) | As policy changes                              | Mainly tax law, social security law                                                                                                                      | 2-4 days                                     |
| Germany (MIKMOD)  | Annually                                       | Income tax code, other relevant legal code such as the social security code, and sources on official contribution rates or family transfer/child benefit amounts | 80 days                              |
| Hungary           | Annually                                       | Legal or government texts                                                                                                                                | 30-40 days                                   |
| Ireland (SWITCH)  | As policy changes                              | Main sources are the websites welfare.ie and revenue.ie                                                                                                  | 20 days                                      |
| Ireland (SMILE)   | Once a year                                    |                                                                                                                                                          | 10 days, depending upon policy structural changes |
| Luxembourg        | Annually                                       | Government websites (IGSS/Ministry of Social Security, Employment administration, STATEC, Ministry of Finance, European Commission)                      | 4-5 days                                     |
| Spain             | As policy changes                              | Legal or government texts                                                                                                                                | 21 days                                      |
| Sweden            | 5 times per year                               | Within the institution, and legal or government texts                                                                                                    | 5 days                                       |
| UK (Essex)        | Annually                                       | HMT, DWP and OBR websites and CPAG guides                                                                                                                | Varies from one hour to one month            |
| UK (IFS)          | As policy changes                              | Budget documents, government press releases, etc., usually available online                                                                              | 5 days, but much more for a major rule change |
| USA (TRIM3)       | Annually                                       | Various websites and sources to identify each of the 14 benefit, tax, and health programs modelled in TRIM3                                              | 65 days                                      |
policy reforms, thereby demanding the input of more person-days than would normally be the case.
3.3.5. Reweighting

While indexation or uprating is typically used for calibrating income variables, it is often limited to matching the data set to external constraints along a single dimension. Nonetheless, it is sometimes necessary to calibrate a number of variables when the data year and the policy simulation year differ. Sometimes reweighting is required because the original weights supplied with the data do not adequately represent key analytical groups required for the analysis (Callan et al., 2010; Ota & Stott, 2007). In these cases, even where analytical weights are provided with the data, new weights are derived to improve the accuracy of analysis for subgroups. The method used in these cases is known as ‘reweighting’ (altering the ‘weights’ of different observations in the data). This technique differs from the ‘alignment’ method used in dynamic microsimulation, where the key variables themselves are calibrated (see Li & O’Donoghue, 2014); here, only the observation weights are updated. Table 3.4 presents the scale of reweighting being undertaken in selected static models. Creedy (2003) describes the theory associated with a number of potential reweighting methods, using these to reweight the 2000/2001 wave of the New Zealand Household Economic Survey (HES) data to the 2003/2004 wave of the same data set. The functions included a chi-square distance function, a modified chi-square distance function and the

Table 3.4. Reweighting

| Country    | Model name                               | Adjust survey weights | Reweighting methods | Time taken to do this task |
|------------|------------------------------------------|-----------------------|---------------------|----------------------------|
| Australia  | STINMOD                                  | Yes                   | GREGWT              |                            |
| Belgium    | EUROMOD; Mefisto                         | No                    |                     |                            |
| Finland    | TUJA                                     | Yes                   | CLAN97              | One day (6 times a year)   |
| Germany    | IZAΨMOD                                  | No                    |                     |                            |
| Germany    | MIKMOD                                   | Yes                   | Newton-Raphson      | 15-20 person-days          |
| Hungary    | TARSZIM                                  | No                    |                     |                            |
| Ireland    | SWITCH                                   | Yes                   | CALMAR              | 10 days                    |
| Ireland    | SMILE                                    | Yes                   | Reweight2           | 2 days                     |
| Spain      |                                          | No                    |                     |                            |
| Sweden     | FASIT                                    | Yes                   | CLAN97              | 3 days                     |
| UK (Essex) | EUROMOD                                  | No                    |                     |                            |
| UK (IFS)   | TAXBEN                                   | Yes                   | Reweight2           | One day                    |
| USA        | Transfer Income Model, version 3 (TRIM3) | No                    |                     |                            |
Deville-Särndal function. This allowed the author to analyse distributional changes in tax-benefit expenditures over that period. Creedy applied the Deville-Särndal function in further experiments using a larger number of calibration equations and found that no solution was available regardless of how wide a range of variation was allowed. The discussion of results in the chapter appears to indicate a preference for the chi-square function with careful application of upper and lower limits. This work is published in extended form in chapter 6 of Creedy, Kalb, and Scutella (2006). The same methodologies are tested in a paper by Cai, Creedy, and Kalb (2006) in relation to reweighting the Australian Survey of Income and Housing (SIH); in that paper, the Deville-Särndal function is the preferred method. As in the case of Immervoll et al. (2005), the distance functions are used to minimise the aggregate distance between the original and new weights while aligning the data to control totals for the simulation year. Immervoll et al. (2005) test the method by adjusting the sample weights contained in the 1996 wave of the Finnish Income Distribution Survey (IDS) to align with control totals derived from the 1998 wave of the same data set. The reweighted 1996 data set was then compared with the ‘real’ 1998 data. This provided a useful testing ground for the reweighting method, but clearly an alternative source of ‘real’ data is required for practical implementation; ‘real’ micro-data for the policy simulation year typically become available with more than a two-year time lag. In any case, the whole rationale for reweighting rests upon the assumption that there are time lags in the availability of the ‘real’ micro-data. Based on the test results provided by the authors, there is no statistical difference between the reweighted 1996 data set and the 1998 data set on a number of key variables.
It is found that the procedure performs well in terms of family typologies, with an improved match with the 1998 values for all but one group. The reweighting process provides near-perfect matches for the categories used as calibration targets (earned income, unemployment benefits). There are a few software packages and programmes for the reweighting process. Amongst the most popular is CALMAR, a programme written in SAS and developed by Deville and Särndal (1992); a more recent update, Calmar 2, is also available (Le Guennec & Sautory, 2003). Abello, Brown, Walker, and Thurecht (2003) use it for short-term projections of pharmaceutical benefits, Callan et al. (2010) use it for reweighting the Irish model SWITCH, while Brenner, Beer, Lloyd, and Lambert (2002) use it to produce weights for the Australian STINMOD model. Another methodology for reweighting has been developed by the Australian Bureau of Statistics: the Generalised Regression (GREG) estimator, implemented in the GREGWT software (Bell, 2000). It is used in Australia (see Tanton et al.) and in New Zealand (Ota & Stott, 2007). Many other analyses utilise methods described in
Gomulka (1992) and Atkinson, Gomulka, and Sutherland (1988). Browne (2012) provides a Stata algorithm that can be used to create weights using the Gomulka method. Merz (1983, 1985, 1990) developed a Kalman filter-based reweighting algorithm, described as a fast adjustment procedure. The review suggests that many models allow for the adjustment of survey weights, including those in Australia, Ireland, Finland, Sweden and the UK (the IFS model). Each of these cases typically uses a different reweighting method, including GREGWT, CALMAR, CLAN and REWEIGHT2. The time taken for this task varies across countries: the Irish SWITCH model requires about 10 person-days, the Swedish FASIT model three days and the IFS model only one day. Some of this variance can be attributed to different reweighting practices across models. For instance, reweighting the Finnish CLAN97 model usually requires just one person-day, but this process is usually repeated six times per year. This is the model used by Immervoll et al. (2005) as discussed in Section 3.2.

3.3.6. Projections and static ageing

Many static microsimulation models are also used for short-term projection. More than half of the models in Table 3.5 are used for projections in poverty and inequality analyses. The Belgian model projections are used to capture a snapshot of future families. The Irish SWITCH model examines measures of work incentives in addition to distributional analysis and poverty assessment. Using the models for projection raises the need to adjust the micro-data to account for exogenous changes in the population. Various techniques have been used within the microsimulation literature, including reweighting/static ageing and dynamic ageing.
Ageing within a microsimulation context may be defined as the process of updating a database to represent current conditions, or of projecting a database forward one or more years to represent expected future conditions. Static ageing takes macro-aggregates and then adjusts the underlying distribution to produce projections of the population distribution over time. Dynamic ageing, by contrast, involves estimating a system of econometric equations and then simulating changes in the population. As static microsimulation models do not model the change of variables as a function of time, static ageing is the most common choice in this type of model. The method itself is closely related to reweighting. Immervoll et al. (2005) define ‘static’ ageing techniques as methods attempting to align the available micro-data with other known information (such as changes in population aggregates, age distributions or unemployment rates), without modelling the processes that drive these changes (e.g. migration, fertility or economic downturn).
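Static ageing by reweighting can be illustrated with raking (iterative proportional fitting), which corresponds to the multiplicative member of the calibration distance-function family discussed above. The routine below and the projected margins it aligns to are a hypothetical sketch, not code from any of the surveyed models.

```python
import numpy as np

def rake(w, groups, targets, iters=100, tol=1e-10):
    """Raking (iterative proportional fitting): repeatedly scale the weights
    so that each categorical margin matches its projected control total.
    groups: list of per-record category arrays; targets: list of dicts
    mapping each category to its projected total."""
    w = w.astype(float).copy()
    for _ in range(iters):
        maxdev = 0.0
        for g, t in zip(groups, targets):
            for cat, tot in t.items():
                mask = (g == cat)
                s = w[mask].sum()
                maxdev = max(maxdev, abs(s - tot))
                w[mask] *= tot / s               # scale this margin to its target
        if maxdev < tol:
            break
    return w

# Toy data: age band and employment status per record, equal base weights.
age = np.array(['young', 'young', 'old', 'old', 'old'])
emp = np.array(['emp', 'unemp', 'emp', 'emp', 'unemp'])
w0 = np.full(5, 100.0)
# Hypothetical projected margins for the simulation year.
w = rake(w0, [age, emp],
         [{'young': 180.0, 'old': 320.0}, {'emp': 350.0, 'unemp': 150.0}])
print(w.sum())   # 500.0, matching both margins
```

As with the distance-function methods, the ‘ageing’ here is entirely in the weights: the records themselves are unchanged.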
Table 3.5. Use of projections

Country (model name): types of analyses that the projections are utilised for; time taken to do this task
Australia (STINMOD): welfare programme budgeting and reform analysis; varies from days to weeks
Belgium (EUROMOD; Mefisto): snapshot of future families, poverty and inequality, budget; 90 days
Germany (IZAΨMOD)
Germany (MIKMOD): estimation of effects of reforms of personal income taxation; delivery of figures for publications on personal income tax by the Federal Ministry
Hungary (TARZIM)
Ireland (SWITCH): distributional analysis, poverty impact assessment, measures of work incentives; included as part of 20 days for updating tax-benefit rules
Ireland (SMILE): impact of macro-economic changes on income and spatial inequality; 1 month
Finland (TUJA): tax revenue; 10 days
Spain: simulation of tax reforms, stress test of the labour market, poverty and inequality assessments; one to 20 days
Sweden (FASIT): poverty, sums for certain benefits; 5–10 days
USA (Transfer Income Model, version 3 (TRIM3))
UK, IFS (TAXBEN): projections of income distribution and poverty
UK, Essex (EUROMOD): explore specific policy issues related to the new Universal Credit; longer than regular updates but no precise figure
Jinjing Li, Cathal O’Donoghue, Jason Loughrey and Ann Harding
Static ageing, however, has a number of theoretical limitations. Firstly, static ageing cannot be used where there are no individuals in the sample in a particular state. If there are only a few cases of a particular household category, a very high weight may have to be applied, resulting in unstable predictions. Changing demographic and economic trends over time may mean that increasing weight is placed on population types with very few cases in the sample. Secondly, static ageing procedures are relatively well suited to short- to medium-term forecasts where changes in the structure of the population are small. It may be more difficult to use static ageing over longer or more turbulent periods because the characteristics of the population change. Dekkers and Liégeois (2012) compare static and dynamic ageing in microsimulation under a number of headings. Static ageing is typically cheaper in terms of development and computation cost, but can be harder to communicate to stakeholders, who tend to regard the output from dynamic ageing as more ‘realistic’. Dekkers (2012) also notes that static ageing is better suited to short-term projections, but less useful where the transition path is required.
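The small-cell instability described above can be monitored with simple diagnostics: the spread of the ratio of new to original weights, and Kish's effective sample size, which falls as weight variability grows. The weights in the sketch below are invented to contrast a mild adjustment with one dominated by a tiny cell.

```python
import numpy as np

def weight_diagnostics(d, w):
    """Simple stability checks for a reweighting: the range of the
    new-to-design weight ratios, and Kish's effective sample size
    n_eff = (sum w)^2 / sum(w^2), which shrinks as weights become uneven."""
    ratio = w / d
    n_eff = w.sum() ** 2 / (w ** 2).sum()
    return ratio.min(), ratio.max(), n_eff

d = np.ones(1000)
# Mild adjustment: weights stay near their design values.
w_ok = np.random.default_rng(0).uniform(0.8, 1.2, 1000)
# Unstable adjustment: five records in a rare category carry huge weights.
w_bad = np.ones(1000)
w_bad[:5] = 150.0

print(weight_diagnostics(d, w_ok)[2])    # stays close to 1000
print(weight_diagnostics(d, w_bad)[2])   # collapses far below 1000
```

A sharp drop in the effective sample size after reweighting is a warning that projections rest on very few underlying observations.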
3.3.7. Maintenance and other issues

Designing the framework of any microsimulation model is a resource-intensive task. Static national models were found generally to take two to three person-years to develop (Immervoll & O’Donoghue, 2009); more sophisticated models, such as the TRIM model in the United States, took much longer. One potential cost-cutting approach is to develop a generic platform, which can be expensive in the initial development but less costly in the long run, since it can be adapted for a multitude of purposes. In addition, the robustness and reliability of the framework will be positively related to the number of users and uses: if a single generic microsimulation framework is widely used, communication and co-operation between researchers are facilitated and training costs reduced. Given the complexity of the policies and data in microsimulation models, it is also important to validate the model, benchmarking its performance in order to improve and maintain credibility. All major models validate their tax-benefit outputs. The German IZA model validates the tax-benefit outputs but does not validate the price and income uprating indices. In the case of Ireland, the model is validated as new micro-data become available. The Swedish, Belgian and Australian models validate the price and income uprating indices, the tax-benefit outputs, the population reweighting and the projections. The Luxembourg EUROMOD model is validated for the price and income uprating indices and the tax-benefit outputs.
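A minimal form of such validation is to compare simulated aggregates against external benchmarks and flag large deviations. The totals, benefit names and 5% threshold in the sketch below are purely hypothetical.

```python
# Hypothetical aggregates: simulated tax-benefit totals vs external benchmarks
# (e.g. administrative or national accounts figures), in a common currency.
simulated = {'income_tax': 19.4e9, 'child_benefit': 2.05e9, 'pensions': 11.8e9}
benchmark = {'income_tax': 20.0e9, 'child_benefit': 2.00e9, 'pensions': 12.5e9}

for item in simulated:
    # Percentage deviation of the model from the benchmark.
    dev = 100.0 * (simulated[item] - benchmark[item]) / benchmark[item]
    flag = 'CHECK' if abs(dev) > 5.0 else 'ok'
    print(f'{item:15s} {dev:+6.1f}%  {flag}')
```

In practice the tolerance would differ by instrument; means-tested benefits with incomplete take-up, for example, are routinely harder to match than income tax.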
3.4. Summary and future directions

Static microsimulation models have contributed to numerous policy analyses within both government bodies and academia (Gupta & Kapur, 2000; Spielauer, 2011). This chapter surveyed a number of static microsimulation models and analysed the current practices and technical choices among leading modelling teams. A number of methodological issues have been discussed. We analysed alternative methods of adjusting distributional data, such as incomes and prices, from one period to the next. The distribution of welfare depends upon disposable income, which is a function of the presence and level of market incomes interacting with the policy rules of the tax-benefit system, all of which depend upon the demographic and labour market profile of the population. Changes in the distribution of income over time can therefore be driven by changes in any of these individual components. Most modelling teams incorporate all or most of these methodologies in adjusting for changes to the income distribution over time, and our survey reports some of the choices made. Increasingly, these teams are also utilising variants of these methods for short-term projections, which is relatively novel compared with the published literature. The applications of static microsimulation were traditionally limited to tax analyses, but have gradually been extended to other fields, as discussed in earlier sections of the chapter: education, agriculture and environmental policy analyses have all benefited from static microsimulation models. With the expansion of the available data and the growing need for scenario modelling in the decision-making process, microsimulation models are likely to be increasingly adopted by policy analysts. Nevertheless, some current practices in the field are not as productive as they could be, and much work is duplicated by modelling teams around the world.
While the models are designed for different purposes and data sets, the analytical procedures behind most static microsimulation models are broadly similar, yet there has been limited effort to develop a standardised platform. Furthermore, a static microsimulation model is often tied to an internally specified format of a particular data set to which only a limited set of users has access. This hinders the adoption and use of the model, especially by those who intend to use it differently. It would benefit the community if future models had better-designed interfacing protocols and some standardised model descriptions.
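One low-cost step towards such standardisation would be machine-readable model descriptions. The record below is a hypothetical sketch of what such a description might contain; the field names are invented, not an existing community standard, though the SWITCH details (SILC 2008 base data, CALMAR reweighting) follow the survey above.

```python
import json

# A hypothetical standardised model-description record. All field names
# are illustrative; only the SWITCH facts are taken from the survey text.
model_card = {
    "name": "SWITCH",
    "country": "IE",
    "type": "static tax-benefit",
    "base_data": {"survey": "SILC", "year": 2008},
    "outputs": ["disposable_income", "poverty_rate", "work_incentives"],
    "reweighting": {"software": "CALMAR"},
}

# Serialising to JSON makes the description exchangeable between teams.
print(json.dumps(model_card, indent=2))
```

Even a description this thin would let a registry answer questions such as ‘which models use SILC?’ without reading each team's documentation.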
References

Abello, A., Brown, L., Walker, A., & Thurecht, L. (2003). An economic forecasting microsimulation model of the Australian pharmaceutical
benefits scheme. Technical Paper No. 30. The National Centre for Social and Economic Modelling, Canberra.
Ahmed, V., & O’Donoghue, C. (2009). Redistributive effect of personal income taxation in Pakistan. Pakistan Economic and Social Review, 47(1), 1–17.
Atkinson, A. B., & Bourguignon, F. (1990). Tax-benefit models for developing countries: Lessons from developed countries. École Normale Supérieure: DELTA Working Paper No. 90-15.
Atkinson, A. B., Bourguignon, F., O’Donoghue, C., Sutherland, H., & Utili, F. (2000). Microsimulation and the formulation of policy: A case study of targeting in the European Union. In A. B. Atkinson, H. Glennerster, & N. Stern (Eds.), Putting economics to work, volume in honour of Michio Morishima. London: LSE/STICERD Occ. Paper 22.
Atkinson, A. B., Bourguignon, F., O’Donoghue, C., Sutherland, H., & Utili, F. (2002). Microsimulation of social policy in the European Union: Case study of a European minimum pension. Economica, 69, 229–243.
Atkinson, A. B., Gomulka, J., & Sutherland, H. (1988). Grossing-up FES data for tax-benefit models. Discussion Paper. London School of Economics, London, UK.
Atkinson, A. B., Rainwater, L., & Smeeding, T. M. (1995). Income distribution in OECD countries: Evidence from the Luxembourg income study. Paris: Organisation for Economic Co-operation and Development.
Bach, S., Haan, P., & Ochmann, R. (2013). Taxation of married couples in Germany and the UK: One-earner couples make the difference. International Journal of Microsimulation, 6(3), 3–24.
Bardaji, J., Sédillot, B., & Walraet, E. (2003). Un outil de prospective des retraites: le modèle de microsimulation Destinie. Économie et prévision, 160, 193–214.
Beer, G. (1998). The state of play of effective marginal tax rates in Australia in 1997. Australian Economic Review, 31(3), 263–270.
Beer, G. (2003). Work incentives under a new tax system: The distribution of effective marginal tax rates in 2002. Economic Record, 79(Special Issue), S14–S25.
Bell, P.
(2000). GREGWT and TABLE macros: Users guide (Unpublished). Australian Bureau of Statistics.
Benedek, D., & Lelkes, O. (2008). Assessment of income distribution and a hypothetical flat tax reform in Hungary. Journal of Applied Economic Sciences, 3(5), 173–186.
Betti, G., Donatiello, G., & Verma, V. (2010). The Siena microsimulation model (SM2) for net to gross conversion of EU-SILC income variables. International Journal of Microsimulation, 3(2), 35–53.
Brenner, K., Beer, G., Lloyd, R., & Lambert, S. (2002). Creating a basefile for STINMOD. Technical Paper No. 27. National Centre for Social and Economic Modelling.
Brewer, M., Dickerson, A., Gambin, L., Green, A., Joyce, R., & Wilson, R. (2012). Poverty and inequality in 2020: Impact of changes in the structure of employment. York: Joseph Rowntree Foundation.
Broad, A. (1982). A Simulation System for Evaluating Taxation (ASSET). Occasional Paper. New Zealand Department of Statistics.
Brown, L. (2011). Editorial: Special issue on health. International Journal of Microsimulation, 4(3), 1–2.
Brown, L., Abello, A., Phillips, B., & Harding, A. (2004). Moving towards an improved microsimulation model of the Australian pharmaceutical benefits scheme. Australian Economic Review, 37(1), 41–61.
Browne, J. (2012). REWEIGHT2: Stata module to reweight survey data to user-defined control totals. Statistical Software Components.
Cai, L., Creedy, J., & Kalb, G. (2006). Accounting for population ageing in tax microsimulation modelling by survey reweighting. Australian Economic Papers, 45(1), 18–37.
Callan, T. (1991). Income tax and welfare reforms: Microsimulation modelling and analysis. Research Series GRS154. Dublin, Ireland: Economic and Social Research Institute (ESRI).
Callan, T., Coleman, K., & Walsh, J. (2006). Assessing the impact of tax/transfer policy changes on poverty: Methodological issues and some European evidence. Research in Labour Economics, 25, 125–139.
Callan, T., Keane, C., Walsh, J. R., & Lane, M. (2010). From data to policy analysis: Tax-benefit modelling using SILC 2008. Journal of the Statistical and Social Inquiry Society of Ireland, 40, 1–10.
Callan, T., Nolan, B., & O’Donoghue, C. (1996). What has happened to replacement rates? Economic and Social Review, 27(5), 439–456.
Casler, S. D., & Rafiqui, A. (1993). Evaluating fuel tax equity: Direct and indirect distributional effects. National Tax Journal, 46(2), 197–205.
Ceriani, L., Fiorio, C. V., & Gigliarano, C. (2013). The importance of choosing the data set for tax-benefit analysis. International Journal of Microsimulation, 1(6), 86–121.
Citro, C. F., & Hanushek, E. A.
(Eds.). (1991a). The uses of microsimulation modelling (Vol. 1): Review and recommendations. Washington, DC: National Academy Press.
Citro, C. F., & Hanushek, E. A. (Eds.). (1991b). The uses of microsimulation modelling (Vol. 2): Technical papers. Washington, DC: National Academy Press.
Creedy, J. (2003). Survey reweighting for tax microsimulation modelling. Treasury Working Paper Series No. 03/17.
Creedy, J., Kalb, G., & Kew, H. (2003). Flattening the effective marginal tax rate structure in Australia: Policy simulations using the Melbourne Institute tax and transfer simulator. Australian Economic Review, 36(2), 156–172.
Creedy, J., Kalb, G. R., & Scutella, R. (2006). Income distribution in discrete hours behavioural microsimulation models: An illustration. Journal of Economic Inequality, 4(1), 57–76.
Creedy, J., & Tuckwell, I. (2004). Reweighting household surveys for tax microsimulation modelling: An application to the New Zealand household economic survey. Australian Journal of Labour Economics, 7(1), 71.
Curry, C. (1996). PENSIM: A dynamic simulation model of pensioners’ incomes. Government Economic Service Working Paper No. 129. Analytical Services Division, Department of Social Security, London.
Deaton, A. (1997). The analysis of household surveys: A microeconometric approach to development policy. Washington, DC: World Bank.
Decoster, A., Loughrey, J., O’Donoghue, C., & Verwerft, D. (2010). How regressive are indirect taxes? A microsimulation analysis for five European countries. Journal of Policy Analysis and Management, 29(2), 326–350.
Decoster, A., Loughrey, J., O’Donoghue, C., & Verwerft, D. (2011). Microsimulation of indirect taxes. International Journal of Microsimulation, 4(2), 41–56.
Dekkers, G. (2012). The simulation properties of microsimulation models with static and dynamic ageing: A guide to choosing one type of model over the other. Mimeo.
Dekkers, G., & Liégeois, P. (2012). The (dis)advantages of dynamic and static microsimulation. Mimeo, 14 September 2012.
Deville, J.-C., & Särndal, C.-E. (1992). Calibration estimation in survey sampling. Journal of the American Statistical Association, 87(418), 375–382.
Dickert, S., Houser, S., & Scholz, J. K. (1994). Taxes and the poor: A microsimulation study of implicit and explicit taxes. National Tax Journal, 47(3), 621–638.
Dolls, M., Fuest, C., & Peichl, A. (2011). Automatic stabilizers, economic crisis and income distribution in Europe. In H. Immervoll, A. Peichl, & K. Tatsiramos (Eds.), Who loses in the downturn? Economic crisis, employment and income distribution (Vol. 32, pp. 227–255).
Research in Labor Economics. Bingley, UK: Emerald Group Publishing Limited.
Dolls, M., Fuest, C., & Peichl, A. (2012). Automatic stabilizers and economic crisis: US vs. Europe. Journal of Public Economics, 96(3), 279–294.
Ehling, M., & Rendtel, U. (Eds.). (2004). Harmonisation of panel surveys and data quality. Wiesbaden: Statistisches Bundesamt.
Ericson, P., & Flood, L. (2012). A microsimulation approach to an optimal Swedish income tax. International Journal of Microsimulation, 5(2), 2–21.
European Commission. (2012). Employment and social developments in Europe 2012. Luxembourg: Publications Office of the European Union.
European Commission. (2013a, March). EU employment and social situation quarterly review. Brussels: The European Commission.
European Commission. (2013b). The role of tax policy in times of fiscal consolidation. Luxembourg: Publications Office of the European Union.
Evans, M., O’Donoghue, C., & Vizard, P. (2000). Means testing and poverty in 5 European countries. In V. Atella (Ed.), Le Politiche Sociali in Italia ed in Europa: Coerenza e Convergenza nelle Azioni 1997–1999. Bologna: Il Mulino.
Figari, F., Iacovou, M., Skew, A. J., & Sutherland, H. (2012). Approximations to the truth: Comparing survey and microsimulation approaches to measuring income for social indicators. Social Indicators Research, 105(3), 387–407.
Figari, F., Salvatori, A., & Sutherland, H. (2011). Economic downturn and stress testing European welfare systems. In H. Immervoll, A. Peichl, & K. Tatsiramos (Eds.), Who loses in the downturn? Economic crisis, employment and income distribution (Vol. 32, pp. 257–286). Research in Labor Economics. Bingley, UK: Emerald Group Publishing Limited.
Fiorio, C. V., & D’Amuri, F. (2005). Workers’ tax evasion in Italy. Giornale degli Economisti e Annali di Economia, 64(2/3), 247–270.
Flood, L. (2007). Can we afford the future? An evaluation of the new Swedish pension system. Modelling our Future: Population Ageing, Social Security and Taxation, 33. Amsterdam: Elsevier.
Flory, J., & Stöwhase, S. (2012). MIKMOD-ESt: A static microsimulation model of personal income taxation in Germany. International Journal of Microsimulation, 5(2), 66–73.
Foxman, P. (2000). The use of microsimulation models for the Danish tax reform act of 1993. In A. Gupta & V. Kapur (Eds.), Microsimulation in government policy and forecasting (pp. 95–114). Amsterdam, The Netherlands: North-Holland.
Fuest, C., Niehues, J., & Peichl, A. (2010). The redistributive effects of tax benefit systems in the enlarged EU. Public Finance Review, 38(4), 473–500.
Giannarelli, L. (1992). An analyst’s guide to TRIM2: The transfer income model, version 2. Washington, DC: The Urban Institute.
Giannarelli, L., Morton, J., & Wheaton, L. (2007). Estimating the antipoverty effects of changes in taxes and benefits with the TRIM3 microsimulation model. Washington, DC: The Urban Institute.
Gomulka, J. (1992). Grossing-up revisited. In R. Hancock & H. Sutherland (Eds.), Microsimulation models for public policy analysis:
New frontiers, Suntory-Toyota International Centre for Economics and Related Disciplines (pp. 121–132). London: London School of Economics and Political Science.
Gupta, A., & Kapur, V. (2000). Microsimulation in government policy and forecasting. Elsevier Science.
Gupta, A., Kapur, V., & McGirr, T. (2000). Microsimulation and sales tax reform in Canada. In A. Gupta & V. Kapur (Eds.), Microsimulation in government policy and forecasting (pp. 39–64). Amsterdam, The Netherlands: North-Holland.
Haan, P., & Steiner, V. (2005). Distributional effects of the German tax reform 2000: A behavioural microsimulation analysis. Schmollers Jahrbuch: Journal of Applied Social Science Studies/Zeitschrift für Wirtschafts- und Sozialwissenschaften, 125(1), 39–49.
Hancock, R., Pudney, S., & Sutherland, H. (2003, October). Using econometric models of benefit take-up by British pensioners in microsimulation models. In International microsimulation conference on population, ageing and health: Modelling our future, December (pp. 7–12).
Hansen, F. (2000). Marginal effective tax rates and expected gains from employment in Denmark. In A. Gupta & V. Kapur (Eds.), Microsimulation in government policy and forecasting (pp. 95–114). Amsterdam, The Netherlands: North-Holland.
Harding, A., Abello, A., Brown, L., & Phillips, B. (2004). Distributional impact of government outlays on the Australian pharmaceutical benefits scheme in 2001–02. Economic Record, 80(s1), S83–S96.
Harding, A., & Polette, J. (1995). The price of means-tested transfers: Effective marginal tax rates in Australia in 1994. Australian Economic Review, 28(3), 100–106.
Hoschka, P. (1986). Requisite research on methods and tools for microanalytic simulation models. In G. Orcutt, J. Merz, & H. Quinke (Eds.), Microanalytic simulation models to support social and financial policy. Amsterdam: North-Holland.
Immervoll, H. (2002). The distribution of average and marginal effective tax rates in European Union member states.
EUROMOD Working Paper Series No. EM2/02.
Immervoll, H., Kleven, H. J., Kreiner, C. T., & Saez, E. (2007). Welfare reform in European countries: A microsimulation analysis. The Economic Journal, 117(516), 1–44.
Immervoll, H., Levy, H., Nogueira, J. R., O’Donoghue, C., & de Siqueira, R. B. (2006). Simulating Brazil’s tax-benefit system using BRAHMS, the Brazilian household micro-simulation model. Economia Aplicada, 10(2), 203–223.
Immervoll, H., Levy, H., Nogueira, J. R., O’Donoghue, C., & de Siqueira, R. B. (2008). The impact of Brazil’s tax-benefit system on
inequality and poverty. In F. Nowak-Lehmann & S. Klasen (Eds.), Poverty, inequality, and policy in Latin America. MIT Press.
Immervoll, H., Lindström, K., Mustonen, E., Riihelä, M., & Viitamäki, H. (2005). Static data ‘ageing’ techniques: Accounting for population changes in tax-benefit microsimulation. EUROMOD Working Paper No. EM7/05.
Immervoll, H., & O’Donoghue, C. (2001). Imputation of gross amounts from net incomes in household surveys: An application using EUROMOD. EUROMOD Working Paper Series No. EM1/01.
Immervoll, H., & O’Donoghue, C. (2004). What difference does a job make? The income consequences of joblessness in Europe. In D. Gallie (Ed.), Resisting marginalisation: Unemployment experience and social policy in Western Europe. Oxford: Oxford University Press.
Immervoll, H., & O’Donoghue, C. (2009). Towards a multi-purpose framework for tax-benefit microsimulation: Lessons from EUROMOD. International Journal of Microsimulation, 2(2), 43–54.
Immervoll, H., O’Donoghue, C., & Sutherland, H. (1999). An introduction to EUROMOD. EUROMOD Working Paper No. 0/99.
Jara, H. X., & Tumino, A. (2013). Tax-benefit systems, income distribution and work incentives in the European Union. International Journal of Microsimulation, 1(6), 27–62.
Légaré, J., & Décarie, Y. (2011). Using Statistics Canada’s LifePaths microsimulation model to project the health status of Canadian elderly. International Journal of Microsimulation, 4(3), 48–56.
Le Guennec, J., & Sautory, O. (2003). La macro Calmar2, manuel d’utilisation, document interne INSEE.
Lelkes, O., & Sutherland, H. (Eds.). (2009). Tax and benefit policies in the enlarged Europe: Assessing the impact with microsimulation models (Vol. 35). Surrey, UK: Ashgate Publishing Ltd.
Levy, H. (2003). Child-targeted tax-benefit reform in Spain in a European context: A microsimulation analysis using EUROMOD. EUROMOD Working Paper Series No. EM2/03.
Levy, H., Immervoll, H., Nogueira, J. R., O’Donoghue, C., & de Siqueira, R. B.
(2010). Simulating the impact of inflation on the progressivity of personal income tax in Brazil. Revista Brasileira de Economia, 64(4), 405–422.
Li, J., & O’Donoghue, C. (2013). A survey of dynamic microsimulation models: Uses, model structure and methodology. International Journal of Microsimulation, 6(2).
Li, J., & O’Donoghue, C. (2014). Evaluating binary alignment methods in dynamic microsimulation models. Journal of Artificial Societies and Social Simulation, 17(1), 15.
Mantovani, D., Papadopoulos, F., Sutherland, H., & Tsakloglou, P. (2007). Pension incomes in the European Union: Policy reform
strategies in comparative perspective. Micro-Simulation in Action: Policy Analysis in Europe using EUROMOD, 27, 27–72.
Mantovani, D., & Sutherland, H. (2003). Social indicators and other income statistics using the EUROMOD baseline: A comparison with Eurostat and national statistics. EUROMOD Working Paper Series No. EM1/03.
Matsaganis, M., & Flevotomou, M. (2010). Distributional implications of tax evasion in Greece. GreeSE Paper No. 31. London, UK.
Matsaganis, M., O’Donoghue, C., Levy, H., Coromaldi, M., Mercader-Prats, M., Rodrigues, C. F., … Tsakloglou, P. (2006a). Family transfers and child poverty in Greece, Italy, Spain and Portugal. Research in Labour Economics, 25.
Matsaganis, M., O’Donoghue, C., Levy, H., Coromaldi, M., Mercader-Prats, M., Rodrigues, C. F., … Tsakloglou, P. (2006b). Reforming family transfers in Southern Europe: Is there a role for universal child benefits? Social Policy and Society, 5(2), 189–197.
Mercader-Prats, M., & Levy, H. (2004). The role of tax and transfers in reducing personal income inequality in Europe’s regions: Evidence from EUROMOD. EUROMOD Working Paper Series No. EM9/04.
Merz, J. (1983). The adjustment of microdata using the Kalman filtering procedure and optimal control theory. Sfb 3-Arbeitspapier No. 122, Sonderforschungsbereich 3, Mikroanalytische Grundlagen der Gesellschaftspolitik, Frankfurt/M., Mannheim.
Merz, J. (1985). Ein modifiziertes Newton-Verfahren zur Lösung des Hochrechnungsproblems nach dem Prinzip des minimalen Informationsverlustes. Computing, 35, 51–61.
Merz, J. (1991). Microsimulation: A survey of principles, developments and applications. International Journal of Forecasting, 7(1), 77–104.
Morrison, R. J. (1990). Microsimulation as a policy input: Experience at Health and Welfare Canada. In G.-H. Lewis & R.-C. Michel (Eds.), Microsimulation techniques for tax and transfer analysis. Washington, DC: Urban Institute Press.
Na, S. L., & Hyun, J. K. (1993).
Microsimulation model in tax and benefit policies: Korean tax-benefit model. Seoul: Korea Tax Institute (in Korean).
O’Donoghue, C. (1999). Estimating the rate of return to education using microsimulation. Economic and Social Review, 30(3), 249–266.
O’Donoghue, C. (2001). Dynamic microsimulation: A survey. Brazilian Electronic Journal of Economics, 4(2), 77.
O’Donoghue, C. (2004). Redistributive forces in the Irish tax-benefit system. Journal of the Statistical and Social Inquiry Society of Ireland, 32, 33–69.
O’Donoghue, C. (2011). Do tax-benefit systems cause high replacement rates? A decompositional analysis using EUROMOD. Labour, 25, 126–151. doi:10.1111/j.1467-9914.2010.00501.
O’Donoghue, C., & Loughrey, J. (2014). Nowcasting in microsimulation models: A methodological survey. Working paper. Teagasc, Ireland.
O’Donoghue, C., & Sutherland, H. (1999). For richer, for poorer? The treatment of marriage and the family in European income tax systems. Cambridge Journal of Economics, 23(5), 565–598.
O’Donoghue, C., Sutherland, H., & Utili, F. (2000). Integrating output in EUROMOD: An assessment of the sensitivity of multi-country microsimulation results. In L. Mitton, H. Sutherland, & M. Weeks (Eds.), Microsimulation in the new millennium. Cambridge: Cambridge University Press.
Ota, R., & Stott, H. P. (2007). A New Zealand static microsimulation model: Challenges with data. Paper presented to the International Microsimulation Association, Vienna, July.
Palme, M. (1996). Income distribution effects of the Swedish 1991 tax reform: An analysis of a microsimulation using generalized Kakwani decomposition. Journal of Policy Modelling, 18(4), 419–443.
Paulus, A., Čok, M., Figari, F., Hegedüs, P., Kump, N., Lelkes, O., & Võrk, A. (2009). The effects of taxes and benefits on income distribution in the enlarged EU. EUROMOD Working Paper No. EM8/09.
Paulus, A., & Peichl, A. (2008). Effects of flat tax reforms in Western Europe on income distribution and work incentives (No. 3721). IZA Discussion Papers.
Pedersen, T. B. (2000). Distributional outcome of the Danish welfare system. In A. Gupta & V. Kapur (Eds.), Microsimulation in government policy and forecasting (pp. 95–114). Amsterdam, The Netherlands: North-Holland.
Pellegrino, S., Piacenza, M., & Turati, G. (2011). Developing a static microsimulation model for the analysis of housing taxation in Italy. International Journal of Microsimulation, 4(2), 73–85.
Pudney, S., Hancock, R., & Sutherland, H. (2006). Simulating the reform of means-tested benefits with endogenous take-up and claim costs. Oxford Bulletin of Economics and Statistics, 68(2), 135–166.
Redmond, G., Sutherland, H., & Wilson, M.
(1998). The arithmetic of tax and social security reform: A user’s guide to microsimulation methods and analysis (Vol. 64). Cambridge, UK: Cambridge University Press.
Schofield, D., Shrestha, R., Passey, M., Fletcher, S., Kelly, S. J., & Percival, R. (2011). Projecting the impacts of illness on labour force participation: An application of Health&WealthMOD. International Journal of Microsimulation, 4(3), 37–47.
Scholz, J. K. (1996). In-work benefits in the United States: The earned income tax credit. The Economic Journal, 106, 156–169.
Spielauer, M. (2011). What is social science microsimulation? Social Science Computer Review, 29(1), 9–20.
Sung, M. J., & Song, H. (2011). Distributional impacts of personal income tax and excise duties in Korea. Retrieved from http://www.scb.se/
Grupp/Produkter_Tjanster/Kurser/_Dokument/IMA/Sung_SongDistributional_Impacts_of_Personal_Income_Tax_and_Excise_Duties_in_Korea.PDF. Accessed on 19 February 2014.
Sutherland, H. (1995). Static microsimulation models in Europe: A survey. Cambridge Working Papers in Economics 9523, University of Cambridge.
Sutherland, H. (2001). EUROMOD: An integrated European benefit-tax model. EUROMOD Working Paper Series No. EM9/01.
Sutherland, H., & Figari, F. (2013). EUROMOD: The European Union tax-benefit microsimulation model. International Journal of Microsimulation, 6(1), 4–26.
Thurecht, L., Brown, L., & Yap, M. (2011). Economic modelling of the prevention of type 2 diabetes in Australia. International Journal of Microsimulation, 4(3), 71–80.
Urzúa, C. M. (2012). Fiscal inclusive development: Microsimulation models for Latin America. Campus Ciudad de México: Tecnológico de Monterrey.
Verbist, G. (2004). Redistributive effect and progressivity of taxes: An international comparison across the EU using EUROMOD. EUROMOD Working Paper Series No. EM5/04. University of Essex.
Verbist, G. (2006). The distribution effects of taxes on pensions and unemployment benefits in the EU-15. Research in Labour Economics, 25, 73–99.
Wagenhals, G. (2011). Dual income tax reform in Germany: A microsimulation approach. International Journal of Microsimulation, 4(2), 3–13.
Walker, A., Fischer, S., & Percival, R. (1998). A microsimulation model of Australia’s pharmaceutical benefits scheme. NATSEM Technical Paper 15, National Centre for Social and Economic Modelling, University of Canberra.
Wilkinson, K. (2009). Adapting EUROMOD for use in a developing country: The case of South Africa and SAMOD. EUROMOD Working Paper No. EM5/09.
Will, B., Berthelot, J., Nobrega, K., Flanagan, W., & Evans, W. (2001). Canada’s population health model (POHEM): A tool for performing economic evaluations of cancer control interventions. European Journal of Cancer, 37(14), 1797–1804.
Xiong, L., & Ma, X. (2007).
Forecasting China’s medical insurance policy for urban employees using a microsimulation model. Journal of Artificial Societies & Social Simulation, 10(1), 8. Xiong, L., Weidong, T., & Hong, L. (2011). Constructing a basefile for simulating Kunming’s medical insurance scheme of urban employees. International Journal of Microsimulation, 4(3), 3 16. Zhang, S., & Wan, X. (2008). Distribution effects of personal income tax system: A micro-simulation approach. Finance & Economics, 2, 010.
CHAPTER 4

Multi-Country Microsimulation

Holly Sutherland
4.1. Introduction

‘Multi-country’ microsimulation involves simulations and data for two or more countries. The purpose of the analysis may be to compare effects across the countries or to consider them together at the supra-national level, as though for a single entity. Multi-country microsimulation can be carried out using national models side-by-side, or it can make use of a model specifically designed for the purpose. In principle it can involve any of the types of microsimulation described in the other chapters in this volume. In practice it usually involves simulating the effects of policies, which are naturally country-specific, and most frequently within a static framework. In this chapter many of the empirical illustrations are drawn from analysis using the EU-wide tax-benefit model EUROMOD (Sutherland & Figari, 2013). Covering 27 countries and made generally accessible, it is now one of the most widely used microsimulation models and the best-established multi-country model. Since it is generally available to use, readers can potentially reproduce, update and extend the examples of analysis included in this chapter. The next section considers in more detail the contexts in which multi-country microsimulation is used. This is followed by a discussion of the methodological choices to be made when carrying out such an analysis, starting with the fundamental choice of whether or not to construct a special-purpose multi-country model, and going on to a separate consideration of choices related to software, data and simulations, each taking EUROMOD as the prime example of a multi-country model. The next section describes some uses and applications of multi-country microsimulation, and the final section concludes with a forward-looking perspective.
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293003
© 2014 BY EMERALD GROUP PUBLISHING LIMITED ALL RIGHTS RESERVED
4.2. Context

Cross-country microsimulation is relevant when the subject matter relates to features that naturally vary across national boundaries and when the questions to be addressed are not specific to one national situation. This includes anything that is in the economic domain, or affected by policy choices or political or historical developments. Arguably, all of the types of microsimulation considered in this volume could be applied informatively in a cross-country context. However, multi-country microsimulation is particularly relevant for measuring the effects of policies on human populations. Policies are usually defined and implemented at national (or sub-national) level, and population characteristics and individual behaviours are shaped by economic, social and political circumstances and developments that are to some extent specific to the country concerned. Multi-country microsimulation is at its most relevant when applied to policy analysis (Figari, Paulus, & Sutherland, forthcoming). Comparisons across countries, either of the status quo or of the effects of changes, naturally add value to what can be said about a single country because the broader perspective helps to provide a sense of scale and proportion. From an academic perspective they provide the basis for assessing the robustness of results and for generalising conclusions. In addition, considering several countries within the same analysis provides a kind of ‘laboratory’ in which to analyse the effects of similar policies in different contexts, or of different policies with common objectives. From a policy perspective, cross-country comparisons of the effects of policies in different national contexts provide opportunities for ‘policy learning’ across policy-making institutions at national level and by international organisations with a policy monitoring role.
Furthermore, multi-country analysis using microsimulation can address questions that apply beyond particular national boundaries in two ways. First, it might capture implicit or actual flows between countries, for example of people through migration or of economic resources through financial transactions. Secondly, it permits analysis at a supra-national level, for example for the European Union or the Euro zone, for any meaningful grouping of countries or world region or, in principle at least, for the whole world. Microsimulation models are traditionally categorised into those that are ‘static’ (or cross-sectional), ‘dynamic’ (or longitudinal) or ‘behavioural’; see, for example, Harding (1996). However, much modern microsimulation analysis combines elements of each type, according to the question being addressed, and it is also worth making a different kind of distinction: that between model frameworks (or platforms) that aim to provide a structure into which data, rules and other specific content can be inserted, and the (usually nationally specific) content itself. Modelling platforms that facilitate the construction of dynamic microsimulation
models include MODGEN (http://www.statcan.gc.ca/microsimulation/modgen/modgen-eng.htm) and LIAM2 (http://liam2.plan.be/pages/about.html). EUROMOD provides a framework for constructing a static tax-benefit model. In principle all of these can be used as the basis for constructing multi-country models, but in practice this has only been done extensively with EUROMOD, for which the main motivation for building the framework was to support multi-country microsimulation. Indeed, there are few examples of cross-country comparative analysis based on dynamic microsimulation modelling, and few dynamic models aim to cover more than one country. Of the 61 dynamic models reviewed by Li and O’Donoghue (2013) only two aim to do this, and only MIDAS (Dekkers et al., 2010) includes specific content for a number of countries (Belgium, Germany and Italy) in order to compare aspects of pensions policy. We return to why there is so little in this area to date in the final part of this chapter; the focus of the remainder is on models that provide both multi-country content and the framework to manage it effectively, and on modelling of the effect of tax and benefit policy on current household incomes and welfare. Where extensions into multi-country behavioural modelling (labour supply) and dynamic microsimulation techniques have been adopted, these are also covered. There are several distinct ways in which multi-country models can be deployed to make comparisons across countries, each of which imposes its own methodological requirements. First, they can be used to understand the detailed operation of existing policy systems, how they impact on households and how this differs across countries. Secondly, they can be used to compare the effect of policy systems across changing or different economic, social and demographic conditions by decomposing the effects of policies as a whole (or of particular policies) on the income distribution across time. In addition, multi-country microsimulation can allow policies from one country to be ‘imported’ into another, so that the effect of a policy system on national populations can be decomposed and compared across countries directly. Such ‘policy swapping’ is particularly demanding in terms of the design and characteristics of the multi-country model and is considered further in the next section. Thirdly, a multi-country approach offers the opportunity to design and compare the effects of potential policy reforms across countries. These might be reforms that differ across countries but which have a common objective (such as reducing the risk of poverty, reducing budget deficits, improving work incentives or redistributing the tax burden) or common reforms (such as a European Union Minimum Income (MI), a European Monetary Union unemployment insurance or a flat tax). This type of analysis usually requires the reform scenarios to be comparable or equivalent
in some way across countries, for example by being budget neutral, or with the policy parameters set in a way that is benchmarked against national circumstances (e.g. setting the MI level as some proportion of national median disposable income). In addition, instead of reforming policies in a common or comparable way, common, equivalent changes can be made to the underlying economic conditions in each country to explore the effectiveness of existing policies in different economic circumstances. Such changes might include an increase in unemployment or a change in the inequality of earnings (Immervoll, Levy, Lietz, Mantovani, & Sutherland, 2006). In each of these types of analysis a multi-country approach allows not only comparisons of national-level effects but also an assessment of the effect at the aggregated supra-national level. For example, revenue-neutrality might be defined at the level of the countries aggregated together, and the implied budgetary flows between the countries could be one of the estimated outputs. Furthermore, in each case the analysis might be deepened through linkage to labour supply or other micro-level behavioural models (Bargain, Orsini, & Peichl, 2014) or to macro-economic models of one kind or another. In the latter case this linkage might be top down, with the macro model supplying information to the inputs of the multi-country microsimulation, or bottom up, with the macro model receiving information about income levels (and, if labour supply modelling is used, labour market conditions). In some cases, the modelling of changing circumstances might draw on dynamic microsimulation techniques, for example to model transitions between labour market states. Evidently, the methodological issues and choices related to all these possibilities are many and various. The next section considers these in relation to multi-country static microsimulation, which is the essential core of all the modelling considered here.
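The benchmarking and supra-national budget constraint just described can be sketched in a few lines of Python. Everything here is invented for illustration (the incomes, the 60% threshold and the flat financing levy are not EUROMOD's or any actual proposal's):

```python
# Toy illustration: a common minimum-income (MI) reform benchmarked to
# national circumstances, with the budget constraint applied at the
# aggregated supra-national level. All data and names are hypothetical.
import statistics

def mi_topup(disposable_income, mi_level):
    """Top up household disposable income to the MI level."""
    return max(0.0, mi_level - disposable_income)

# Hypothetical household disposable incomes (monthly) for two countries.
countries = {
    "A": [800.0, 1500.0, 2200.0, 400.0, 3000.0],
    "B": [300.0, 700.0, 1100.0, 900.0, 1600.0],
}

total_cost = 0.0
for name, incomes in countries.items():
    # Benchmark the common reform nationally: 60% of national median.
    # (Real survey data would use weighted medians and survey weights.)
    mi_level = 0.6 * statistics.median(incomes)
    cost = sum(mi_topup(y, mi_level) for y in incomes)
    total_cost += cost
    print(f"Country {name}: MI level {mi_level:.0f}, cost {cost:.0f}")

# Supra-national revenue-neutrality: finance the aggregate cost with a
# common flat levy on all incomes across the pooled population.
all_incomes = [y for inc in countries.values() for y in inc]
levy_rate = total_cost / sum(all_incomes)
print(f"Aggregate cost {total_cost:.0f}; budget-neutral levy rate {levy_rate:.2%}")
```

The implied budgetary flows between countries would then fall out of comparing each country's MI cost with its levy revenue.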
The methodological issues concerning single-country labour supply, macro-linked and dynamic microsimulation models are discussed in other chapters in this volume. We return to the future of multi-country analysis using other modelling approaches in the concluding section of this chapter.

4.3. Methodological characteristics and choices

4.3.1. A fundamental choice

The main methodological choice to be made when carrying out multi-country analysis using microsimulation is whether to assemble together models built for the purpose of national analysis, or alternatively to construct or make use of an existing model that sets out to cover many countries in a consistent way. Most multi-country microsimulation analysis can in principle be carried out using a set of pre-existing national
microsimulation models, side by side. But for two reasons this is rarely done. First, as Callan and Sutherland (1997) found, making national models produce comparable results, even for just two relatively similar countries (Ireland and the United Kingdom) and for one relatively straightforward comparison, was a formidable task. There was a choice between, on the one hand, a ‘lowest common denominator’ approach to the selection of assumptions, options and definitions to be used and, on the other hand, rebuilding one or both models in order to align assumptions, definitions, etc. Even the first of these strategies is problematic because all the assumptions that are being made have first to be identified; they are not always transparently obvious. What may seem like a natural assumption in one country may be treated quite differently in another. Examples include the definition(s) of a dependent child, whether to cover the simulation of near-cash benefits such as free school meals, and whether to adjust for under-reporting of incomes in the input data. The second reason is that, in the case of analysis involving many countries rather than just two, it is unlikely that national models would be made available in the way that would be necessary. This motivated the construction of EUROMOD as a multi-country model, which now covers all 27 Member States. As well as EUROMOD there have been other multi-country initiatives to construct and use microsimulation models. These include a Latin American project that built separate models using a range of software and approaches for Brazil, Chile, Guatemala, Mexico and Uruguay (Urzúa, 2012). A WIDER project has constructed models that are available in simplified form on the web for ten African countries (http://african-models.wider.unu.edu/). To our knowledge neither set of models has been used for cross-country comparisons of the effects of reforms. Thus, the methodological challenges posed by comparability (e.g. of data and concepts) and transferability (e.g. of policies) may not have been relevant in constructing these models. On the other hand, some national models operate as a federation of regional models, capturing regional differences in policy as well as national policy competencies. In the United States, the most comprehensive in terms of policy coverage is the long-standing microsimulation model TRIM3 (http://trim3.urban.org), which simulates welfare programs as well as taxes and regional variation in programs, making use of a common national input dataset (the Current Population Survey Annual Social and Economic Supplement). Examples of European national modelling exercises that capture regional differences in policies include Cantó, Adiego, Ayala, Levy, and Paniagua (2014) for Spain. However, regional models do not face all
the challenges of comparability that multi-country models must account for, because data requirements and concepts tend to be common across the regionally specific policy systems (unlike across national systems). Therefore, the remainder of this section mainly focuses on the methodological choices made in constructing EUROMOD for the purposes of multi-country microsimulation analysis. As noted earlier, the technical requirements of a model depend on the nature of the analysis to be carried out, and here we focus on the requirements of two of the most demanding: swapping policies from one country to another, and simulating common reforms in several countries.
4.3.2. Software

Generally, EUROMOD is much more flexible than national microsimulation models, in order to ensure consistency of results and transferability of tax-benefit system components across countries. The aim has been to maximise the extent of choice available to the user within a disciplined structure. Full national specificity is maintained, but cross-country consistency and comparability are facilitated using a common, special-purpose ‘tax-benefit modelling’ language, a structured naming convention for variables, and a user interface that, among many other functions, manages the specification of new policy systems (and sub-components) based on a library of existing systems (and sub-components). The approach has developed over a number of years and successive versions of EUROMOD, with the expansion from 15 to 27 countries marking the transition between the first- and second-generation models. For descriptions of successive developments of the same fundamental approach, see Immervoll, O’Donoghue, and Sutherland (1999), Immervoll and O’Donoghue (2009), Sutherland (2001), Lietz and Mantovani (2007) and Sutherland and Figari (2013). A typical single-country tax-benefit model consists of coded policy rules programmed in a generic language (e.g. FORTRAN or C++) or software package (e.g. Stata or R), interacting directly with the input data. EUROMOD, in contrast, makes use of an additional layer which consists of its specially developed tax-benefit modelling language. Using this, EUROMOD stores and displays the tax-benefit rules in a very flexible modular system that is standardised across each country and each policy year and reform scenario that is simulated. The tax-benefit rules, expressed as monetary or non-monetary values or as definitions of policy structures, take the form of parameters of EUROMOD functions. These functions constitute the building blocks of the EUROMOD tax-benefit modelling language and are read directly by the model software.
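The general idea of policy rules as parameters of generic functions, assembled into a ‘policy spine’, can be illustrated with a toy sketch. This is plain Python with invented function names, rules and amounts; EUROMOD's actual language, functions and interface differ:

```python
# Toy sketch of the layered design: generic functions whose parameters
# carry everything country-specific, assembled into an ordered 'policy
# spine' that an engine runs over micro-data. Names and rules invented.

def income_list(person, components):
    """An 'income list': an aggregation of named monetary variables."""
    return sum(person.get(c, 0.0) for c in components)

def flat_tax(person, base_components, rate, allowance):
    """Generic tax function: the base, rate and allowance are parameters."""
    base = max(0.0, income_list(person, base_components) - allowance)
    return {"tax": base * rate}

def child_benefit(person, amount_per_child):
    """Generic benefit function parameterised by a per-child amount."""
    return {"benefit": person.get("n_children", 0) * amount_per_child}

# A country's system is an ordered list of (function, parameters): the
# 'policy spine'. A 'policy swap' would splice one country's entry into
# another country's spine, with adapted parameters.
system_A = [
    (child_benefit, {"amount_per_child": 100.0}),
    (flat_tax, {"base_components": ["earnings", "self_emp"],
                "rate": 0.25, "allowance": 5000.0}),
]

def run_spine(spine, person):
    """Run each policy in order and accumulate taxes and benefits."""
    out = {"tax": 0.0, "benefit": 0.0}
    for func, params in spine:
        for k, v in func(person, **params).items():
            out[k] += v
    return out

person = {"earnings": 20000.0, "n_children": 2}
print(run_spine(system_A, person))  # {'tax': 3750.0, 'benefit': 200.0}
```

In EUROMOD itself these building blocks are specified through the user interface rather than written as code.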
They are fully documented in help screens within the user interface. The advantages of this additional layer include a coding facility that emphasises the special features of tax-benefit calculations and enables the comparison of the structure of different policy instruments within national systems as well as
across countries. This is particularly helpful when integrating policies from other countries into national systems or making common changes across countries. Use of national models for this purpose would require a comprehensive understanding of the programming of each national system, appreciation of each of the various model designs and frameworks, use of multiple languages and/or software packages, and so on. The EUROMOD core executable, written in C++ and compiled, is never accessed directly by users. This is possible, while maintaining full flexibility, because there are no parts of the tax-benefit calculations or other assumptions that are hard-wired in this code. Policy changes, including entirely new systems, are implemented without reprogramming the code itself. When a user runs EUROMOD, the executable reads the tax-benefit rules stored in the user interface, applies those rules to the input micro-data and produces an individual-level output data file containing relevant information from the input data and the tax-benefit simulation, which can be analysed using statistical software of the user’s choosing. The implementation of complex policy reforms is facilitated by the flexible definition of the major functions. These include the functions used to define units of assessment (i.e. the group of people on which the tax-benefit rule is to be performed), the ‘income lists’ (i.e. the aggregations of monetary variables used as input to tax-benefit algorithms) and the ‘policy spine’ (i.e. the order in which policy instruments are simulated). The user interface is standalone software programmed using the Microsoft .NET Framework. There are two important positive externalities to building (and keeping maintained and updated) a many-country flexible and adaptable model like EUROMOD.
The first is that, given the effort and resources that go into this, it makes sense for the model to be made generally available and for it not to be used exclusively by its developers, as is often the case with microsimulation models. The second is that the flexible framework offers a short cut for building models for other countries, not only adding European countries as the EU expands, but also developing new models for countries such as Russia (Popova, 2013) and Australia (Hayes & Redmond, 2014). Using the EUROMOD framework and software as a starting point is in many ways analogous to the adoption of the LIAM2 or MODGEN platforms when starting to construct a dynamic (longitudinal) microsimulation model, with the additional benefit of the potential to use the coding of European tax-benefit rules as a ‘library’ to adapt to provide the content corresponding to the rules of the new country. Some of these new model developments also provide the prospect for multi-country analysis. For example, there is an ongoing collaboration among some of the Balkan countries to make use of the EUROMOD platform to build models with the explicit intention that they will be used for comparisons. The Serbian model SRMOD is the first completed step in this process (Ranđelović & Rakić, 2013), followed by the Macedonian
model, MAKMOD (Mojsoska Blazevski, Petreski, & Petreska, 2013). Similarly, the South African model SAMOD, again using the EUROMOD platform (Wilkinson, 2009), has been joined by a sister model for Namibia (NAMOD), with the aim, among other things, of modelling ‘borrowed’ policies that have been successful in a South African context (Wright, Noble, & Barnes, 2014). Both of these aspects of EUROMOD influence the directions and priorities for its ongoing development. Most obviously, they highlight the importance of comprehensive and regularly updated documentation as well as the functionality of the user interface and, given the number and range of users, the potential for linkage with other data, software and models.

4.3.3. Data

The micro-data requirements for each country in a multi-country model are the same as they would be for a national model of the same type and policy scope, but for one aspect. If the multi-country model is to be used for policy swaps (transferring policy elements from country A into country B), or for modelling the effects of common policies across several countries, then the data requirements for each policy system must be met by the input micro-databases for each country, even if some of the variables are not needed for the modelling of all national systems. For example, hours of paid work is an important variable for some national tax-benefit systems but not relevant for others. The same applies to housing costs, whether or not an employee is a civil servant, and region of residence, to give more examples. Thus, the input data need to contain the information necessary to simulate the policies of that country, and in addition the information needed for all other policy systems. In EUROMOD, if they do not, default values are provided which may be changed by the user. Swapping policies using national models would require not only the defaults to be set, but each of them to be identified in the first place.
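The default-filling idea can be sketched as follows. The variable names, defaults and precedence rules are invented for illustration (they are not EUROMOD's conventions):

```python
# Toy sketch: a country's input data may lack variables that another
# country's policy system needs (e.g. hours worked, housing costs).
# Explicit defaults let any policy system run on any country's dataset.
# Variable names and default values are purely illustrative.

REQUIRED_VARS = {
    "hours_worked": 0.0,       # needed by, e.g., in-work benefit rules abroad
    "housing_costs": 0.0,      # needed by housing benefit rules
    "is_civil_servant": False, # relevant to some contribution systems
    "region": "national",      # relevant where policy varies regionally
}

def fill_defaults(record, overrides=None):
    """Return a record usable by any country's policy rules.

    Precedence: observed values > user-supplied overrides > defaults.
    """
    filled = dict(REQUIRED_VARS)
    if overrides:
        filled.update(overrides)   # user-changed defaults take precedence
    filled.update(record)          # observed values always win
    return filled

rec = {"earnings": 1200.0, "housing_costs": 350.0}
print(fill_defaults(rec))
```

Making the defaults an explicit, inspectable table is the point: with national models side by side, each missing variable would first have to be discovered before any default could be chosen.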
National models require different variables, and in addition the names given to equivalent variables will not generally be the same. A first step in any simulation of common reforms across countries, and any policy swap exercise, would involve somehow mapping and reconciling these differences and equivalencies, and perhaps the re-naming of variables into a common scheme. In EUROMOD there is a common variable naming convention which applies to output and intermediate variables as well as input data variables. This reduces, although does not eliminate, the need to check for equivalence of variable definitions across countries, as well as greatly reducing the number of variables that the model would need to handle if countries did not use the common naming convention. (It should be stressed that, for flexibility reasons, there is no restriction in
EUROMOD on the number of variables. But on the grounds of transparency it is important that the same variable is named in the same way in each country, and that multiple variables with similar definitions can easily be identified and grouped together.) It is not essential for the input data in a multi-country model to originate from a common or harmonised source. It is more important that the data are of high quality, represent the population in question and provide the information needed for the simulations. The first version of EUROMOD, covering the EU-15, used a number of different sources of household micro-data (Sutherland, 2001). The current version uses cross-sectional micro-data from the European Union Statistics on Income and Living Conditions (EU-SILC). This output-harmonised source of data is not ideal for the purposes of EU microsimulation, not least because not all relevant aspects are in fact harmonised in a transparent way (Figari, Levy, & Sutherland, 2007). Nevertheless it was preferred over the previous approach of using a range of national data sources, some of which may be better suited, for a number of reasons. First, the SILC data are the only or most suitable micro-data that are made available for research use in many countries of the enlarged EU; it was bound to be the input dataset of choice in these cases. Secondly, since the SILC is the main source of data for much social monitoring and analysis in the EU, including for official monitoring of progress towards the Europe 2020 targets, it maintains coherence and consistency if EUROMOD also uses these data. (Although, as Figari, Iacovou, et al. (2012) explain, indicators based on simulated incomes do not necessarily take the same value as indicators based on incomes as recorded by the SILC.) Thirdly, there are considerable advantages to a common access permission process, synchronised release of data and a common data structure.
The importance of these dimensions of convenience should not be under-estimated in the context of so many countries. (In fact, EUROMOD uses Family Resources Survey data for the United Kingdom; this is the basis for the UK SILC from 2012.) Without a multi-country data source like the SILC it would be necessary to harmonise aspects of the heterogeneous individual national data sources, or at least to document the ways in which they differed and how this affected the comparability of the multi-country analysis (Sutherland, 2001). The dimensions that are important include the definition of the ‘household’ unit, the reference time period of the income data, the imputation of gross incomes from net (if only net are available), any mismatch in the reference period of income and characteristics, the treatment of missing values, as well as a more general assessment of the quality of the sample (population coverage, non-response, under-reporting and item non-response, etc.)
and of the weights. It is worth noting that, with the exception of a couple of these points, the need to consider differences in treatment and variations in quality also applies to micro-data from the SILC. It should also be emphasised that full harmonisation, in the sense of commonly defined variables across countries, is not what a multi-country model requires. Indeed, one of the main drawbacks of the SILC data as provided by Eurostat (the User Data Base or UDB) is that incomes from benefits and pensions are aggregated into harmonised variables defined according to function (e.g. ‘old age’ or ‘unemployment’). To be usable by EUROMOD, the individual component benefit payments must be imputed from the aggregates so that interactions between individual instruments can be captured, and so that those that are not simulated (because of lack of information, for example on contribution histories in the case of pensions) can be identified and treated correctly. In addition, in some cases there are variables that are needed by the simulations but are not available in the Eurostat UDB. They are imputed, necessarily approximately, using available SILC variables and external information. This inevitably introduces some error, which is avoided or reduced by making use of variables from the national SILC data where these are accessible. The underlying national dataset often contains more information than the UDB and does not aggregate benefit incomes into harmonised variables. This illustrates how national datasets may be preferred over harmonised data sources from the perspective of the quality, and indeed the comparability, of results. However, the amount of work involved in building a multi-country input dataset is of course greater in this case, in terms of organisation and synchronisation, harmonisation of the common input variables and adjusting for basic differences in the source data (reference time period, household definition, imputation strategies and so on).

4.3.4. Simulations

A model specifically designed for multi-country analysis, such as EUROMOD, can achieve much higher cross-country consistency than a set of national models because of its flexibility in defining concepts and assumptions and its transparency in identifying differences and similarities. However, it is not possible to achieve 100% comparability in what is simulated (how much of the tax-benefit system as a whole) and how (with what assumptions). In this respect the challenges faced by EUROMOD users are similar to those faced when using national models side-by-side. The significance of any particular shortcoming in the data and simulation will vary in size across countries. A common treatment in a particular dimension will not necessarily result in the most comparable results. The approach taken by EUROMOD in making use of the available SILC data and in simulating taxes and benefits aims to do the best possible job in each country, rather than standardise the approach, which
might involve less comprehensive or less precise results in some countries. The advantage of using EUROMOD arises from the transparency of the model, the country-by-country validation process and the extensive documentation in Country Reports through which users can make themselves aware of instances of differing treatment and change the treatment if they wish.7 Two examples illustrate how ‘comparability of simulations’ can be a complex issue. First, there is the question of how to account for non-take-up of certain means-tested benefits. In some countries, where there is evidence that the problem is substantial, where the benefits are a sizeable component of incomes and where there is some information on the incidence of nontake-up on which to base an adjustment, something can be done. In other cases, where little is known or the effects are thought to be small, typically no adjustment is made and it is assumed that all those calculated to be eligible actually receive their entitlement. In EUROMOD the specific treatment is documented in Country Reports and summarised across countries in an annual cross country comparative report (e.g. Jara & Sutherland, 2013). In a collection of national models one might take a similar approach, with the documentation being an essential ingredient.8 A second example is provided by the processes used to update data from the reference period of the input data to the reference year of the policies being simulated. Updating the level of market incomes by source, as well as certain expenditures such as housing costs, involves calculating factors from relevant indexes usually available at national level which are therefore subject to variation depending on what information is available and which income components are deemed to need their own treatment in each country. 
In this respect the EUROMOD process is the same as that for a collection of national models although, again, relies on comprehensive documentation and guidelines about the most appropriate indexes to use. The approach to updating the characteristics of the population may also be a source of non-comparability. Usually the period between the data collection and the policy year being simulated is too short (3 5 years at most) for much change in relevant characteristics (labour market, demographics and household composition). However, such changes may be significant in some countries and not in others, and where they exist may vary in nature. In this respect EUROMOD users are in the same
7. EUROMOD Country Reports are downloadable from https://www.iser.essex.ac.uk/euromod/resources-for-euromod-users/country-reports
8. For certain applications one might want to assume 100% take-up of all benefits if the focus is on entitlement rather than receipt, and hence the intended effect of the system. In this case the EUROMOD user can straightforwardly ‘switch off’ the non-take-up adjustment in instances where it is applied by default.
Holly Sutherland
position as users of collections of national models and can make their own adjustments by re-weighting or transforming the input data to meet the needs of their analysis. One example is the use of EUROMOD to nowcast the income distribution and risk of poverty in eight countries in 2012 based on 2007 data (Navicke, Rastrigina, & Sutherland, 2013). This was a period of rapid change in the labour market in some countries, and in this analysis transitions between states are explicitly modelled to adjust for this. It is worth noting that one of the main limitations to the comparability of cross-country results is the lack of up-to-date, harmonised and synchronised detailed macro-level statistics on labour market activity, household demographic characteristics and movements in incomes by source and type of recipient with which to adjust the data and modelling, and to validate results.

Extending the scope of simulations beyond the components of household disposable income within a (generally) static framework is naturally of interest for multi-country analysis. For comparative purposes it can be important to consider policies more broadly than simply focusing on those elements that directly affect disposable income. This is not only because broader measures of welfare are of interest but also because different choices are made by governments between direct and indirect taxes or cash and non-cash benefits, for example. Excluding some components may give a misleading picture of the relative extent of redistribution across countries. In terms of policy scope, the effects of indirect taxes have been analysed in a multi-country framework using Household Budget Survey data, linked to EUROMOD or results from EUROMOD (Decoster, Loughrey, O’Donoghue, & Verwerft, 2010; O’Donoghue, Baldini, & Mantovani, 2004). The distributional effects of non-cash benefits are considered alongside cash benefits and direct taxes in several countries by Paulus, Sutherland, and Tsakloglou (2010).
The two are considered together by Figari and Paulus (2013). Moreover, there are numerous cross-national studies that estimate labour supply effects of policy changes by linking EUROMOD to econometric models of labour supply or by using elasticities from other studies. These include Immervoll, Kleven, Kreiner, and Saez (2007), Colombino, Locatelli, Narazani, O’Donoghue, and Shima (2008), Bargain et al. (2013) and Bargain et al. (2014).
4.4. Uses and applications

As explained above there are several distinct ways in which multi-country models can be deployed to make comparisons across countries. Here, we consider them each in turn, providing illustrations selected from the recent literature. In some cases the type of analysis can also be carried out as a
Multi-Country Microsimulation
national study for one country and the comparison adds perspective and breadth. In others the main motivation is the cross-country comparison or results for a combination of countries.
4.4.1. Comparisons of the effects of existing policies

Multi-country microsimulations can be used to understand the operation of existing policy systems, how they impact on households and how this differs across countries (Immervoll et al., 2006; Paulus et al., 2009). In particular they can add to the detail that is available in surveys and other cross-national sources of micro-data. For example, Verbist and Figari (2014) use EUROMOD simulations to analyse the progressivity of personal income taxes and social insurance contributions for EU countries, including measuring the effects of various types of tax concession (exemptions, deductions, allowances and credits). Verbist (2007) analyses the distributional effects of the tax treatment of the main types of replacement income (pensions and unemployment benefits) across the EU15 countries.

Comparisons of the work incentive effects of tax-benefit systems make use of the ‘what if’ functionality of multi-country microsimulation models. They measure work incentives by changing the labour market status or extent of participation of each individual in turn and use the model to recalculate household income for each individual scenario, either on the intensive margin (how much to work) (see Immervoll, 2004; Jara & Tumino, 2013) or the extensive margin (whether to work at all) (see Carone, Immervoll, Paturot, & Salomäki, 2004; Immervoll & O’Donoghue, 2004; Figari, Salvatori, et al., 2011; Fernandez Salgado, Figari, Sutherland, & Tumino, 2014).

For example, Figure 4.1 illustrates how Jara and Tumino (2013) use EUROMOD to compare the distribution as well as the average Marginal Effective Tax Rate (METR) across the EU27. This adds considerably to the information available from the ‘hypothetical family’ calculations regularly used by the OECD to compare the work incentive implications of policy systems across countries. In particular, Figure 4.1 shows how the median METR for people in paid work in 2007 varied from 20% in Greece to over 50% in Belgium. METRs varied rather little within some countries, notably the Eastern European countries with flat income tax systems, while the difference between the 25th and 75th percentiles was over 15 percentage points in the Southern European countries (except Italy and Malta), Sweden, Denmark, Ireland and Hungary. Similar calculations reported in Jara and Sutherland (2013) show that very high METRs (over 60%) are faced by non-negligible numbers of workers (at least 10%) under the 2009 tax-benefit system in Belgium, Denmark, Germany, Ireland, Hungary, Romania and the United Kingdom. Marginal effective tax rates and their equivalent on the extensive margin, participation tax rates, are useful indicators to compare the extent to which tax-benefit systems may limit employment for certain individuals, such as those with high METRs.

Figure 4.1. Marginal effective tax rates across the EU, 2007 (%). [Chart: median and mean METR and the 75th–25th percentile range, by country (EL, PT, MT, CZ, LT, FR, LV, SI, PL, IT, LU, AT, DK, BE, CY, EE, ES, SK, RO, BG, SE, UK, IE, HU, NL, FI, DE).] Source: Jara and Tumino (2013) using EUROMOD version F6.20.

Static microsimulation calculations of budget sets for individuals (household disposable income estimates for a range of labour market participation scenarios) are a necessary component for estimating labour supply models. Estimating labour supply elasticities so that they can be meaningfully compared across countries requires (at least) a consistent framework for calculating the budget sets for each country. Using EUROMOD together with TAXSIM, Bargain et al. (2014) provide the first large-scale international comparison of labour supply elasticities, including 17 EU countries and the United States.
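The METR calculation described above can be sketched in a few lines: increment an individual's gross earnings by a small amount, recompute disposable income under the tax-benefit rules, and measure the share of the increment lost to extra tax and withdrawn benefits. The tax-benefit function below is an entirely hypothetical stand-in, not EUROMOD's simulation of any real system.

```python
def disposable_income(gross):
    """Hypothetical tax-benefit system: 20% flat tax above an
    allowance of 10,000, plus a means-tested benefit of up to 3,000
    withdrawn at 60 cents per unit of gross income."""
    tax = 0.20 * max(gross - 10_000, 0)
    benefit = max(3_000 - 0.60 * gross, 0)
    return gross - tax + benefit

def metr(gross, delta=100.0):
    """Marginal effective tax rate: the share of a small earnings
    increase that does not reach disposable income."""
    gain = disposable_income(gross + delta) - disposable_income(gross)
    return 1.0 - gain / delta

# In the benefit-withdrawal range the METR is the 60% taper;
# above it, only the 20% income tax bites.
print(round(metr(2_000), 2))   # → 0.6
print(round(metr(12_000), 2))  # → 0.2
```

Overlapping instruments are what generate the very high METRs noted in the text: where the taper and the tax applied simultaneously in this toy system, the two rates would add.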
The use of a harmonised approach produces results that show smaller differences across countries than those in previous studies using different data, microsimulation models and methodological choices. Microsimulation can also be used to calculate indicators of the effects of policies that capture the differences in national specifics to improve comparability when considering many countries. For example, in the case of measuring the relative size of support for children across countries, Figari, Paulus, et al. (2011) calculate ‘child contingent’ incomes, estimated as the change in household disposable income for families with children, if they did not have children. This measure captures not only the gross benefit payments labelled explicitly for children but also the effect of taxing these payments, additions that families may receive through benefits labelled for other purposes, and child-related tax concessions. Figure 4.2 shows the great diversity in the 19 countries considered in how the support per child varies across the income distribution, and whether it is delivered through tax or gross benefit payments.

Figure 4.2. Child-contingent payments (as a % of national per capita disposable income) and the share of children by decile group. [Panels for BE, DK, DE, EE, EL, ES, FR, IE, IT, LU, HU, NL, AT, PL, PT, SI, FI, SE and the UK, each showing child-contingent benefits, child-contingent taxes, total net payments and the share of children by decile group.] Notes: Bars show components of spending per child as a proportion of overall average per capita disposable income, by decile group. Deciles have been constructed on the basis of equivalised household disposable income of the entire population, using the OECD equivalence scale. Estimates relate to policy years 2001, 2003 or 2005. Source: Figari, Paulus, et al. (2011) using EUROMOD version D24.

A related group of microsimulation-based indicators permits comparison of the ‘automatic’ effects of tax-benefit systems when economic conditions change, for example in response to an income or unemployment shock, and in the absence of any direct government action. At the micro-level, comparisons show how well household incomes are protected if one or more of their members becomes unemployed (for example). At the macro-level they indicate the extent to which income or output fluctuations are moderated through the automatic stabilisation of the tax-benefit system. The calculations involve comparing household income pre- and post-shock, in the form of a replacement rate, using either re-weighting or explicit simulation of market income change or labour market transitions to simulate the shock. For comparative purposes the shock may be stylised and common (e.g. a 5% increase in unemployment) or reflect the observed change in unemployment. The latter approach is taken by Fernandez Salgado et al. (2014), who analyse the distribution of replacement rates when simulating the unemployment shock due to the Great Recession. They distinguish between short- and long-term unemployment, and their findings show how the relative extent of protection differs for the two scenarios among the six countries considered. Dolls, Fuest, and Peichl (2012) use stylised shocks to compare the stabilising properties of tax-benefit systems in Europe and the United States using EUROMOD and TAXSIM.9 They model a negative income shock where all household gross incomes fall by 5% and an asymmetric unemployment shock in which aggregate household income also decreases by 5%.
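The stabilisation logic behind such comparisons can be sketched as follows: apply a common stylised shock (here a 5% cut in all gross incomes), recompute disposable income, and measure the share of the aggregate gross-income loss that never reaches disposable incomes. The tax-benefit function and the household incomes are invented for illustration; the real exercise uses the full microsimulation of each national system.

```python
def disposable_income(gross):
    # Hypothetical system: 30% flat tax above a 5,000 allowance,
    # plus a guaranteed minimum income topping households up to 8,000.
    tax = 0.30 * max(gross - 5_000, 0)
    return max(gross - tax, 8_000)

def stabilisation_coefficient(gross_incomes, shock=0.05):
    """Share of an aggregate gross-income shock absorbed by the
    tax-benefit system (a stylised income shock of the kind used
    by Dolls et al.)."""
    d_gross = sum(y * shock for y in gross_incomes)
    d_disp = sum(disposable_income(y) - disposable_income(y * (1 - shock))
                 for y in gross_incomes)
    return 1.0 - d_disp / d_gross

households = [6_000, 15_000, 30_000, 60_000]
print(round(stabilisation_coefficient(households), 2))  # → 0.34
```

In this toy system the absorption comes from the tax falling with income and the minimum-income floor catching the poorest household, the same mechanisms that drive the EU–US differences reported below.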
They find that the EU tax-benefit systems absorb a greater proportion of the income variation than the US system: 38% compared with 32% of the income shock and 47% compared with 34% of the unemployment shock. This is largely explained by the higher coverage and generosity of unemployment benefits in EU countries.10

As well as isolating the automatic effects of policies as economic conditions change, microsimulation is also used to measure and compare the effects of policy changes made by governments. Decomposing the effects of explicit policy choices from the other influences on the income distribution (which include exogenous changes in market income, automatic policy effects and behavioural responses to policy change) allows us to
9. TAXSIM is the NBER microsimulation model that calculates US federal and state income taxes (Feenberg & Coutts, 1993).
10. In this example two models are used, requiring the authors to deal with, or ignore, any differences in model assumptions and concepts.
compare the extent to which government actions contributed to, or mitigated, any observed change in income inequality, for example. Studies with this as the focus compare the effect of post-reform (end period) policies with a counterfactual constructed from the pre-reform (start period) policies, applied to a single input dataset. Of course this is informative if done for a single country. Multi-country studies provide additional perspective on the direction, size and distribution of the policy effect in each country, as well as valuable evidence on the differences and similarities of policy responses at particular periods of time or under certain economic conditions. They also require alignment of key methodological choices which affect the interpretation of results. These include how to index the pre-reform policies to make nominal policy parameters comparable over time, and what point in time the input data should refer to.

In the first case there is a basic choice between an indexation that captures what would have happened without explicit policy reform and what should have happened. Typically what would have happened varies across countries between no indexation at all and a set of statutory rules governing each part of the system. In some cases there are established patterns of indexation practice, but these may be abandoned in times of economic crisis or political change. A cross-country analysis that uses ‘business as usual’ indexation to define the counterfactual policy scenario faces challenges in terms of establishing a meaningful and comparable set of counterfactuals (Avram et al., 2013; Leventi, Levy, Matsaganis, Paulus, & Sutherland, 2010). The other option is to define the counterfactual that captures what should have happened, according to a principled and transparent criterion.
This might be keeping up with inflation, so as to maintain a constant real income, or keeping up with average market incomes, consistent with the overall tax burden and expenditure level remaining broadly constant in relative terms (Bargain & Callan, 2010; Hills, Paulus, Sutherland, & Tasseva, 2014). Depending on the interpretation one wishes to place on the results, many other indexation assumptions are potentially available. For example, a ‘fiscally neutral’ scenario might calculate a revenue-neutral indexation factor such that pre- and post-reform systems have the same net cost (given unchanging economic and demographic conditions). The distributional implications of spending the available resources in alternative ways can then be measured.

The second key choice, the reference period for the input micro-dataset (and hence the distribution of market income and other household characteristics which condition the policy effects that are measured), is often constrained by the practical considerations of what is available. Given a choice, data from the start, end or middle of the period can be used, but in each case the interpretation of results should take the choice into account. In a multi-country analysis there is the additional choice
over whether to align the reference period of the data and/or the period of policy change to be the same in each case, or whether to consider points in time and periods that are somehow equivalent in terms of the economic cycle or some other criterion. If data for both the start and end point are available then there is an option to carry out a full decomposition and to measure the effect of policy change, for example using the Shorrocks-Shapley decomposition (Bargain & Callan, 2010).11
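The core of this decomposition approach can be sketched as follows: hold the input data fixed, apply the end-period system and the start-period system with its monetary parameters uprated by an indexation factor alpha, and attribute the difference in a summary index to policy change. Both ‘systems’ below are hypothetical stand-ins, and the Gini routine is a deliberately naive O(n²) version for tiny samples.

```python
def gini(incomes):
    # Naive Gini coefficient: mean absolute difference over twice the mean.
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(a - b) for a in incomes for b in incomes)
    return diff_sum / (2 * n * n * mean)

def system_start(gross, alpha=1.0):
    # Start-period policy: 25% tax above an allowance of 8,000,
    # with the allowance uprated by the indexation factor alpha.
    return gross - 0.25 * max(gross - 8_000 * alpha, 0)

def system_end(gross):
    # End-period policy: 30% tax above an allowance of 12,000.
    return gross - 0.30 * max(gross - 12_000, 0)

data = [9_000, 18_000, 27_000, 45_000, 90_000]  # fixed input data
alpha = 1.10  # e.g. market income growth over the period

g_counterfactual = gini([system_start(y, alpha) for y in data])
g_reform = gini([system_end(y) for y in data])
policy_effect = g_reform - g_counterfactual
print(round(policy_effect, 3))  # → -0.012 (the reform reduced inequality here)
```

The choice of alpha is exactly the indexation question discussed above: a different counterfactual uprating rule changes the measured policy effect even though the data and the end-period system are unchanged.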
4.4.2. Comparisons of the effects of policy changes

This section describes the ways in which multi-country microsimulation is used to assess and compare the effects of policy reform, either actual or potential, in different national contexts. Comparisons can be motivated in several ways. First, there may be a common objective that is met in different ways in each national context. For example, with the goal of reducing the ‘at risk of poverty’ rate, policies may be changed in ways that address the differences in the incidence or cause of poverty in each country (e.g. by increasing the incentive to take paid work, and its remuneration, for lone parents in one country, while enhancing minimum income provisions for the elderly in another). Multi-country microsimulation can be used to assess the extent to which poverty might potentially be reduced, and at what budgetary cost. The effects of different actual policy reforms in different countries but with common objectives can be analysed with countries side-by-side. One example is Avram et al. (2013), who analyse the first-order distributional effects of fiscal consolidation measures taken in nine European countries in the period up to 2012 from the start of the financial and economic crisis. Figure 4.3 shows the percentage change in household income due to the measures across the (simulated) 2012 income distribution. The measures include different mixes of increases in income tax and social contributions and cuts in public pensions, other cash benefits and public sector pay.

Four features of this figure demonstrate the added value of cross-country comparisons of this type, relative to single country studies. First, the scale of the effect varies greatly across the countries (noting that the country charts are drawn to different scales but the grid interval is uniformly two percentage points), ranging from an average drop in income
11. Not only does the full decomposition offer the opportunity to measure policy effects that are not biased by using data from a specific reference period or from adopting assumptions about income growth, but it also allows other components of change in the income distribution to be identified separately. This can include estimates of the behavioural responses to the policy changes that have occurred in the period (Bargain, 2012).
Figure 4.3. Percentage change in household disposable income due to fiscal consolidation measures 2008-2012, by household income decile group. [Panels for Estonia, Greece, Spain, Italy, Latvia, Lithuania, Portugal, Romania and the UK, each showing contributions from income tax and workers’ SIC, non-pension benefits, public pensions, (net) public wages, and total household disposable income, by decile group.] Notes: Deciles are based on equivalised household disposable income in 2012 in the absence of fiscal consolidation measures and are constructed using the modified OECD equivalence scale to adjust incomes for household size. The lowest income group is labelled ‘1’ and the highest ‘10’. The charts are drawn to different scales, but the interval between gridlines on each of them is the same. Source: Avram et al. (2013) using EUROMOD version F6.0.
of 11.6% in Greece to 1.6% in Italy. Secondly, the overall first-order distributional effects range from broadly progressive in Greece, Spain, Latvia and the United Kingdom to broadly regressive in Estonia. The respective governments have made very different choices about who should bear the greatest cost of austerity. Thirdly, this is reflected in the different choices about which instruments to use and, finally, the incidence of the particular changes is not necessarily as one might expect a priori. For example, increases in income tax have a roughly proportional effect in many countries and are concentrated on higher income households only in Spain and the United Kingdom, as might be expected a priori. Cuts in benefits particularly target the better off in Latvia, where contributory parental benefits were heavily cut.

A second approach is to contrast the effects of a common, hypothetical policy reform in several countries, highlighting the relevance of the interactions of a specific policy design with population characteristics and economic conditions. Often the ‘reform policy’ is designed to highlight features of the existing national system that it replaces or supplements. Examples include Atkinson, Bourguignon, O’Donoghue, Sutherland, and Utili (2002) and Mantovani, Papadopoulos, Sutherland, and Tsakloglou (2007) for minimum guaranteed pensions, Levy, Lietz, and Sutherland (2007a) and Matsaganis et al. (2006) for universal child benefits, Callan and Sutherland (1997) for basic income, Bargain and Orsini (2007) and Figari (2010) for in-work benefits, Matsaganis and Flevotomou (2008) for universal housing transfers, Figari, Paulus, et al. (2012) for the taxation of housing wealth, and Paulus and Peichl (2009) for flat taxes.
This type of analysis is usually complicated by the need for the reform policy to be scaled somehow if it is to have an equivalent effect in countries with different levels of income, and because of the need to consider how the reform policies should be integrated with existing national policies. Given that the starting points are different (e.g. the tax systems may treat pensions differently), the net effects will differ too.

A third approach involves assessing the effects of policies from one country when simulated in another, a method known as ‘policy swapping’. This highlights how the effects of policies differ across different populations and economic circumstances. Examples of this kind of ‘policy learning’ experiment include De Lathouwer (1996) for unemployment benefits in Belgium and the Netherlands, and many studies of child and family benefits: for France and the United Kingdom, Atkinson, Bourguignon, and Chiappori (1988); for Austria, Spain and the United Kingdom, Levy, Lietz, and Sutherland (2007b); for Poland, France, Austria and the United Kingdom, Levy, Morawski, and Myck (2009); and for Lithuania, Estonia, Hungary, Slovenia and the Czech Republic, Salanauskaite and Verbist (2013). Although none of these studies includes simulation of behavioural responses to the replacement of national policy by borrowed policy, this
would in principle be possible, and Bargain and Orsini (2007) include labour supply effects when introducing a stylised version of the UK Working Families Tax Credit into France, Germany and Finland. Even when using a multi-country microsimulation model that is designed to facilitate this technique, policy swapping is not a mechanical procedure. Each exercise has its own motivation and corresponding decisions to be made about which aspects of policy (and assumptions driving its impact) are to be ‘borrowed’ from elsewhere and which are to be retained from the existing local situation. Examples of issues to consider include the integration of the borrowed policies with the retained national system (e.g. should a benefit that is taxed in the donor country also be taxed in the country into which it is swapped, if equivalent existing benefits are not taxed?); the take-up treatment; and the scaling up or down of policy parameters to adjust for differences in income level between the donor and recipient countries.

4.4.3. Supra-national microsimulation

The natural territorial scope for a tax-benefit microsimulation model is a country or nation. This is because in most countries some or all of the tax-benefit system is legislated and administered nationally, because the micro-data that are used as an input dataset are (typically) representative at national level, because other data used to update, adjust and validate the model are usually made available at national level, and because the economy and society are usually assumed to exist and operate at this level. However, although the supra-national administration of the EU currently has no relevant policy-making powers, analysis which considers the EU (or the euro zone) as a whole is highly relevant to approaching the design of tax-benefit policy measures to facilitate economic stabilisation and encourage social cohesion.
Analogously to regionalised national models, EUROMOD is able to draw out the implications of potential EU-level policy reforms for between- as well as within-country redistribution (Levy, Matsaganis, & Sutherland, 2013), policy harmonisation and stabilisation (Bargain et al., 2013; Jara & Sutherland, 2014), as well as for the EU income distribution. As an illustration, taken from Levy et al. (2013), Figure 4.4 shows the implied flows of resources in or out of each country in the case of an EU-wide Child Basic Income (CBI) introduced in addition to existing national systems and set at €50 per month for all children aged under 6. Under the modelled scheme the CBI would be taxable as the mother’s income (and hence partly financed at national level, covering 15% of the total cost in aggregate) with the remainder covered by an EU flat tax chargeable on all household incomes at a rate of 0.2%. The main net gainers from this form of CBI would be (in order of magnitude relative to GDP) Bulgaria, Romania, Hungary, Poland, the Baltic countries, the Czech Republic and
Figure 4.4. EU Child Basic Income of €50 per month per child aged under six: net flows as a percentage of national GDP. [Bar chart ranging from about -0.1% to 0.5% of GDP, for BE, BG, CZ, DK, DE, EE, IE, EL, ES, FR, IT, CY, LV, LT, LU, HU, MT, NL, AT, PL, PT, RO, SI, SK, FI, SE and the UK.] Source: Levy et al. (2013) using EUROMOD version F5.36.
Slovakia. The South European countries (except Italy), France, Cyprus, Malta, Slovenia and the United Kingdom (the latter two marginally) would also benefit. The main net contributors (relative to their own GDP) would be Denmark, Germany, the Netherlands, Finland, Sweden and Belgium, followed by Austria, Italy, Ireland and Luxembourg. These net outcomes depend not only on the relative income levels across countries but also on the proportion of young children in the populations and the amount of tax that can be clawed back from the earnings of mothers of young children. This simple example demonstrates the potential for the between- as well as the within-country distributional implications of EU policies to be assessed using multi-country microsimulation, in this case with EUROMOD.
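The accounting behind such net flows can be sketched at country level as follows. All aggregates below are made up for illustration; the real exercise runs the full microsimulation household by household, so the clawback share varies with mothers' earnings rather than being a single parameter.

```python
def net_flow(children_under6, cbi_monthly, tax_clawback_share,
             household_income, eu_flat_tax_rate):
    """Net annual resource flow into a country from an EU-financed
    child basic income: gross CBI received, minus the part clawed
    back by taxing it as the mother's income, minus the country's
    contribution through the EU flat tax on all household incomes."""
    gross_cbi = children_under6 * cbi_monthly * 12
    net_cbi = gross_cbi * (1 - tax_clawback_share)
    contribution = eu_flat_tax_rate * household_income
    return net_cbi - contribution

# Illustrative (not actual) aggregates for a single country:
flow = net_flow(children_under6=400_000, cbi_monthly=50,
                tax_clawback_share=0.15, household_income=90e9,
                eu_flat_tax_rate=0.002)
print(f"net flow: {flow / 1e6:.0f} million")  # → net flow: 24 million
```

Whether a country gains or loses turns on the ratio of young children to the aggregate household income base on which the flat tax is levied, which is why the poorer, younger-population countries appear as net gainers in Figure 4.4.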
4.5. Summary and future directions

This chapter has described the methodological challenges of multi-country microsimulation modelling and summarised the approach taken by EUROMOD, as a prime example, to overcome them. It has illustrated the wide range of applications that this model’s flexibility and transparency permits with a selection of studies from the recent literature. One of the key features of a many-country model is that the scale of resources that are invested in constructing and maintaining it can only be justified by an open access policy. Therefore, future directions for
EUROMOD will involve meeting the future needs of many users for comparative and multi-country microsimulation, from both the academic and policy-making spheres. This will consist of further enhancing flexibility, transparency and comparability while also facilitating linkages in a number of directions. These will include linkage with macro-models (see Bourguignon & Bussolo, 2013 for a review and Peichl, 2009 for a discussion of methodological issues), linkage to other types of microsimulation models (see Liégeois & Dekkers, 2014 for an example), further exercises in behavioural and particularly labour supply modelling, and experiments with dynamic microsimulation methods for forecasting purposes. Extensions in policy scope and applicability will be possible through linkage to alternative input datasets such as Household Budget Survey data (for indirect taxes), panel data (to add information about employment history and to provide an inter-temporal dimension) and macro statistics of many kinds, for example to estimate the incidence of non-cash benefits. Geographically there is potential to extend multi-country microsimulation to global regions other than the EU, such as southern Africa, Latin America or the Balkan region. There is also potential to extend EUROMOD beyond the EU to include other OECD countries to aid comparisons, for example, between the EU and the United States.

Beyond EUROMOD and cross-sectional tax-benefit modelling, some questions remain, raised at the beginning of this chapter, about why there are few examples of multi-country modelling using other types of microsimulation models.
Given the special relevance of policy differences to cross-country comparisons and hence multi-country modelling, this question applies particularly in the case of longitudinal (dynamic) modelling for the analysis of the effects of pensions, long-term care and other policies that depend on the evolution of characteristics and behaviour, as well as the impact of policies themselves, over the longer term.12 Writing in 2011, Dekkers and Zaidi (2011) viewed the development of dynamic microsimulation models as being at the same stage that static models were when Callan and Sutherland (1997) demonstrated the problems with using national static models side-by-side for comparative analysis. As noted above, modelling platforms or frameworks for building dynamic models exist. The next steps include adding multi-country content that can be used to provide comparable results for published analysis. This in turn may require the modelling frameworks to be further adapted to provide a disciplined structure and set of guidelines for that content and its country-specific documentation, perhaps learning from the EUROMOD experience.
12. Models of individual wealth dynamics and the effects of wealth taxes could be another example.
This chapter concludes by summarising the added value of multi-country microsimulation of the effects of policies. First, analysis that considers a group of countries as a single entity (such as the EU) requires a multi-country model. Secondly, cross-country comparisons of the effects of policies and policy reforms add to single country analysis in a number of ways. They provide the opportunity for ‘policy learning’, which is relevant not only for policy-makers but also within the scientific literatures on the policy-making process. Consideration of a number of national settings within the same analysis provides a kind of ‘laboratory’ in which to analyse the effects of different policies with common objectives or the effects of similar policies in different contexts. Comparisons across countries provide the basis for assessing the robustness of findings and generalising conclusions. In addition, even single country studies benefit from the discipline and transparency imposed on the simulation set-up by a multi-country model that offers a structured set of options to choose from. Taken together, this more than justifies the considerable effort that is necessary to build and maintain microsimulation models that are designed for the purpose of multi-country analysis.

Acknowledgements

I received useful comments on a draft of this paper from a referee and my colleagues, Francesco Figari, Xavier Jara, Chrysa Leventi, Jekaterina Navicke, Alari Paulus and Iva Tasseva. I am also grateful to them and to Silvia Avram, Paola De Agostini, Christine Lietz, Olga Rastrigina and Alberto Tumino for permission to draw on our joint work. The usual disclaimers apply.

References

Atkinson, A. B., Bourguignon, F., & Chiappori, P.-A. (1988). What do we learn about tax reform from international comparisons? France and Britain. European Economic Review, 32(2-3), 343-352.
Atkinson, A. B., Bourguignon, F., O’Donoghue, C., Sutherland, H., & Utili, F. (2002).
Microsimulation of social policy in the European Union: Case study of a European minimum pension. Economica, 69, 229–243.
Avram, S., Figari, F., Leventi, C., Levy, H., Navicke, J., Matsaganis, M., … Sutherland, H. (2013). The distributional effects of fiscal consolidation in nine EU countries. EUROMOD Working Paper No. EM2/13. University of Essex, Colchester.
Bargain, O. (2012). Decomposition analysis of distributive policies using behavioural simulations. International Tax and Public Finance, 19(5), 708–731.
Multi-Country Microsimulation
Bargain, O., & Callan, T. (2010). Analysing the effects of tax-benefit reforms on income distribution: A decomposition approach. Journal of Economic Inequality, 8(1), 1–21.
Bargain, O., Dolls, M., Fuest, C., Neumann, D., Peichl, A., Pestel, N., & Siegloch, S. (2013). Fiscal union in Europe? Redistributive and stabilising effects of a European tax-benefit system and fiscal equalisation mechanism. Economic Policy, 28(75), 375–422.
Bargain, O., & Orsini, K. (2007). Beans for breakfast? How portable is the British workfare model? In O. Bargain (Ed.), Microsimulation in action: Policy analysis in Europe using EUROMOD (Vol. 25, pp. 165–198). Research in Labour Economics. Bingley, UK: Emerald Group Publishing Limited.
Bargain, O., Orsini, K., & Peichl, A. (2014). Comparing labour supply elasticities in Europe and the US: New results. Journal of Human Resources.
Bourguignon, F., & Bussolo, M. (2013). Income distribution in computable general equilibrium modelling. In P. B. Dixon & D. W. Jorgenson (Eds.), Handbook of computable general equilibrium modelling (Vols. 1A and 1B, chapter 21, pp. 1383–1437). New York: Elsevier.
Callan, T., & Sutherland, H. (1997). The impact of comparable policies in European countries: Microsimulation approaches. European Economic Review, 41(3–5), 627–633.
Cantó, O., Adiego, M., Ayala, L., Levy, H., & Paniagua, M. (2014). Going regional: The effectiveness of different tax-benefit policies in combating child poverty in Spain. In G. Dekkers, M. Keegan, & C. O'Donoghue (Eds.), New pathways in microsimulation. Farnham: Ashgate.
Carone, G., Immervoll, H., Paturot, D., & Salomäki, A. (2004). Indicators of unemployment and low-wage traps (marginal effective tax rates on employment incomes). Social, Employment and Migration Working Papers 18, OECD.
Colombino, U., Locatelli, M., Narazani, E., O'Donoghue, C., & Shima, I. (2008). Behavioural and welfare effects of basic income policies: A simulation for European countries. EUROMOD Working Paper No.
EM5/08. University of Essex, Colchester.
Decoster, A., Loughrey, J., O'Donoghue, C., & Verwerft, D. (2010). How regressive are indirect taxes? A microsimulation analysis for five European countries. Journal of Policy Analysis & Management, 29(2), 326–350.
Dekkers, G., Buslei, H., Cozzolino, M., Desmet, R., Geyer, J., Hofmann, D., … Tedeschi, S. (2010). What are the consequences of the European AWG-projections on the adequacy of pensions? An application of the dynamic microsimulation model MIDAS for Belgium, Germany and Italy. In C. O'Donoghue (Ed.), Life-cycle
microsimulation modelling: Constructing and using dynamic microsimulation models. LAP LAMBERT Academic Publishing.
Dekkers, G., & Zaidi, A. (2011). The European network for dynamic microsimulation (EURODYM): A vision and the state of affairs. International Journal of Microsimulation, 4(1), 100–105.
De Lathouwer, L. (1996). A case study of unemployment scheme for Belgium and the Netherlands. In A. Harding (Ed.), Microsimulation and public policy (chapter 4, pp. 69–92). Amsterdam: Elsevier.
Dolls, M., Fuest, C., & Peichl, A. (2012). Automatic stabilizers and economic crisis: US vs. Europe. Journal of Public Economics, 96(3–4), 279–294.
Feenberg, D. R., & Coutts, E. (1993). An introduction to the TAXSIM model. Journal of Policy Analysis and Management, 12(1), 189–194.
Fernandez Salgado, M., Figari, F., Sutherland, H., & Tumino, A. (2014). Welfare compensation for unemployment in the Great Recession. Review of Income and Wealth, 60(Special Issue), S177–S204.
Figari, F. (2010). Can in-work benefits improve social inclusion in the Southern European countries? Journal of European Social Policy, 20(4), 301–315.
Figari, F., Iacovou, M., Skew, A. J., & Sutherland, H. (2012). Approximations to the truth: Comparing survey and microsimulation approaches to measuring income for social indicators. Social Indicators Research, 105(3), 387–407.
Figari, F., Levy, H., & Sutherland, H. (2007). Using the EU-SILC for policy simulation: Prospects, some limitations and suggestions. In Comparative EU statistics on income and living conditions: Issues and challenges (pp. 345–373). Eurostat Methodologies and Working Papers. Luxembourg: Office for Official Publications of the European Communities.
Figari, F., & Paulus, A. (2013). The distributional effects of taxes and transfers under alternative income concepts: The importance of three 'i's. Public Finance Review.
Figari, F., Paulus, A., & Sutherland, H. (2011).
Measuring the size and impact of public cash support for children in cross-national perspective. Social Science Computer Review, 29(1), 85–102.
Figari, F., Paulus, A., & Sutherland, H. (forthcoming). Microsimulation and policy analysis. In A. B. Atkinson & F. Bourguignon (Eds.), Handbook of income inequality (Vol. 2). New York: Elsevier.
Figari, F., Paulus, A., Sutherland, H., Tsakloglou, P., Verbist, G., & Zantomio, F. (2012). Taxing home ownership: Distributional effects of including net imputed rent in taxable income. EUROMOD Working Paper No. EM4/12. University of Essex, Colchester.
Figari, F., Salvatori, A., & Sutherland, H. (2011). Economic downturn and stress testing European welfare systems. In H. Immervoll, A. Peichl, & K. Tatsiramos (Eds.), Who loses in the downturn?
Economic crisis, employment and income distribution (Vol. 32, pp. 257–286). Research in Labour Economics. Bingley, UK: Emerald Group Publishing Limited.
Harding, A. (1996). Introduction and overview. In A. Harding (Ed.), Microsimulation and public policy (chapter 1, pp. 1–22). Number 232 in Contributions to Economic Analysis. Amsterdam: Elsevier.
Hayes, P., & Redmond, G. (2014). Could a Universal Family Payment improve gender equity and reduce child poverty in Australia? A microsimulation analysis. EUROMOD Working Paper No. EM3/14. ISER, University of Essex, Colchester.
Hills, J., Paulus, A., Sutherland, H., & Tasseva, I. (2014). A lost decade? Decomposing the effect of 2001–11 tax-benefit policy changes on the income distribution in EU countries. ImPRovE Working Paper No. 14/03.
Immervoll, H. (2004). Average and marginal effective tax rates facing workers in the EU: A micro-level analysis of levels, distributions and driving factors. Social, Employment and Migration Working Papers 19, OECD.
Immervoll, H., Kleven, H. J., Kreiner, C. T., & Saez, E. (2007). Welfare reform in European countries: A microsimulation analysis. The Economic Journal, 117(516), 1–44.
Immervoll, H., Levy, H., Lietz, C., Mantovani, D., O'Donoghue, C., Sutherland, H., & Verbist, G. (2006). Household incomes and redistribution in the European Union: Quantifying the equalizing properties of taxes and benefits. In D. Papadimitriou (Ed.), The distributional effects of government spending and taxation (pp. 135–165). Basingstoke: Palgrave Macmillan.
Immervoll, H., Levy, H., Lietz, C., Mantovani, D., & Sutherland, H. (2006). The sensitivity of poverty rates in the European Union to macro-level changes. Cambridge Journal of Economics, 30, 181–199.
Immervoll, H., & O'Donoghue, C. (2004). What difference does a job make? The income consequences of joblessness in Europe. In D. Gallie (Ed.), Resisting marginalisation: Unemployment experience and social policy in the European Union (chapter 5).
Oxford: Oxford University Press.
Immervoll, H., & O'Donoghue, C. (2009). Towards a multi-purpose framework for tax-benefit microsimulation. International Journal of Microsimulation, 2(2), 43–54.
Immervoll, H., O'Donoghue, C., & Sutherland, H. (1999). An introduction to EUROMOD. EUROMOD Working Paper No. EM0/99. ISER, University of Essex, Colchester.
Jara, H. X., & Sutherland, H. (2013). Baseline results from the new EU27 EUROMOD: 2007–2010 policies. EUROMOD Working Paper No. EM3/13. ISER, University of Essex, Colchester.
Jara, H. X., & Sutherland, H. (2014). The implications of an EMU unemployment insurance scheme for supporting incomes. EUROMOD Working Paper No. EM5/14. ISER, University of Essex, Colchester.
Jara, H. X., & Tumino, A. (2013). Tax-benefit systems, income distribution and work incentives in the European Union. International Journal of Microsimulation, 6(1), 27–62.
Leventi, C., Levy, H., Matsaganis, M., Paulus, A., & Sutherland, H. (2010). Modelling the distributional effects of austerity measures: The challenges of a comparative perspective. Research Note 8/2010 of the European Observatory on the Social Situation and Demography, European Commission.
Levy, H., Lietz, C., & Sutherland, H. (2007a). A guaranteed income for Europe's children? In S. P. Jenkins & J. Micklewright (Eds.), Inequality and poverty re-examined. Oxford: Oxford University Press.
Levy, H., Lietz, C., & Sutherland, H. (2007b). Swapping policies: Alternative tax-benefit strategies to support children in Austria, Spain and the UK. Journal of Social Policy, 36, 625–647.
Levy, H., Matsaganis, M., & Sutherland, H. (2013). Towards a European Union child basic income? Within and between country effects. International Journal of Microsimulation, 6(1), 63–85.
Levy, H., Morawski, L., & Myck, M. (2009). Alternative tax-benefit strategies to support children in Poland. In O. Lelkes & H. Sutherland (Eds.), Tax and benefit policies in the enlarged Europe: Assessing the impact with microsimulation models (chapter 6, pp. 125–151). Vienna: Ashgate.
Li, J., & O'Donoghue, C. (2013). A survey of dynamic microsimulation models: Uses, model structure and methodology. International Journal of Microsimulation, 6(2), 3–55.
Liégeois, P., & Dekkers, G. (2014). Combining EUROMOD and LIAM tools for the development of dynamic cross-sectional microsimulation models: A sneak preview. In G. Dekkers, M. Keegan, & C. O'Donoghue (Eds.), New pathways in microsimulation. Farnham: Ashgate.
Lietz, C., & Mantovani, D. (2007).
A short introduction to EUROMOD: An integrated European tax-benefit model. In O. Bargain (Ed.), Microsimulation in action: Policy analysis in Europe using EUROMOD (Vol. 25). Research in Labour Economics. Bingley, UK: Emerald Group Publishing Limited.
Mantovani, D., Papadopoulos, F., Sutherland, H., & Tsakloglou, P. (2007). Pension incomes in the European Union: Policy reform strategies in comparative perspective. In O. Bargain (Ed.), Microsimulation in action: Policy analysis in Europe using EUROMOD (Vol. 25, pp. 27–71). Research in Labour Economics. Bingley, UK: Emerald Group Publishing Limited.
Matsaganis, M., & Flevotomou, M. (2008). A basic income for housing? Simulating a universal housing transfer in the Netherlands and Sweden. Basic Income Studies, 2(2), 1–25.
Matsaganis, M., O'Donoghue, C., Levy, H., Coromaldi, M., Mercader-Prats, M., Rodrigues, C. F., … Tsakloglou, P. (2006). Reforming family transfers in Southern Europe: Is there a role for universal child benefits? Social Policy & Society, 5(2), 189–197.
Mojsoska Blazevski, N., Petreski, M., & Petreska, D. (2013). Increasing labour market activity of the poor and females: Let's make work pay in Macedonia. EUROMOD Working Paper No. EM16/13. University of Essex, Colchester.
Navicke, J., Rastrigina, O., & Sutherland, H. (2013). Nowcasting indicators of poverty risk in the European Union: A microsimulation approach. Social Indicators Research.
O'Donoghue, C., Baldini, M., & Mantovani, D. (2004). Modelling the redistributive impact of indirect taxes in Europe: An application of EUROMOD. EUROMOD Working Paper No. EM7/01. ISER, University of Essex, Colchester.
Paulus, A., Čok, M., Figari, F., Hegedüs, P., Kralik, S., Kump, N., … Võrk, A. (2009). The effects of taxes and benefits on income distribution in the enlarged EU. In O. Lelkes & H. Sutherland (Eds.), Tax and benefit policies in the enlarged Europe: Assessing the impact with microsimulation models (chapter 4, pp. 65–90). Vienna: Ashgate.
Paulus, A., & Peichl, A. (2009). Effects of flat tax reforms in Western Europe. Journal of Policy Modelling, 31(5), 620–636.
Paulus, A., Sutherland, H., & Tsakloglou, P. (2010). The distributional impact of in-kind public benefits in European countries. Journal of Policy Analysis & Management, 29(2), 243–266.
Peichl, A. (2009). The benefits and problems of linking micro and macro models: Evidence from a flat tax analysis. Journal of Applied Economics, 12(2), 301–329.
Popova, D. (2013). Impact assessment of alternative reforms of child allowances using RUSMOD: The static tax-benefit microsimulation model for Russia.
EUROMOD Working Paper No. EM9/13. University of Essex, Colchester.
Ranđelović, S., & Rakić, J. Ž. (2013). Improving work incentives in Serbia: Evaluation of a tax policy reform using SRMOD. International Journal of Microsimulation, 6(1), 157–176.
Salanauskaite, L., & Verbist, G. (2013). Is the neighbour's grass greener? Comparing family support in Lithuania and four other new member states. Journal of European Social Policy, 23(3), 315–331.
Sutherland, H. (Ed.). (2001). EUROMOD: An integrated European benefit-tax model. Final Report, EUROMOD Working Paper No. EM9/01. University of Essex, Colchester.
Sutherland, H., & Figari, F. (2013). EUROMOD: The European Union tax-benefit microsimulation model. International Journal of Microsimulation, 6(1), 4–26.
Urzúa, C. M. (Ed.). (2012). Fiscal inclusive development: Microsimulation models for Latin America. Instituto Tecnológico y de Estudios Superiores de Monterrey (ITESM), International Development Research Centre, United Nations Development Programme.
Verbist, G. (2007). The distribution effect of taxes on pensions and unemployment benefits in the EU-15. In O. Bargain (Ed.), Microsimulation in action: Policy analysis in Europe using EUROMOD (Vol. 25, pp. 73–99). Research in Labour Economics. Bingley, UK: Emerald Group Publishing Limited.
Verbist, G., & Figari, F. (2014). The redistributive effect and progressivity of taxes revisited: An international comparison across the European Union. FinanzArchiv/Public Finance Analysis.
Wilkinson, K. (2009). Adapting EUROMOD for use in a developing country: The case of South Africa and SAMOD. EUROMOD Working Paper No. EM5/09. University of Essex, Colchester.
Wright, G., Noble, M., & Barnes, H. (2014). NAMOD: A Namibian tax-benefit microsimulation model. EUROMOD Working Paper No. EM7/14. University of Essex, Colchester.
CHAPTER 5
Decomposing Changes in Income Distribution
Olivier Bargain
5.1. Introduction

For at least two decades, the potential and usefulness of microsimulation models for researching tax-benefit systems has found widespread acceptance. National models have been used extensively to evaluate the effect of current or past reforms on inequality or poverty. Tax-benefit microsimulation has also been used to assess alternative or hypothetical systems, including announced scenarios of potential reforms, to help policy makers design reforms or to allow international comparison of redistributive systems. More recently, it has been proposed to formally decompose the change in inequality and poverty over time in order to extract and quantify the actual contribution of tax-benefit policy reforms. This method relies on tax-benefit microsimulation to construct the counterfactual distributions used to identify the relative effect of policy changes. It is an important, retrospective use of tax-benefit microsimulation for analysts and policy makers, as it helps to establish whether actual tax-benefit reforms have achieved their objectives in terms of redistribution. The present chapter reviews the context, the decomposition approach, recent developments and future research paths in this branch of the microsimulation literature.

5.2. Context

5.2.1. Tax-Benefit Microsimulation

Tax-benefit microsimulation models have developed in parallel with the availability of computers. These simulators replicate tax-benefit rules for
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293004
a given country and, applied to a national household survey, allow disposable income (i.e. income after deducting taxes and social contributions and adding benefits) to be simulated for each representative household in the data. In this way, it becomes possible to simulate the complete distribution of disposable income of a given population, summary measures such as inequality or poverty indices, and counterfactual distributions under alternative tax-benefit regimes. By the mid-1990s, tax-benefit microsimulation models had spread to many Western and some Eastern European countries. Early examples of national tax-benefit microsimulation models in Europe are the TAXMOD model in the United Kingdom, developed around the work of Atkinson, King and Sutherland (Atkinson, King, & Sutherland, 1983), and the SYSIFF model in France (Bourguignon, Chiappori, & Sastre, 1988; Atkinson, Bourguignon, & Chiappori, 1988). Since then, tax-benefit microsimulation has been used extensively to support governments' decision making on the design of fiscal and social policy reforms (see Atkinson, 2005).1 This type of microsimulation approach has two main advantages over relying solely on national income distribution databases. The most obvious is that it is not restricted to analyses of existing tax-benefit systems, but allows the study of a wide range of reforms and hypothetical tax-benefit approaches. In addition, microsimulation techniques improve the measurement of the redistribution performed by tax-benefit systems when national databases do not provide detailed information about taxes or benefits. A third advantage, reviewed in this chapter, is the possibility of using microsimulation retrospectively to assess the effectiveness of past reforms in reducing poverty or inequality.
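The basic mechanics can be sketched in a few lines of code. The household records, tax schedule and benefit amounts below are invented purely to illustrate how a tax-benefit function maps market income and household characteristics to disposable income; they do not correspond to any actual system or model.

```python
# Minimal sketch of a static tax-benefit model: a stylised tax-benefit
# function maps each household's market income and characteristics to
# disposable income. All parameters and 'survey' records are hypothetical.

def disposable_income(market_income, n_children, params):
    """Flat-rate tax above an allowance plus a universal child benefit."""
    taxable = max(0.0, market_income - params["allowance"])
    tax = params["rate"] * taxable
    benefits = params["child_benefit"] * n_children
    return market_income - tax + benefits

params = {"allowance": 10_000.0, "rate": 0.30, "child_benefit": 1_500.0}

# Each 'survey' row: (household market income, number of children)
households = [(8_000.0, 2), (25_000.0, 1), (60_000.0, 0)]

# The simulated distribution of disposable income for this population
simulated = [disposable_income(y, k, params) for y, k in households]
```

Summary statistics (inequality or poverty indices) and counterfactuals under alternative rules then follow by recomputing `simulated` with a different `params` or a different function.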
5.2.2. Assessing the Effect of Policy Changes Using Decomposition by Income Types

Various strategies have been used to identify the impact of tax/transfer policy changes on poverty and inequality. Traditional approaches compare inequality with and without a specific income source, for example social transfers, and repeat the assessment at different points in time, possibly before and after important changes in redistributive policy (for instance, Gottschalk & Smeeding, 1997; Heathcote, Perri, & Violante, 2010). More generally, analyses make use of factor decompositions of inequality indices, as introduced and axiomatised by Shorrocks (1982) and extended, for instance, by Lerman and Yitzhaki (1985).2 Yet
1. Bourguignon and Spadaro (2006) give a general overview of what can be achieved using microsimulation techniques.
decomposition by income types has well-known limitations, summarised in Shorrocks (1999, 2013). In particular, the contribution assigned to a specific factor is not always interpretable in an intuitively meaningful way (cf. Chantreuil & Trannoy, 2011, 2013). Moreover, conventional procedures often place constraints on the types of poverty and inequality indices that can be used. Some indices require the introduction of a vaguely defined 'interaction' term in order to maintain the decomposition identity. There are other potential limitations concerning the types of contributory factors that can be considered; in particular, there is no established method for dealing with mixtures of factors, such as a simultaneous decomposition by subgroups (e.g. household types) and income components.3 To these limitations, we should add the following: measuring the contribution of taxes and transfers to overall inequality/poverty at different points in time does not allow the pure effect of policy changes to be disentangled from their interaction with the underlying population. This is well understood in the public finance literature. For instance, a given progressive income tax schedule redistributes more when the distribution of taxable incomes becomes more dispersed, and not at all if everybody earns the same (Dardanoni & Lambert, 2002; Musgrave & Thin, 1948). Hence, it is unclear how much of an observed change in tax liabilities (and resulting inequality) is due to policy reforms and what part is due to other
2. The approach sometimes referred to as the 'actual payments' method is used by Jenkins (1995) and Goodman, Johnson, and Webb (1997) to analyse inequality trends in the United Kingdom. Their finding that the tax-benefit system of the late 1980s was not less redistributive than that of the late 1970s is partly due to the fact that they do not account for changes in the underlying market income distribution. Jenkins indicates that decomposition results differ when used with different inequality indices, especially when the indices are sensitive to extreme income observations.
3. A similar trend exists in the field of decomposition of household income distributions across countries or over time into components like household size, labour supply and productivity. For instance, some authors have relied on the microsimulation of household behaviour to extend the simple framework of Mincer's wage regression model to additional factors like changes in occupational status or household composition (see Bourguignon, Ferreira, & Leite, 2008; Bourguignon, Fournier, & Gurgand, 2001; Hyslop & Maré, 2005). This type of approach is computationally demanding, which explains why decompositions of differences in household income distributions are less frequent, at least compared to Oaxaca–Blinder decompositions of wage distributions (e.g. DiNardo, Fortin, & Lemieux, 1996). On the other hand, simulation techniques overcome some of the limitations of traditional approaches such as decomposition by subgroup (see Cowell, 1998, for an overview). They refer to the distribution as a whole and isolate the effects of particular variables in a well-defined way thanks to counterfactual simulations. In contrast, decompositions by subgroup are often confined to inequality/poverty indices with particular decomposability properties and suffer from the fact that the effects of correlated variables cannot be disentangled in the necessarily coarse population partitions used in practice.
factors, notably the change in the underlying pre-tax income distribution. The same is true on the benefit side. The approaches discussed above may well identify an increase in social assistance income, but cannot say whether this arises from increased generosity of benefit payments or from an automatic increase in the incidence of transfers as unemployment rises. Before turning to the solution offered by tax-benefit microsimulation, let us mention two methods aimed at isolating the effects of tax policies on income redistribution. They allow for the analysis of changes in progressivity over time or across countries. The 'fixed-income' procedure proposed by Kasten, Sammartino, and Toder (1994) keeps market incomes fixed and equal to those of a base year, thus permitting the derivation of progressivity rankings of comparable tax-transfer systems after (de)inflating the tax thresholds and the relevant transfer parameters. The 'transplant-and-compare' method of Dardanoni and Lambert (2002) compares net income distributions that have been adjusted to a common base regime, in which differences in market income inequality, whatever their cause (behavioural changes, for instance), have been eliminated. Although useful for comparing the progressivity of different tax-transfer systems while controlling for changes in the market income distribution, these two approaches provide limited insight into the factors driving the observed changes and impose some constraints (either the use of the base-year population in Kasten et al., or the use of a specific parametric form for the income distribution in Dardanoni and Lambert).
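The mechanics of the fixed-income procedure can be illustrated with a deliberately stylised sketch. All incomes, schedule parameters and the deflator below are invented; the point is only that market incomes stay at base-year values while the end-year monetary parameters are deflated back to base-year money units.

```python
# Sketch of the 'fixed-income' idea: hold market incomes fixed at base-year
# values and deflate end-year monetary parameters to base-year price levels,
# so two tax schedules can be compared on a common income distribution.
# All incomes, parameters and the deflator are hypothetical.

def tax(income, allowance, rate):
    """Stylised schedule: flat rate on income above an allowance."""
    return rate * max(0.0, income - allowance)

base_incomes = [12_000.0, 30_000.0, 70_000.0]  # fixed base-year market incomes
deflator = 1.25                                # end-year / base-year price level

# Hypothetical schedules: the reform raised the allowance in nominal terms.
base_allowance, base_rate = 12_000.0, 0.25
end_allowance, end_rate = 16_250.0, 0.25
deflated_allowance = end_allowance / deflator  # end-year allowance in base-year units

base_liabilities = [tax(y, base_allowance, base_rate) for y in base_incomes]
end_liabilities = [tax(y, deflated_allowance, end_rate) for y in base_incomes]
# Comparing the two liability vectors on the same incomes isolates the change
# in the schedule from any change in the income distribution.
```

Any progressivity measure computed on the two liability vectors then reflects the schedules alone, not shifts in the underlying incomes.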
5.2.3. Assessing the Effect of Policy Changes Using Microsimulation

Another, more flexible method relies on tax-benefit microsimulation to construct counterfactual situations and to disentangle the pure effect of a policy change from changes in the environment in which the policy operates, particularly changes in market income inequality (see Atkinson, 2005, for a general statement). However, measures may be sensitive to the choice of indexation factor used in the 'no reform' counterfactual scenario. Results may also depend on the underlying population used to evaluate the policy change, either the base-period or the final-period data. This issue has been investigated in the literature on tax progressivity (e.g. Dardanoni & Lambert, 2002; Lambert & Thoresen, 2009), but has received little attention in actual policy evaluations, even though results may be sensitive to the choice made.4
4. For instance, Adam and Wakefield (2005) analyse the distributional impact of tax-benefit reforms implemented between 1997 and 2005 in the United Kingdom as assessed on end-period data.
An exception is the study by Clark and Leicester (2004), who carefully investigate the distributional effect of policy changes over the 1980s and 1990s in the United Kingdom and provide an extensive sensitivity analysis. More recently, Bargain and Callan (2008) have suggested a formal framework based on Shorrocks's (1999, 2013) reinterpretation of the Shapley value decomposition. The change in poverty/inequality indices is decomposed into three components: (i) the effect of changes in tax-benefit policy, (ii) the effect of adjusting tax-benefit monetary parameters in line with market income growth and (iii) all changes not directly linked to tax-benefit policies (including changes in market income inequality). The policy impact can be assessed alternatively on base-period and end-period data, but symmetry arguments suggest that the two alternative measures should be averaged (see Kolenikov & Shorrocks, 2005; Shorrocks, 1999). This leads to a third decomposition in which the (averaged) policy effect is the contribution associated with the policy change in a two-way Shapley decomposition.5 When end-period data are not available, for example in forward-looking analysis of possible reforms, the base-weighted decomposition helps to extract an absolute measure of the impact of tax-benefit changes on the income distribution, evaluated against a distributionally neutral benchmark. When both end- and base-period data are available, the decomposition can be used to quantify the relative role of policy changes in inequality/poverty trends, as in the application to policy changes in France and Ireland in the late 1990s in Bargain and Callan (2008). This approach is set out in detail in the methodological section below. A series of contributions have followed.
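The averaging step is compact enough to write down directly. In the sketch below, the four index values are invented placeholders for quantities that a tax-benefit model would simulate; the only substantive content is the symmetric (Shapley) average of the policy effect measured on base-period and on end-period data.

```python
# Shapley averaging of the policy effect: the move from the old to the new
# policy is evaluated once on (uprated) base-period data and once on
# end-period data, and the two measures are averaged. The index values are
# invented placeholders for I[d_i(p_j, y_l)] computed by a model.

I = {
    ("old", "base"): 0.300,  # old policy applied to uprated base-period data
    ("new", "base"): 0.280,  # new policy applied to uprated base-period data
    ("old", "end"):  0.320,  # uprated old policy applied to end-period data
    ("new", "end"):  0.290,  # new policy applied to end-period data
}

effect_on_base = I[("new", "base")] - I[("old", "base")]
effect_on_end = I[("new", "end")] - I[("old", "end")]
shapley_policy_effect = 0.5 * (effect_on_base + effect_on_end)
```

Here both conditional measures indicate an inequality-reducing reform, and the averaged effect lies between them by construction.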
Bargain (2012a) shows that the Shorrocks–Shapley decomposition framework justifies and clarifies the use of market income indexation as the distributionally neutral backdrop against which to analyse policy changes, as opposed to the price indexation often used in policy evaluations. He presents an application of the method to the distributional effect of the policies implemented under the first New Labour government (1997–2001). Bargain (2012b) acknowledges that when actual reforms are motivated by work incentives, it is also crucial to evaluate behavioural responses and their distributional consequences. For that purpose, he augments the decomposition framework with an additional term accounting for the indirect policy effects due to labour supply responses to the reforms and presents an
5. The general Shorrocks–Shapley decomposition method has been applied in several contexts, including the decomposition of changes in poverty into components due to per capita income, income inequality and regional price variation (Kolenikov & Shorrocks, 2005). Here, income distribution statistics are decomposed into the effects of structural policy change, nominal adjustment and other factors (including shifts in gross income inequality).
application to the British reforms under New Labour. Hérault and Azpitarte (2013) extend this approach to account for labour supply changes that are not related to the policy changes; they present an application to policy changes in Australia between 1999 and 2007. Creedy and Hérault (2011) adapt the method to extract the effect of tax-benefit policies on 'non-welfarist' measures other than the distribution of disposable income, as well as on 'welfarist' measures of social welfare based on money metric utility, with an application to policy changes in Australia between 2001 and 2006. Other studies are direct applications of the simple decomposition approach, extracting the role of policy reforms during the early years of the economic crisis in Europe (Bargain, Callan, Doorley, & Keane, 2013) or over the period 1979–2007 in the United States (Bargain, Dolls, Peichl, Neumann, & Siegloch, 2015).
5.3. Methodological Characteristics and Choices

5.3.1. Methodology

We first introduce some notation and terminology. By household 'gross income' or 'market income', we mean the total amount of labour income, capital income and private pensions, before taxes and benefits (the treatment of replacement incomes and contributory benefits is detailed below). 'Disposable income' is the household income that remains after payment of taxes/social contributions and receipt of all transfers, as widely used to measure poverty and inequality. The matrix y describes the population contained in the data, that is, each row contains all the information about a given household (various market income sources and socio-demographic characteristics). Denote by d the 'tax-benefit function' transforming, for each household, market/gross incomes and household characteristics into a certain level of disposable income. Tax-benefit calculations also depend on a set of monetary parameters p (e.g. maximum benefit amounts, tax bracket thresholds, etc.). Thus, the distribution of disposable income is represented hereafter by di(pj, yl) for a hypothetical scenario combining the population of year l, the tax-benefit parameters of year j and the tax-benefit structure of year i. In empirical applications, we are interested in relative inequality/poverty indices I, computed as a function I[di(pj, yl)] of the (simulated) distribution of disposable income. Policy changes possibly combine changes in the policy structure d and changes in the parameters p (the 'uprating policy'). We also consider the possibility of nominally adjusting income levels by the uprating factor α1, that is, the income growth rate between year 0 and year 1. Thus, α1y0 retains the structural characteristics of year 0 data (in particular the distribution of gross income) but adopts the nominal levels prevailing in year 1. With this notation, we can easily represent
Decomposing Changes in Income Distribution
counterfactual situations. For instance, d1(p1, α1y0) represents disposable incomes obtained by applying the tax-benefit rules and parameters of year 1 to the nominally adjusted data of year 0. This backdrop, where the new policy is evaluated while holding the population constant, is used in the decomposition. Symmetrically, we may need to evaluate the distribution obtained with the initial policy applied to the new population. A measure d0(p0, y1) would not be consistent, since base-period parameters would be artificially applied to end-period income levels. For instance, previous tax band thresholds would be applied to new and possibly higher income levels, thereby generating artificial ‘fiscal drag’ (see Immervoll, 2005). Therefore, we need to construct counterfactuals where tax-benefit parameters can be uprated using the same factor α1 as used to scale up the distribution of gross income between periods 0 and 1. Clearly, the nominally adjusted schedule, denoted α1p0, is not identical to the actual set of parameters p1 as decided by the authorities. However, d0(α1p0, y1) suggests an interesting backdrop where the only policy change between years 0 and 1 is an uprating of monetary parameters in line with income growth. Characterize the total change Δ in the inequality/poverty index I between initial period 0 and final period 1 as:

Δ = I[d1(p1, y1)] − I[d0(p0, y0)]

This change in the distribution of disposable income, as summarized by index I, can be decomposed into the contribution of the change in the tax-benefit policy (the ‘policy effect’) and the contribution of changes in the underlying gross income distribution (or any other effects not directly linked to policy changes). The former effect corresponds to a shift from d0(p0,·) to d1(p1,·) while the latter is simply a move from base-year data y0 to final data y1. Thus the decomposition consists in a shift in data conditional on the initial policy, followed by a change in policy evaluated on final data (decomposition 1).
Or, alternatively and symmetrically, a change in policy evaluated on base-year data, followed by a change in underlying data conditional on the new policy (decomposition 2). Formally, and cautiously applying the nominally adjusted tax-benefit parameters α1p0 to final income y1, we can write decomposition (1) as:

Δ = {I[d1(p1, y1)] − I[d0(α1p0, y1)]}    (policy effect)
    + {I[d0(α1p0, y1)] − I[d0(α1p0, α1y0)]}    (other effects)
    + {I[d0(α1p0, α1y0)] − I[d0(p0, y0)]}    (income growth)
The first term captures the effect of the tax-policy change over the period conditional on final year data. Conditional on the policy structure of year 0, and for nominal levels of year 1, the second term is a catch-all component that includes the effect of everything except policy reforms,
Olivier Bargain
and in particular the change in market income inequality. Symmetrically, decomposition (2) can be written:

Δ = {I[d1(p1, y1)] − I[d1(p1, α1y0)]}    (other effects)
    + {I[d1(p1, α1y0)] − I[d0(α1p0, α1y0)]}    (policy effect)
    + {I[d0(α1p0, α1y0)] − I[d0(p0, y0)]}    (income growth)

Here, the end-period system is evaluated on nominally adjusted base-period data. In the above expressions, nominal adjustments necessarily lead to a sub-decomposition of the change in the income base into a growth component and an inequality component. This is reminiscent of decompositions of changes in absolute poverty into the contribution of income growth (holding inequality constant) and the contribution of income inequality (holding mean income constant), for instance in Datt and Ravallion (1992) and Kolenikov and Shorrocks (2005). In the present context, however, changes in market income levels and market income inequality are expressed after transformation into disposable income via the tax-benefit function d(p,·). It is easily shown that if the tax-benefit system is linear and continuous in p and y, which is the case in most countries, a simultaneous change in the nominal levels of both incomes and parameters does not affect the relative location of households in the distribution of disposable income:

di(αpj, αyl) = α di(pj, yl)

The direct consequence of this linear homogeneity property of the tax-benefit function is that the ‘income growth’ component (the third term in decompositions (1) and (2)) disappears.6 We test this property empirically in the next section. As explained by Shorrocks (1999, 2013), the Shapley value procedure extracts the marginal effect on a poverty/inequality statistic I of eliminating each of m contributory factors in sequence, and then assigns to each factor the average of its marginal contributions in all possible elimination sequences.7 In the present case, if the homogeneity property is verified,
6 In particular, the zeros contained in the initial market income distribution y0 correspond to unemployed and inactive households with no resources other than state transfers. In the ‘income growth’ component, welfare payments (in the parameter vector p0) are uprated by the same factor α1 as market incomes, so that the relative position of these households does not change.

7 This approach is used by Mookherjee and Shorrocks (1982) to decompose inequality trends in the United Kingdom into the contributions of subgroup population shares, subgroup mean incomes and subgroup inequalities. Jenkins and van Kerm (2005) analyse inequality change in the United Kingdom in the 1980s and discuss the choice of weights used in the decomposition: either base-period values, end-period values or the Shapley value (an averaging of all contributions).
the ‘policy effect’ and the ‘market income inequality effect’ under the Shapley decomposition are thus obtained by averaging the contributions from the two decompositions set out above, that is:

policy effect:  (1/2){I[d1(p1, y1)] − I[d0(α1p0, y1)]} + (1/2){I[d1(p1, α1y0)] − I[d0(p0, y0)]}

other effects:  (1/2){I[d0(α1p0, y1)] − I[d0(p0, y0)]} + (1/2){I[d1(p1, y1)] − I[d1(p1, α1y0)]}
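As an illustration, the two decompositions and their Shapley average can be sketched numerically. The following Python fragment is a minimal sketch, not part of the chapter's method: the tax-benefit functions d0 and d1, the parameter vectors and the simulated incomes are all invented for illustration, and the Gini coefficient stands in for the generic relative index I.

```python
import numpy as np

def gini(x):
    """Gini coefficient, used here as the relative inequality index I."""
    x = np.sort(np.asarray(x))
    n = len(x)
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

def d0(p, y):
    """Base-period tax-benefit function (invented): 30% tax above threshold
    p[0], plus a universal transfer p[1].  Monetary parameters and incomes
    enter linearly, so d0(a*p, a*y) = a*d0(p, y) (linear homogeneity)."""
    return y - 0.30 * np.maximum(y - p[0], 0.0) + p[1]

def d1(p, y):
    """End-period structure (invented 'reform'): higher marginal rate."""
    return y - 0.40 * np.maximum(y - p[0], 0.0) + p[1]

rng = np.random.default_rng(0)
y0 = rng.lognormal(3.0, 0.6, 5000)            # base-period gross incomes
y1 = y0 * rng.lognormal(0.08, 0.10, 5000)     # end-period incomes: growth + churn
a1 = y1.mean() / y0.mean()                    # uprating factor alpha_1
p0 = np.array([25.0, 5.0])                    # (threshold, transfer), year 0
p1 = np.array([30.0, 5.5])                    # actual year-1 parameters
I = gini

# Decomposition (1): data shift under the uprated old policy, then policy shift
policy_1 = I(d1(p1, y1)) - I(d0(a1 * p0, y1))
other_1  = I(d0(a1 * p0, y1)) - I(d0(a1 * p0, a1 * y0))
growth   = I(d0(a1 * p0, a1 * y0)) - I(d0(p0, y0))   # vanishes by homogeneity
# Decomposition (2): policy shift evaluated on uprated base-period data
policy_2 = I(d1(p1, a1 * y0)) - I(d0(a1 * p0, a1 * y0))
other_2  = I(d1(p1, y1)) - I(d1(p1, a1 * y0))
# Shapley value: average the two orderings
policy_shapley = 0.5 * (policy_1 + policy_2)
other_shapley  = 0.5 * (other_1 + other_2)
total = I(d1(p1, y1)) - I(d0(p0, y0))
assert abs(growth) < 1e-9                                      # homogeneity
assert abs(policy_shapley + other_shapley + growth - total) < 1e-9
```

Because d0 is linearly homogeneous and the Gini is scale-invariant, the `growth` term is zero up to floating-point error, so the Shapley policy and other effects sum exactly to the total change in the index.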
Hence, it is possible to quantify the relative weight of the policy change in explaining changes over time in poverty/inequality. It is also possible to examine the sensitivity of the results to the choice of decomposition: the ‘end-period weighted’ measure (1), the ‘base-period weighted’ measure (2) or the averaged Shorrocks-Shapley decomposition.

5.3.2. Choice of nominal adjustments

The policy effect disentangled in the first two decompositions, I[d1(p1,·)] − I[d0(α1p0,·)], does not only capture the effect of the change in policy structure (d0 to d1) on the income distribution. It also assesses the actual uprating policy (the shift from p0 to p1) against a scenario where parameters are adjusted in line with average income growth (α1p0). In fact, the way tax brackets and welfare payments are uprated by governments can have important implications for income distribution and public spending in the long run. Sutherland, Evans, Hancock, Hills, and Zantomio (2008) provide a very extensive analysis of this question. Governments have many options for uprating tax-benefit parameters, three of them fairly standard: (1) no uprating, (2) uprating in line with price inflation, (3) uprating in line with earnings growth. With non-indexation of tax brackets in progressive systems, or price indexation when incomes rise faster than prices, the total number of taxpayers (and the number of higher-rate taxpayers) increases. This phenomenon of ‘fiscal drag’ or ‘bracket creep’ must affect the final distribution of disposable income (see Immervoll, 2005; Gutierrez, Immervoll, & Sutherland, 2005). With price indexation of welfare payments when real income grows, those living on welfare fall further behind those receiving earnings, and relative poverty may increase. This phenomenon of ‘benefit erosion’ tends to have a larger distributional impact, especially through poverty effects, than fiscal drag (cf. Sutherland et al., 2008).
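The mechanics of fiscal drag and benefit erosion can be illustrated with a small simulation. Everything below (the income distribution, the growth and inflation rates, the tax threshold and the benefit level) is an invented example, assuming uniform nominal income growth g and price inflation pi with g > pi.

```python
import numpy as np

rng = np.random.default_rng(1)
y0 = rng.lognormal(3.0, 0.6, 10_000)    # base-period gross incomes (invented)
g, pi = 1.10, 1.02                      # nominal income growth vs. price inflation
y1 = g * y0                             # uniform growth: relative distribution unchanged

threshold0, benefit0 = 30.0, 5.0        # tax threshold and welfare payment, year 0

def share_taxpayers(y, threshold):
    """Share of households with income above the tax threshold."""
    return np.mean(y > threshold)

# (2) price indexation: 'bracket creep' -- more households pass the threshold
assert share_taxpayers(y1, pi * threshold0) > share_taxpayers(y0, threshold0)
# ... and benefit erosion: the transfer falls relative to average income
assert (pi * benefit0) / y1.mean() < benefit0 / y0.mean()
# (3) earnings indexation: distributionally neutral -- both ratios unchanged
assert np.isclose(share_taxpayers(y1, g * threshold0), share_taxpayers(y0, threshold0))
assert np.isclose((g * benefit0) / y1.mean(), benefit0 / y0.mean())
```

The last two assertions show why income-growth uprating provides the distributionally neutral benchmark discussed below, whereas price indexation does not.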
Hence, when it comes to assessing the distributional impacts of actual practices, the choice of the ‘no reform’ backdrop against which policy changes are gauged can be crucial. Price indexation seems an appealing and intuitive benchmark (see, for instance, Mitrusi & Poterba, 2000). However, Callan, Coleman, and Walsh (2006) argue that this is not an appropriate backdrop when considering impacts on relative poverty and inequality. For instance, price indexation of tax and welfare when real
incomes are growing would imply a worsening in the relative situation of those dependent on welfare payments. The Shapley-Shorrocks decomposition suggests a reference situation d0(α1p0,·) where tax-benefit parameters are uprated with income growth. More generally, this benchmark is suggested by several authors as the appropriate one for the purpose of evaluating the distributional effect of policies as compared to other changes in underlying data. Indeed, it provides a ‘distributionally neutral’ backdrop (Callan et al., 2006; Sutherland et al., 2008) or ‘constant progressivity’ counterfactual (Clark & Leicester, 2004) against which actual policy changes can be evaluated.8 In contrast, benefit erosion when real income increases, as described above, would not be captured by a no-reform scenario adjusted with price inflation. Arguably, the actual objectives of a government are more complex than mere redistribution and may actually include regressive policy changes. For instance, benefit erosion may be politically smoother than direct cuts in benefits when it comes to improving public finances. Notwithstanding, it seems important to provide policy makers with a gauge of the actual distributional implications of policy changes. A no-reform situation based on price indexation does not allow this and must imply normative assumptions which have not been fully investigated.9 In fact, the present framework allows us to illustrate the non-distributional neutrality of such a backdrop. Consider alternative decompositions whereby the data are uprated by the average income growth α1 while the parameters are uprated with price inflation π1. For instance, decomposition (1) becomes:

Δ = {I[d1(p1, y1)] − I[d0(π1p0, y1)]}    (policy effect)
    + {I[d0(π1p0, y1)] − I[d0(π1p0, α1y0)]}    (other effects)
    + {I[d0(π1p0, α1y0)] − I[d0(p0, y0)]}    (residual)
8 Callan et al. (2006) show that wage indexation gives rise to similar growth in real incomes across the income distribution, and is therefore a distributionally neutral benchmark against which actual policy changes can be evaluated. Similar choices are made by Thoresen (2004) and Clark and Leicester (2004), who analyse tax reforms in Norway and the United Kingdom, respectively. The desirability of earnings-related adjustments is also suggested by actual uprating policies in the Scandinavian countries (and the Netherlands), generally regarded as ‘best practice’ examples in terms of welfare provision.

9 Note that this reference situation is nonetheless used extensively in policy analyses. This choice is often justified on historical grounds, as it aims to guarantee some continuity in the evaluation of policies (see Sutherland et al., 2008). Clark and Leicester (2004) show that it actually captures only part of the uprating practices of the past decades in the United Kingdom. In particular, benefits were uprated with GDP prior to 1979 and some welfare elements are explicitly income-linked in the recent period (e.g. the child tax credit and the pension credit under the second Labour government).
This time, the last term does not vanish. This residual term represents the distributional effect of uniform income growth evaluated against a price-indexed system. This component captures some of the distributional changes ‘missed’ by the policy effect based on a price-indexed benchmark. If real income grows over time, the residual term will reflect the increase in poverty due to benefit erosion in a price-indexed situation. If the government actually adjusts tax-benefit parameters in line with price changes, and if the tax structure is left unchanged (d0), the decomposition will report a zero policy effect even though benefit erosion occurs. If the government adjusts parameters in line with income growth, the decomposition will overstate the actual policy effect.

5.3.3. Choice of definition of the policy effect

An important aspect is the perimeter of the policy effect we want to characterize. In the description above, we have described vector y as containing gross/market incomes, defined as all labour incomes, capital incomes and private pensions received in a given household. We have remained deliberately vague about contributory benefits and replacement incomes such as basic public pensions, unemployment benefits (or job seeker’s allowance) and disability benefits. These incomes could well be treated as transfers and, hence, be part of the redistributive function d. Alternatively, they can be treated as primary incomes and counted as earnings in vector y rather than as part of the redistribution scheme in d. The choice pertains to the nature of the income support system in a given country and to country-specific interpretation as to what constitutes a publicly provided social-insurance-based benefit and a state transfer. Where the link between contributions and social insurance benefits is strong, as in Continental or Nordic Europe, contributory benefits may be seen as more akin to private insurance than to a state transfer. For other countries (e.g. Ireland, the United Kingdom and the United States) the link between the cost of contributions and the value of benefits is looser, so that they can be treated as part of the state’s redistribution function. Another dimension related to the definition of the policy effect is its extent. In the methodological description above, we have considered that the ‘other effects’ are unrelated to policy changes. In fact, tax-benefit reforms may induce labour supply responses which affect the distribution of market income. We remain agnostic about whether this indirect (but possibly unintended) effect of policy reforms should be treated as part of the policy effect or not, as we can simply isolate this additional effect. Denote y_l^k the population of year l making labour supply choices as if living under the policy regime k. That is, we can estimate a behavioural model on base-period data and simulate the market incomes of the base-period population after adjustment to the new policy (y_0^1) or, inversely, estimate the model on end-period data and simulate the market
incomes under the ‘old’ policy (y_1^0). We are free to construct all types of counterfactuals, for instance a distribution di(pi, y_l^k) for the population of year l under the policy of year i and with labour supply choices adjusted to the policy of year k. The situations where labour supply is static, that is, y_l^k with k = l, are those used in the previous decompositions and are simply written yl to simplify notation. We assume homogeneity as described above. Then, we can account for three effects (policy, behaviour, other), which give six permutations and in principle six decompositions. In fact, the ‘other effects’ and the behavioural effects must be positioned consecutively, since they correspond to a split of the former ‘other effects’ in the primary decompositions (1) and (2). Hence we have only four decompositions. In the first two, labour supply behaviour is estimated on base-period data and the behavioural effect is simulated under the old policy regime:

0 and η(w, I) + ɛ1 + ɛ2 < 0. Alternatively, one could specify a complementary equation that generates involuntary unemployment, for example as in Blundell, Ham, and Meghir (1987). A different perspective from which to look at the possible divergence between optimal and observed hours is to think of workers as ‘captive’ to certain choices, as in Harris and Duncan (2002). Some authors, within the ‘marginalist’ approach, have exploited datasets containing explicit information on quantity constraints in the opportunity sets, for example Ham (1982), Colombino and Zabalza (1982), Colombino (1985), Altonji and Paxson (1988) and Ilmakunnas and Pudney (1990).

7.2.2.4. Non-linear budget constraints

Let us consider the following modification of problem (7.2):

max_{c,h} u(c, h)
s.t. c = f(wh, I)    (7.8)
Rolf Aaberge and Ugo Colombino
Here the function f(.,.) represents the tax-benefit rule, that is, the rule according to which the gross earnings wh and the exogenous gross income I are turned into net available income c (= consumption). If u(.,.) and f(.,.) are differentiable, u(.,.) is quasi-concave and f(.,.) is concave in h (i.e. the budget set is convex), then the following condition (together with the budget constraint) is necessary and sufficient for an (interior) solution of problem (7.8):
−(∂u/∂h) / (∂u/∂c) = ∂f/∂h    (7.9)
The condition is no longer sufficient if u(.,.) is not quasi-concave and/or the budget set is not convex. In these cases, the sufficient conditions for identifying a solution might become very cumbersome and impractical to use in applied research. Of course, the non-differentiability of f(.,.) also creates problems. However, most actual or reformed tax-benefit rules belong to the piecewise linear family, that is, they can be represented as a combination of linear segments. Starting with Burtless and Hausman (1978), a procedure has been designed for identifying the solution on convex budget sets defined by piecewise linear constraints. Let us suppose that as long as the consumer’s earnings do not exceed a certain amount E, she is not required to pay taxes on her earnings. However, for every euro of earnings above E she has to pay taxes according to a marginal tax rate τ. The first segment has slope w, the second segment has slope w(1 − τ). It is useful to define H = E/w, the hours of work corresponding to the ‘kink’, and I + E − w(1 − τ)H = I + Eτ, the ‘virtual’ exogenous income associated with the second segment (i.e. the intercept of the line that contains the second segment). Note that the exogenous income associated with the first segment is instead I, which is assumed to be tax-free. Then the problem is:

max_{c,h} u(c, h)
s.t. c ≤ I + wh
     c ≤ I + Eτ + w(1 − τ)h
     h ≥ 0    (7.10)
Now define h(n, q) as the ‘virtual’ labour supply given a wage rate n and an exogenous income q, that is, the value of h that solves the problem

max_{c,h} u(c, h)
s.t. c = q + nh    (7.11)
Labour Supply Models
The solution to problem (7.10) is then characterized as follows:

h = 0                      if h(w, I) ≤ 0
h = h(w, I)                if 0 < h(w, I) < H
h = H                      if h(w, I) ≥ H and h(w(1 − τ), I + Eτ) ≤ H
h = h(w(1 − τ), I + Eτ)    if h(w(1 − τ), I + Eτ) > H    (7.12)
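Characterization (7.12) translates directly into code. The sketch below assumes a two-segment convex budget set as in problem (7.10) and takes the virtual labour supply function h(n, q) as given; the linear specification h_lin and all parameter values are purely illustrative.

```python
def labour_supply_piecewise(w, I, E, tau, h_virtual):
    """Solution (7.12) on a two-segment convex budget set: earnings are
    tax-free up to E, taxed at marginal rate tau above.  h_virtual(n, q) is
    the 'virtual' labour supply at net wage n and (virtual) exogenous
    income q, as defined by problem (7.11)."""
    H = E / w                                      # hours of work at the kink
    h1 = h_virtual(w, I)                           # segment 1: slope w, income I
    h2 = h_virtual(w * (1.0 - tau), I + E * tau)   # segment 2: virtual income I + E*tau
    if h1 <= 0:
        return 0.0                                 # corner: non-participation
    if h1 < H:
        return h1                                  # interior on the first segment
    if h2 <= H:
        return H                                   # bunching at the kink
    return h2                                      # interior on the second segment

# illustrative linear virtual supply function (parameters invented)
h_lin = lambda n, q: 5.0 + 3.0 * n - 0.5 * q

print(labour_supply_piecewise(10.0, 2.0, 100.0, 0.3, h_lin))  # bunches at H = 10.0
print(labour_supply_piecewise(10.0, 2.0, 100.0, 0.1, h_lin))  # second segment: 26.0
```

The first call illustrates the bunching case: the segment-1 optimum lies beyond the kink while the segment-2 optimum lies before it, so hours pile up at H.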
The same procedure can be used to characterize the solution when the problem involves more than two segments, and can be extended (with due modifications) to cases with non-convex budget sets. The method originally proposed by Heckman (1974b) also adopts a very similar logic. The structural ‘marginalist’ approach can be extended in many directions. Instead of representing the budget constraint with a combination of linear segments (which in most cases in fact corresponds to the real system), one could use a smooth non-linear approximation (e.g. Flood & MaCurdy, 1992). Random components capturing preference heterogeneity and/or measurement/optimization errors can be specified in a way similar to what was illustrated in the linear budget constraint case. In principle the approach can also be extended to cover simultaneous household decisions, although most applications treat the husband’s decisions as unconditional and the wife’s decisions as conditional on the husband’s. Useful presentations are provided by Hausman (1979, 1985a), Moffitt (1986), Heckman and MaCurdy (1986) and Blundell and MaCurdy (1999). Duncan and Stark (2000) have developed an algorithm for generating piecewise linear budget constraints for estimation or simulation purposes. Applications to different countries and different tax-benefit rules and reforms include Burtless and Hausman (1978), Hausman (1979, 1980, 1985a, 1985b), Blomquist (1983), Zabalza (1983), Arrufat and Zabalza (1986), Blomquist and Hansson-Brusewitz (1990), Bourguignon and Magnac (1990), Colombino and Del Boca (1990), MaCurdy, Green, and Paarsch (1993), Triest (1990), Van Soest, Woittiez, and Kapteyn (1990) and Bloemen and Kapteyn (2008).3 More general surveys, also covering contributions that belong to the structural ‘marginalist’ approach, include Blundell and MaCurdy (1999), Blundell et al. (2007), Meghir and Phillips (2008) and Keane (2011).
In the second half of the 1980s the structural ‘marginalist’ approach was regarded as the dominant paradigm, and a special issue of the Journal of Human Resources (1990) was dedicated to applications of this method to various countries. The same issue of the JHR, however, also collects most of the critiques that eventually led to the adoption of alternative approaches. The method proposed by Heckman as well as the method
3 Hausman and Wise (1980), although applied to the demand for housing rather than to labour supply, is a very clear illustration of how the structural marginalist approach can be applied to non-convex budget sets.
proposed by Hausman and co-authors turn out, in practice, not to be so easily applicable to problems more complicated than those for which they were originally illustrated. First, the application is general and straightforward with convex budget sets (e.g. those generated by progressive taxation) and in the two-good case (e.g. leisure and consumption in the individual labour supply model). It is more case-specific, and tends to become computationally cumbersome, when the decision makers face non-convex budget sets and/or when more than two goods are choice variables (e.g. in the case of a many-person household). Second, in view of the computational problems, the above approach essentially forces the researcher to choose relatively simple specifications for the utility function or the labour supply functions. Third, computational and statistical consistency of ML estimation of the model requires imposing a priori the quasi-concavity of the utility function (e.g. Kapteyn, Kooreman, & van Soest, 1990; MaCurdy et al., 1993).4
7.2.3. The random utility maximization approach

As a response to the problems mentioned above, since the early 1990s researchers have made use of another innovative research effort that matured in the first half of the 1970s, namely the random utility maximization (RUM) model, or some variation of it, developed by McFadden (1974, 1984). The crucial advantage of this approach is that the solution of the utility maximization problem is represented in terms of comparisons of absolute values of utility rather than in terms of marginal variations of utility, and it is not affected by the specification of the utility function or of the tax-benefit rule. This approach is very convenient when compared to the previous ones, since it does not require going through complicated Kuhn-Tucker conditions involving derivatives of the utility function and of the budget constraints. Therefore, it is not affected by the complexity of the rule that defines the budget set or by how many goods are contained in the utility function. Equally important, the deterministic part of the utility function can be specified in a very flexible way without worrying about computational problems. The most popular version adopts the Extreme Value distribution for the stochastic component, which leads to an easy and intuitive expression for the probability that any particular alternative is chosen (i.e. the Multinomial or Conditional Logit model).
4 The simultaneous household decision model of Hausman and Ruud (1984) has essentially remained an isolated contribution. On the difficulties of applying the ‘marginalist’ approach outside the simplest scenarios, see also Bloemen and Kapteyn (2008).
7.2.3.1. The discrete choice model

This approach essentially consists in representing the budget set with a set of discrete alternatives or jobs. Early and path-breaking contributions include Zabalza et al. (1980), where labour supply is represented in terms of probabilities of choosing alternative hours of work or alternative jobs. This contribution, however, is essentially an ordinal probit analysis. Especially in view of modelling simultaneous household decisions, the Conditional Multinomial Logit model is much more convenient. This is the line chosen by Van Soest (1995). Although this very influential contribution can be classified as belonging to the RUM family, we denote it more specifically as a Discrete Choice (DC) model. First, the discreteness of the opportunity set is a distinctive feature of it (this is not the case in general for RUM models). Second, the random term that generates the probabilistic choices is given an eclectic interpretation that includes both the RUM-McFadden (1974, 1984) interpretation and the optimization error interpretation (the latter leading to a non-random utility model). Besides Van Soest (1995), many contributions have adopted the DC model during the last two decades. Among others: Duncan and Giles (1996), Bingley and Walker (1997), Blundell, Duncan, McCrae, and Meghir (2000), Van Soest, Das, and Gong (2002), Creedy, Kalb, and Scutella (2006), Haan and Steiner (2005), Brewer, Duncan, Shephard, and Suarez (2006), Labeaga, Oliver, and Spadaro (2008), Fuest, Peichl, and Schaefer (2008), Haan and Wrohlich (2011), Blundell and Shephard (2012), Bargain, Decoster, et al. (2013), and Bargain, Orsini, and Peichl (2014). The DC model typically treats (also) couples, with simultaneous decisions of the two partners, but in order to keep the illustration simple, we discuss the singles case below: the extension to couples is straightforward. The household chooses among T + 1 alternatives, h = 0, 1, …, T.
The utility derived from alternative h is first defined as non-stochastic, v(f(wh,I),h), where w is the fixed (individual-specific) gross wage rate, I is the exogenous income and f(.,.) is the tax-transfer rule that transforms gross incomes into net available income. In order to model the observed hours of work as the result of a probabilistic process, a random variable ɛ is added to the previously defined utility function: v(f(wh,I),h) + ɛ. As mentioned above, the random term is typically given two different interpretations (e.g. Van Soest, 1995): (i) the utility contribution of unobserved characteristics of the alternative choices; (ii) a measurement/optimization error. Interpretation (i) is compatible with the classic RUM interpretation and implies that the household is observed as choosing exactly what it prefers, and what it prefers is decided on the basis of v(f(wh,I),h) + ɛ. Interpretation (ii) instead implies that the household’s preferences are measured by v(f(wh,I),h), but the alternative to which the household is matched does not maximize v(f(wh,I),h) but rather v(f(wh,I),h) + ɛ: this might happen because of errors or because some other unexpected process
displaces the household from the preferred choice. The two interpretations, in principle, also have different implications for simulation and for welfare evaluation. The contributions adopting the DC approach stress the importance of a very flexible specification of v(f(wh,I),h) and of checking for its quasi-concavity (e.g. Van Soest, 1995; Van Soest et al., 2002). This focus of attention suggests that this approach tends to consider v(f(wh,I),h) as the true utility function and ɛ as a measurement/optimization error.5 Consistently, preference heterogeneity is preferably introduced through random preference parameters. By assuming that ɛ is i.i.d. Type I Extreme Value, one gets the Multinomial Logit or Conditional Logit expression for the probability that the household is observed working h hours:6

P(h) = exp{v(f(wh, I), h)} / Σ_{y=0}^{T} exp{v(f(wy, I), y)}    (7.13)
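A minimal sketch of the resulting choice probabilities, with an invented flat-tax rule standing in for f(.,.) and an illustrative quadratic specification for the systematic utility v:

```python
import numpy as np

def net_income(w, h, I, tau=0.3):
    """Invented stand-in for the tax-transfer rule f(wh, I): flat tax."""
    return (1.0 - tau) * w * h + I

def v(c, h, T=80.0):
    """Illustrative quadratic systematic utility in net income and leisure
    (all coefficients invented for the example)."""
    leisure = T - h
    return 0.05 * c - 5e-5 * c**2 + 0.05 * leisure - 2e-4 * leisure**2

def choice_probs(hours, w, I):
    """Conditional logit probabilities (7.13) over a discrete set of hours."""
    u = np.array([v(net_income(w, h, I), h) for h in hours])
    e = np.exp(u - u.max())          # subtract the max for numerical stability
    return e / e.sum()

hours = [0.0, 20.0, 40.0]            # non-participation, part-time, full-time
p = choice_probs(hours, w=15.0, I=10.0)
assert np.isclose(p.sum(), 1.0) and np.all(p > 0)
```

Each alternative receives a strictly positive probability, and the probabilities sum to one by construction of the logit transform.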
Model (7.13) usually does not fit labour supply data very well. For example, van Soest (1995) notes that the model over-predicts the number of people working part-time. More generally, certain types of jobs might differ according to a number of systematic factors that are not accounted for by the observed variables contained in v: (a) availability or density of job types; (b) fixed costs; (c) search costs; (d) systematic utility components. In order to account for these factors, the following ‘dummies refinement’ can be adopted. Let us define subsets S_0, …, S_L of the set {0, 1, …, T}. Clearly, the definition of the subsets should reflect some hypothesis about the differences between the values of h with respect to the factors (a) and (b) mentioned above. Now we specify the choice probability as follows:

P(h) = exp{v(f(wh, I), h) + Σ_ℓ γ_ℓ 1(h ∈ S_ℓ)} / Σ_{y=0}^{T} exp{v(f(wy, I), y) + Σ_ℓ γ_ℓ 1(y ∈ S_ℓ)}    (7.14)
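The refinement in (7.14) amounts to adding alternative-specific terms γ_ℓ inside the logit transform. The sketch below uses invented utilities and a single subset (part-time) to show how a negative dummy coefficient pulls down an otherwise over-predicted part-time probability:

```python
import numpy as np

def refined_probs(util, subsets, gammas):
    """'Dummies refinement' (7.14): add gamma_l to the utility of every
    alternative in subset S_l before taking the logit transform.
    util: systematic utilities v(.) per alternative; subsets: list of index
    arrays S_l; gammas: the matching coefficients (all invented here)."""
    u = np.array(util, dtype=float)
    for S, g in zip(subsets, gammas):
        u[S] += g
    e = np.exp(u - u.max())
    return e / e.sum()

# three alternatives: non-participation, part-time, full-time (utilities invented)
base_u = [1.0, 1.2, 1.5]
p_plain = refined_probs(base_u, subsets=[], gammas=[])      # reduces to (7.13)
# penalise part-time (index 1), e.g. for a low density of part-time jobs
p_dummy = refined_probs(base_u, subsets=[np.array([1])], gammas=[-0.8])
assert p_dummy[1] < p_plain[1]       # part-time probability is pulled down
assert np.isclose(p_dummy.sum(), 1.0)
```

With empty subsets the function reduces to the plain conditional logit (7.13), so the two calls can be compared directly.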
5 A motivation for interpreting ɛ as a measurement/optimization error in DC models is the relatively small number of values of h that are typically allowed to belong to the opportunity set, in many cases just three (non-participation, part-time and full-time). Since the observed distribution of hours worked is much more dispersed, it makes sense to allow for a measurement/optimization error.

6 The derivation of the Conditional Logit expression for utility maximization under the assumption that the utility random components are i.i.d. Type I Extreme Value distributed is due to McFadden (1974). It is conventional to call Conditional Logit a Multinomial Logit model with generic attributes (i.e. attributes like hours or income whose values vary across alternatives).
where 1(e) = 1 iff e is true. Many papers have adopted this refinement, for example Van Soest (1995), Callan and Van Soest (1996) and Kalb (2000), among others. Aaberge et al. (1995, 1999), Dagsvik and Strøm (2006), Colombino, Locatelli, Narazani, and O’Donoghue (2010) and Colombino (2013) also implement a similar procedure, which however is based on a specific structural interpretation of the dummies and of their coefficients (see expressions (7.21) and (7.22)). An alternative adjustment consists of imputing a monetary cost (or benefit) to some ranges of work hours:

P(h) = exp{v(f(wh, I) + Σ_ℓ c_ℓ 1(h ∈ S_ℓ), h)} / Σ_{y=0}^{T} exp{v(f(wy, I) + Σ_ℓ c_ℓ 1(y ∈ S_ℓ), y)}    (7.15)
A popular specification of the (7.15) type is interpreted as accounting for fixed costs of working c (e.g. Duncan & Harris, 2002; see also the survey by Blundell et al., 2007).

7.2.3.2. The random utility-random opportunities model

The Random Utility-Random Opportunities (RURO) model is an extension of McFadden’s RUM model. The utility is assumed to be of the following form:

U(f(wh, I), h, j) = v(f(wh, I), h) + ɛ(w, h, j)    (7.16)

where h is hours of work, w is the wage rate, I is the exogenous income, f is a tax-transfer function that transforms gross incomes into net income, j is a variable that captures other job and/or individual characteristics and ɛ is a random variable that varies across market and non-market alternatives. A first difference with respect to the DC model is that the utility function is directly specified as stochastic. The random component is interpreted as in McFadden’s (1974) presentation of the Conditional Logit model: besides the observed characteristics, there are other characteristics j of the job or of the household-job match that are observed by the household but not by the econometrician. Commuting time or required skill (when not observed by the analyst) are possible examples of the characteristics captured by j. Their effect upon utility is captured by ɛ(w,h,j). Second, the households maximize their utility by choosing not simply hours but rather opportunities (‘jobs’) defined by hours of work h, wage rates w (which can change across jobs for the same household) and other unobserved (by the analyst) attributes j. In the DC model, the household’s choice (how many hours of work) is analogous to the choice of a consumer deciding how many units of a consumption good (like meat, milk or gasoline) to buy every week. In the RURO model, the household is closer to McFadden’s commuter choosing among car, train or the
Downloaded by Monash University At 09:10 12 April 2016 (PT)
182
Rolf Aaberge and Ugo Colombino
BART shuttle when travelling along the San Francisco Bay (Domencich & McFadden, 1975) or to the McFadden’s household choosing among different apartment in different locations (McFadden, 1978). Third, besides not observing the other job characteristics j, the analyst does not know exactly which and how many jobs are contained in the household opportunity set; therefore the opportunity set can be seen as random from the analyst’s viewpoint. The opportunity set will in general contain more than one job of the same (w,h) type. These jobs will differ depending on the value of other unobserved (by the analyst) attributes. This implies that the number (or the density) of jobs belonging to the different types will plays a crucial role in the model. In Aaberge et al. (1995) the range of values of (w,h) is assumed to be continuous. Let B be the set of admissible values of (w,h) and p(x,y) the density of jobs of type (x,y). The household chooses h and j so as to maximize v(f(wh,I),h)+ɛ(j). Then it turns out that we get the (continuous) conditional logit expression for the probability density function of a (w,h) choice: n o exp vðf ðwh; IÞ; hÞ pðw; hÞ φðw; hÞ = Z ð7:17Þ n o exp vðf ðxy; IÞ; yÞ pðx; yÞ dx dy ðx;yÞ ∈ B
Expression (7.17) is based on Dagsvik (1994). The model is close to the continuous spatial model developed by Ben-Akiva and Watanatada (1981). It can also be seen as an extension of McFadden's Conditional Logit model in which the systematic utility of a job type (w,h) is 'weighted' by the number of jobs of that type available in the opportunity set. On the foundations and various applications of RURO models, see also Dagsvik (2000) and Dagsvik et al. (2014). Aaberge et al. (1999) formally derive a discrete version of model (7.17):

$$\varphi(w,h) = \frac{\exp\{v(f(wh,I),h)\}\, p(w,h)}{\sum_{(x,y)\in B} \exp\{v(f(xy,I),y)\}\, p(x,y)} \qquad (7.18)$$

The discrete version can be interpreted either as a more realistic representation or as a computational simplification of the continuous version.7
7. Tummers and Woittiez (1991) and Dickens and Lundberg (1993) develop labour supply models that are not based on the same stochastic assumptions as RURO's but in which different hours of work have different probabilities of being available; these models thus have some similarity with model (7.18). An alternative way to account for quantity constraints in the opportunity set is developed by Harris and Duncan (2002).
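The discrete RURO density (7.18) differs from a plain conditional logit only in that each exponentiated systematic utility is weighted by the opportunity density p(w,h). A minimal sketch, with a hypothetical utility and opportunity density (a 30% flat tax and full-time jobs assumed twice as frequent as other job types; none of these values come from the chapter's references):

```python
import math

def ruro_probabilities(alternatives, v, p):
    """Discrete RURO choice density (expression (7.18)): the exponentiated
    systematic utility of each (w, h) job type is weighted by the
    (relative) opportunity density p(w, h) before normalization."""
    weights = [p(w, h) * math.exp(v(w, h)) for (w, h) in alternatives]
    total = sum(weights)
    return {wh: wt / total for wh, wt in zip(alternatives, weights)}

# Hypothetical example: one wage rate, three hours alternatives
alts = [(10.0, 0), (10.0, 20), (10.0, 40)]

def v(w, h):   # utility of net income under a 30% flat tax, I = 50 (illustrative)
    return math.log(0.7 * w * h + 50.0 + 1.0) - 0.02 * h

def p(w, h):   # relative job frequencies: full-time jobs twice as common
    return 2.0 if h == 40 else 1.0

probs = ruro_probabilities(alts, v, p)
```

Doubling the density of a job type raises its choice probability even when its systematic utility is unchanged, which is exactly the role p(w,h) plays in (7.18).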
Labour Supply Models
So far, in all applications of the RURO model the opportunity density p(w,h) is first factorized as

$$p(w,h) = \begin{cases} p_1\, g_1(h)\, g_2(w) & \text{if } h > 0 \\ 1 - p_1 & \text{if } h = 0 \end{cases} \qquad (7.19)$$

where p_1 denotes the density of alternatives with h > 0, that is, market jobs, and g_1(h) and g_2(w) are the densities of h and w conditional on h > 0. The conditional density of hours is specified as uniform-with-peaks (to be estimated) corresponding to part-time and full-time. The conditional density of wage rates is assumed to be log-normal. Details can be found in the work by Aaberge et al. (1995, 1999, 2013). All the densities p_1, g_1(h) and g_2(w) can depend on household or job characteristics. From expressions (7.13) and (7.18), we can see that the solution of the utility maximization problem is expressed in terms of comparisons of absolute values of utility rather than in terms of marginal variations of utility, and it is not affected by the specification of v(.,.) or f(.,.). One can choose relatively general and complicated specifications for v and/or account for complex tax-transfer rules f without affecting the characterization of behaviour and without significantly affecting the computational burden involved in the estimation or simulation of the model. This holds for both the DC model and the RURO model (whether in the continuous or the discrete version). It is not often realized in the literature that the advantages of the RUM approach are due more to the representation of choice as the maximization of a random utility than to the discreteness of the choice set. Note that expression (7.13) can be seen as a special case of expression (7.18) when the wage rate w is treated as a fixed characteristic of the household (invariant with respect to the alternatives) and p(x,y) = constant for all (x,y). It is also useful to observe that the opportunity density p(x,y) can be specified in such a way that expression (7.18) reduces to a DC model with the dummies refinement.
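The factorization (7.19) translates directly into a sampler for opportunities: a Bernoulli draw for market versus non-market, an hours density with extra mass at part-time and full-time, and a log-normal wage. All parameter values below (p_1 = 0.8, the peak masses, the log-wage moments) are illustrative placeholders, not the estimates reported in the cited studies.

```python
import math
import random

def sample_opportunity(p1=0.8, peaks=None, mu_logw=2.5, sigma_logw=0.4,
                       hour_grid=range(1, 81), rng=None):
    """Draw one opportunity (w, h) from the factorized density (7.19):
    non-market (h = 0) with probability 1 - p1; otherwise hours from a
    uniform-with-peaks density g1 (extra mass at part-time = 20 and
    full-time = 40 here) and a log-normal wage from g2."""
    rng = rng or random
    peaks = peaks or {20: 0.3, 40: 0.4}
    if rng.random() > p1:
        return (0.0, 0)
    u, acc, hours_drawn = rng.random(), 0.0, None
    for h, mass in peaks.items():          # peak mass of g1 first
        acc += mass
        if u < acc:
            hours_drawn = h
            break
    if hours_drawn is None:                # uniform part of g1
        hours_drawn = rng.choice(list(hour_grid))
    wage = math.exp(rng.gauss(mu_logw, sigma_logw))
    return (wage, hours_drawn)

rng = random.Random(0)
draws = [sample_opportunity(rng=rng) for _ in range(5000)]
share_home = sum(1 for w, h in draws if h == 0) / len(draws)
```

With these placeholder values, roughly 20% of sampled opportunities are non-market and the hours distribution shows visible spikes at 20 and 40.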
For example, Colombino (2013) starts by considering a model with fixed individual-specific wage rates:

$$\varphi(h) = \frac{\exp\{v(f(wh,I),h)\}\, p(h)}{\sum_{y\in B} \exp\{v(f(wy,I),y)\}\, p(y)} \qquad (7.20)$$
By specifying the opportunity density p(y) as uniform-with-peaks, we get the following expression:

$$\varphi(h) = \frac{\exp\left\{v(f(wh,I),h) + \gamma_0\,1(h>0) + \sum_{\ell=1}^{L} \gamma_\ell\,1(h \in S_\ell)\right\}}{\sum_{y\in B} \exp\left\{v(f(wy,I),y) + \gamma_0\,1(y>0) + \sum_{\ell=1}^{L} \gamma_\ell\,1(y \in S_\ell)\right\}} \qquad (7.21)$$
with

$$\gamma_0 = \ln J + A_0; \qquad \gamma_\ell = \ln\frac{J_\ell}{J} + A_\ell \qquad (7.22)$$
where J is the number of alternatives with h > 0, J_ℓ is the number of alternatives with h ∈ S_ℓ (e.g. S_ℓ might be the set of hours values classified as 'part-time') and A_0 and A_ℓ are constants. Expression (7.21) is formally equivalent to the DC model with the 'dummies refinement'; however, here the coefficients γ have a specific structural interpretation which, as we will see in the section dedicated to policy simulation, can be used to develop an equilibrium simulation procedure.

7.2.3.3. The representation of the opportunity set
In the continuous version of the RURO model, the opportunity set can in principle contain the whole positive quadrant, that is, all the positive values of (w,h). If instead one adopts a discrete representation of the choice set (as in the DC model or as in the (7.18) version of the RURO model), then one has to decide which alternatives are to be included in the opportunity set (besides the chosen alternative). DC models typically assume the opportunity set is fixed and imputed to every household. For example, one might divide the hours interval (0,T) into equal sub-intervals and pick one value in each sub-interval (e.g. the midpoint, or a randomly chosen point). The wage rate is also fixed and household-specific: therefore, for every value h, the corresponding gross earnings are equal to wh. In RURO models, the opportunity set is unknown, since the opportunity density p(w,h) must be estimated. The opportunity set used in the estimation (and in the simulations) can then be interpreted as a sample drawn from an unknown population. Therefore, the sampling method emerges as a relevant issue. Aaberge et al. (1995, 1999), Aaberge, Colombino, and Strøm (2004) and Aaberge et al. (2013) sample alternative (w,h) values from a pre-estimated density q(w,h) and, following McFadden (1978) and Ben-Akiva and Lerman (1985), use a re-weighted version of expression (7.18):

$$\varphi(w,h) = \frac{\exp\{v(f(wh,I),h)\}\, p(w,h)/q(w,h)}{\sum_{(x,y)\in \hat{B}} \exp\{v(f(xy,I),y)\}\, p(x,y)/q(x,y)} \qquad (7.23)$$
where $\hat{B}$ is the set of sampled alternatives. Expression (7.23) can also be interpreted as a computational approximation to expression (7.17). The same method is explained in detail and applied by Train, McFadden, and Ben-Akiva (1987). Aaberge et al. (2009) discuss and evaluate different methods of representing the opportunity set and find that these methods might have an important impact on the results of policy simulations.
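The re-weighting in (7.23) is ordinary importance sampling: alternatives are drawn from the pre-estimated density q, the chosen alternative is always included, and each logit term is multiplied by p/q. A minimal sketch with hypothetical p, q and v (not the specifications used in the cited studies):

```python
import math
import random

def sampled_ruro_probability(chosen, v, p, q, draw_from_q, n_draws=200,
                             rng=None):
    """Approximate RURO probability of the chosen (w, h) on a sampled
    opportunity set (expression (7.23)): alternatives are drawn from the
    pre-estimated density q and each logit term is re-weighted by p/q."""
    rng = rng or random.Random(0)
    sampled = [chosen] + [draw_from_q(rng) for _ in range(n_draws)]
    terms = [math.exp(v(w, h)) * p(w, h) / q(w, h) for (w, h) in sampled]
    return terms[0] / sum(terms)

# Hypothetical setting: wages uniform on (5, 20), four hours levels
def v(w, h):
    return math.log(0.7 * w * h + 50.0) - 0.02 * h

def q(w, h):                   # sampling density: uniform over wages and hours
    return (1.0 / 15.0) * 0.25

def p(w, h):                   # 'true' density: extra mass on full-time (illustrative)
    return q(w, h) * (2.0 if h == 40 else 1.0)

draw_from_q = lambda rng: (rng.uniform(5.0, 20.0), rng.choice([10, 20, 30, 40]))
prob = sampled_ruro_probability((12.0, 40), v, p, q, draw_from_q)
```

When p = q the weights cancel and the expression collapses to a plain conditional logit on the sampled set.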
7.2.3.4. Unobserved wage rates
As in the 'marginalist' approach, in the RUM approach too the problem of unobserved wage rates for those who are not working can be solved either with a simultaneous procedure or with a two-step procedure. When adopting simultaneous estimation with a DC model, one should also treat the wage rate w as an endogenous outcome and account for the fact that w is not observed for the non-workers in the sample. For that purpose we must specify a probability density function m(w). Starting from expression (7.13), the likelihood of an observation with non-zero hours h and wage rate w would then be

$$P(w,h) = \frac{\exp\{v(f(wh,I),h)\}}{\sum_{k=0}^{T} \exp\{v(f(wk,I),k)\}}\; m(w) \qquad (7.24)$$
The likelihood of an observation with h = 0 and an unobserved wage rate would instead be

$$P(h=0) = \int \frac{\exp\{v(f(0,I),0)\}}{\sum_{k=0}^{T} \exp\{v(f(wk,I),k)\}}\; m(w)\, dw \qquad (7.25)$$
In RURO models, the wage rate is endogenous from the very start. Therefore (in the continuous version), the likelihood of a choice (w,h) is given by (7.18) or (7.23). For example, by inserting (7.19) into (7.18) we get

$$\varphi(w,h) = \begin{cases} \dfrac{\exp\{v(f(wh,I),h)\}\, p_1 g_1(h) g_2(w)}{\exp\{v(f(0,I),0)\}(1-p_1) + \displaystyle\int_{(x,y)\neq 0} \exp\{v(f(xy,I),y)\}\, p_1 g_1(y) g_2(x)\, dx\, dy} & \text{if } h > 0 \\[3ex] \dfrac{\exp\{v(f(0,I),0)\}(1-p_1)}{\exp\{v(f(0,I),0)\}(1-p_1) + \displaystyle\int_{(x,y)\neq 0} \exp\{v(f(xy,I),y)\}\, p_1 g_1(y) g_2(x)\, dx\, dy} & \text{if } h = 0 \end{cases} \qquad (7.26)$$
Alternatively, one could use a two-step procedure for imputing unobserved wages. In the first step, the wage equation is estimated. In the second step, the predicted wage rate replaces the missing values (or, alternatively, both the missing and the observed values). The random term of the wage equation is added to the systematic part and integrated (or 'averaged') out with a simulation procedure (e.g. Van Soest, 1995). Löffler, Peichl, and Siegloch (2013) illustrate that the estimated labour supply elasticities can be very sensitive to the way unobserved wage rates are treated. Both the simultaneous and the two-step procedures illustrated above assume that the random term of the wage equation is uncorrelated with the random term of the utility function. However, one might want to allow for a correlation of the wage rate random component with one or more random parameters of v(f(wh,I),h) due, for example, to a dependence of the wage rate on previous decisions (e.g. Blundell & Shephard, 2012; Breunig, Cobb-Clark, & Gong, 2008; Gong & Van Soest, 2002; Löffler et al., 2013).
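The two-step procedure can be sketched as follows: a log-wage equation fitted on workers (here a one-regressor OLS on years of schooling, purely illustrative), then imputation for non-workers with the wage-equation error averaged out by simulation, in the spirit of the references above. Data and functional form are hypothetical.

```python
import math
import random

def two_step_wages(workers, nonworkers, n_draws=100, rng=None):
    """Two-step wage imputation: (1) fit a log-wage equation on workers
    (one-regressor OLS on schooling, purely illustrative); (2) for
    non-workers, predict the wage and integrate the wage-equation error
    out by simulation (averaging over random draws)."""
    rng = rng or random.Random(1)
    xs = [s for s, _ in workers]
    ys = [math.log(w) for _, w in workers]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    beta = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
           sum((x - xbar) ** 2 for x in xs)
    alpha = ybar - beta * xbar
    resid = [y - alpha - beta * x for x, y in zip(xs, ys)]
    sigma = math.sqrt(sum(e * e for e in resid) / max(n - 2, 1))
    # step 2: average the simulated wage over error draws
    imputed = []
    for s in nonworkers:
        draws = [math.exp(alpha + beta * s + rng.gauss(0.0, sigma))
                 for _ in range(n_draws)]
        imputed.append(sum(draws) / n_draws)
    return imputed

# Hypothetical data: (years of schooling, observed hourly wage) for workers
workers = [(8, 8.0), (10, 10.0), (12, 13.0), (16, 20.0)]
imputed = two_step_wages(workers, nonworkers=[9, 14])
```

Averaging exp(alpha + beta*s + error) over draws, rather than using exp(alpha + beta*s) alone, matters because the log-normal error shifts the mean wage upward; ignoring it is one of the pitfalls discussed by Löffler et al. (2013).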
7.2.3.5. Involuntary unemployment
Apparently, RUM-type models do not leave much space for the possibility of involuntary unemployment, since h = 0 is also an optimal choice (non-participation). If, however, ɛ is interpreted as an optimization error rather than as part of the utility, then some of the individuals with h = 0 might be interpreted as involuntarily unemployed. They might be identified, for example, as those with h = 0 and a systematic utility sufficiently close (in some sense) to the systematic utility of those with h > 0. To the best of our knowledge, this line of research has never been pursued. Instead, some contributions have taken involuntary unemployment into account by complementing the basic DC model with an exogenous latent index equation (Blundell et al., 2000). Euwals and van Soest (1999) have used subjective evaluations together with observed outcomes to model the differences between actual and desired labour supply. In RURO models, ɛ is strictly interpreted as part of the utility function and therefore h = 0 is an optimal choice. However, there is a sense in which these models also account for involuntary unemployment: the opportunity density p(w,h) allows for a different availability of opportunities across households, so it can happen that some households have no (or very few) available opportunities with h > 0.

7.2.3.6. Generalizations and developments
Both the DC and the RURO model can easily be generalized to include several dimensions of choice. Besides simultaneous decisions by the partners in a couple, one might include other decisions such as the labour supply of other members of the household, consumption of goods and services, fertility, choice of child-care mode, sector of employment, other dimensions of labour supply (occupational choice, educational choices, job search activities, etc.) and so on.
For example, Aaberge, Colombino, Strøm, and Wennemo (2007), Aaberge and Colombino (2006, 2013), Dagsvik and Strøm (2006) and Dagsvik, Locatelli, and Strøm (2009) include the choice between private-sector and public-sector employment; Kornstad and Thoresen (2007) model the simultaneous choice of labour supply and child care; Haan and Wrohlich (2011) analyse fertility and employment; Flood, Hansen, and Wahlberg (2004), Hoynes (1996) and Aaberge and Flood (2013) analyse labour supply and welfare participation. A potential limitation of RUM models based on independent and identically distributed extreme value random components ɛ is the Independence-of-Irrelevant-Alternatives assumption, which in turn implies restrictions on the behavioural responses (e.g. Ben-Akiva & Lerman, 1985). Some contributions have opted for alternative distributional assumptions (e.g. Keane & Moffitt, 1998). However, advances in simulation-based methods (Train, 2009) have made it feasible to overcome this limitation by assuming GEV distributions (e.g. Nested Logit models) or random parameters, while preserving the main convenient analytical advantages of the extreme value distributions. By assuming that one or more preference parameters are random, one gets the so-called Mixed Logit model (McFadden & Train, 2000). When it comes to RURO models, expressions (7.17) and (7.18) are also close to a Mixed Logit model, since the wage rate w is random. See also the survey by Keane and Wasi (2013). Due to space limitations, we can only mention two important developments in labour supply analysis which also, recently, tend to adopt the RUM modelling strategy: (i) Stochastic dynamic programming (SDP) models, for example Miller and Sanders (1997), Wolpin (1996), Grogger (2003), Swann (2005), Todd and Wolpin (2006), Keane and Wolpin (2002a, 2002b), Keane (2011), Keane, Todd, and Wolpin (2011). There are various motivations for using SDP models. First, many choices, notably human capital decisions, occupational choices, fertility, etc., have important intertemporal implications: decisions taken today have important effects in the future (e.g. Miller & Sanders, 1997). Second, many policies have an intrinsic intertemporal dimension; for example, there might be time limits, or the amount of services I decide to get today may affect the amount of services I can get tomorrow (Swann, 2005). Third, an important source of uncertainty in current decisions is the expectation of future changes in policies, for example expectations on whether a certain policy is temporary or permanent (Keane & Wolpin, 2002a, 2002b).
(ii) Non-unitary models of household behaviour, where the household is not represented as a fictitious individual but rather as a set of individuals who somehow arrive at a collective decision. A major aim is to develop models that can analyse the intra-household allocation of resources (e.g. between genders) and the effects of policies upon different members of the household. As to the way of modelling the process that leads to the collective decision, there are two main lines of research. (i) The 'sharing rule' approach, for example Chiappori (1988, 1992), Donni (2003, 2007), Vermeulen (2005), Vermeulen et al. (2006) and Bloemen (2010). Here, the intra-household allocation process is given a 'reduced form' representation: this way of proceeding requires minimal a priori assumptions (namely, that the household attains, somehow, a Pareto-efficient allocation), but in principle makes the model inapplicable to ex-ante policy evaluation, unless one is prepared to assume that the 'sharing rule' is policy-invariant. (ii) The explicit structural representation of the intra-household allocation process. For example, McElroy and Horney (1981) have proposed Nash bargaining; other types of solution are of course possible. So far, this second approach has been much less popular than the 'sharing rule' one, although its structural character makes it more promising in view of policy simulation (e.g. Bargain & Moreau, 2013; Del Boca & Flinn, 2012; Hernæs, Jia, & Strøm, 2001).
7.2.4. How reliable are structural models?
Many authors have raised doubts about the reliability of structural models as compared with the (supposed) robustness of evidence produced by (ex-post) experimental or quasi-experimental analysis (e.g. Bargain & Doorley, 2013; Blundell, Duncan, & Meghir, 1998; Brewer et al., 2006). Given that we want ex-ante policy evaluation, the issue is twofold: (i) are there alternatives to structural models? (ii) how do we evaluate structural models and how do they compare with other approaches? When answering question (i), one has to carefully distinguish between the type of data and the type of models (or parameters) to be estimated. There is often a tendency to associate structural models with observational data and ex-post programme evaluation with experimental or quasi-experimental data. Although this is what happens in most cases, in principle nothing prevents the use of experimental or quasi-experimental data for the estimation of structural models. Another possible source of confusion comes from erroneously associating structural modelling with the use of convenient parametric functional forms: although this might be a common practice, most of the research done on non-parametric estimation addressed to policy evaluation is definitely structural (e.g. Blomquist & Newey, 2002; Manski, 2012; Matzkin, 2013; Todd & Wolpin, 2008; Varian, 2012). What counts in view of ex-ante evaluation is that a set of relevant parameters (or primitives) be identified as policy independent (Hurwicz, 1962). Depending on the class of policies we are interested in, different sets or combinations of parameters might be sufficient for the purpose (Marschak, 1953). Of course, the point is that in general experimental or quasi-experimental data, by themselves, are not sufficient to identify policy-invariant parameters. For that purpose they must be analysed through a model, either in explicit form (e.g. Bargain & Doorley, 2013; Card & Hyslop, 2005; Todd & Wolpin, 2006) or in implicit form, as for example with 'statistical extrapolation' (e.g. Chetty, 2009). The availability of experimental or quasi-experimental evidence promises to improve the internal validity (or the identification conditions) of the model, but does not overcome the need for a structural approach. Therefore, the answer to question (i) is negative: ex-ante evaluation
requires a structural model, whether parametric or non-parametric, explicit or implicit, estimated on observational or (quasi-)experimental data, and so on. It is fair to say, however, that more effort would be desirable on developing models or analyses that somehow go beyond the mainstream of a parametric model estimated on observational data. Let us turn to question (ii). The structural econometric community has abandoned the ideal of the correct specification: models are approximations. Ordinary statistical testing is informative on the precision of the parameter estimates of the model, but less so on how useful the estimated model is. This pragmatic approach would seem to entail a shift of focus from the issue of identification to the issues of external validation and out-of-sample prediction performance (Keane, 2010; Wolpin, 2007), although this conclusion is debatable (e.g. Blundell, 2010; Imbens, 2010). The amount of out-of-sample testing so far is limited (e.g. Aaberge & Colombino, 2006, 2013; Aaberge et al., 2009; Aaberge & Flood, 2013; Keane & Moffitt, 1998; Keane & Wolpin, 2002a, 2002b, 2007) but reassuring. Supplementary evidence provided by out-of-sample prediction exercises suggests that flexible atheoretical models, as compared with structural models, tend to perform better in-sample but worse out-of-sample.
7.3. Policy simulation

7.3.1. Producing simulation outcomes
We start by asking: when is information on behavioural responses needed? Non-behavioural simulations may be sufficiently informative provided the policy changes or reforms can be represented as marginal changes in net wages and/or in unearned income. Let u*(w,I) be the indirect utility function, where w is the net wage rate and I is unearned income. Suppose the reform can be represented as a marginal change (dw,dI). Then we have

$$du^* = \frac{\partial u^*}{\partial w}\,dw + \mu\,dI$$

where μ ≡ ∂u*/∂I is the marginal utility of income. By applying Roy's Theorem (∂u*/∂w = μh), we get du*/μ = h dw + dI. The right-hand side is the change in the budget, conditional on the pre-reform labour supply h. The left-hand side is the monetary equivalent of the change in utility. Therefore, the result tells us that the change in the budget (i.e. the basic result produced by a non-behavioural simulation) is a money-metric measure of the change in utility. Similar arguments can be generalized so that a non-behavioural simulation can be complemented by point estimates of elasticities or other local measures of behavioural responses (e.g. Chetty, 2009). When the reforms involve non-marginal changes in the budget constraint, we typically want a prediction of the new choices, in particular of the new value of h or some function of it. Within the 'reduced form' and the 'marginalist' approaches (as defined in Section 7.2) we usually estimate
a labour supply function and (directly or indirectly) a utility function. With non-linear budget constraints and corner solutions (the case commonly faced by analyses adopting the 'marginalist' approach), it is in general possible to identify the distribution of the random component capturing unobserved heterogeneity of preferences and/or the distribution of the measurement/optimization error (whichever is present in the model). Non-convex budget sets in general also require recovering a direct or indirect representation of the utility function in order to be able to simulate the optimal decision. Given the estimates of the (non-random parameters of the) labour supply function and/or utility function, those random components are simulated so that their values are compatible with the observed values of h. Arrufat and Zabalza (1986) provide a clear and exhaustive explanation of this procedure. With DC or RURO models, we can choose between two alternative procedures: (a) compute the expected value of the variable of interest, based upon the estimated choice probabilities (e.g. Colombino et al., 2010; Colombino, 2013); or (b) simulate the value of the systematic utility and of the random component corresponding to each alternative in the opportunity set, identify the alternative with the highest utility and compute the corresponding value of the variable of interest. Typically, the random components are kept fixed across the different policy regimes that one might want to simulate and compare. As to the current policy regime, simulation might be used as well: its results will not be identical to the observations, but reasonably close, at least in large samples. Alternatively, one might adopt the procedure suggested by Creedy and Kalb (2005b), that is, generating a vector of random components that, given the estimated parameters of the utility function, are consistent with the observed choices under the current policy regime.
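The two procedures can be sketched in a hypothetical three-alternative setting: computing expected hours from the estimated choice probabilities, versus picking the hours at the utility-maximizing alternative after adding type I extreme value draws (which would be kept fixed across policy regimes). Averaging the second procedure over many independent draws approaches the first, which illustrates their asymptotic equivalence. The utilities below are arbitrary placeholders.

```python
import math
import random

def expected_hours(utilities, hours):
    """Procedure (a): expected hours under the logit choice probabilities."""
    m = max(utilities)
    expu = [math.exp(u - m) for u in utilities]
    s = sum(expu)
    return sum(e / s * h for e, h in zip(expu, hours))

def simulated_hours(utilities, eps, hours):
    """Procedure (b): add (regime-invariant) extreme value draws to the
    systematic utilities and return hours at the best alternative."""
    j = max(range(len(hours)), key=lambda k: utilities[k] + eps[k])
    return hours[j]

def draw_ev1(n, rng):
    """Type I extreme value (Gumbel) draws by inverse transform."""
    return [-math.log(-math.log(rng.random())) for _ in range(n)]

hours = [0, 20, 40]
utils = [0.0, 0.5, 1.0]              # hypothetical systematic utilities
rng = random.Random(42)
draws = [draw_ev1(len(hours), rng) for _ in range(20000)]
mean_sim = sum(simulated_hours(utils, e, hours) for e in draws) / len(draws)
exp_h = expected_hours(utils, hours)
```

Because the argmax of utility plus independent Gumbel noise reproduces the logit probabilities, the sample mean of procedure (b) converges to the expected value computed by procedure (a) as the number of draws grows.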
When simulating sample aggregates, such as total hours worked or total gross income, the two procedures (a) and (b) should be asymptotically equivalent; however, they might diverge in small samples or subsamples. Overall, as far as labour supply models are concerned, we still lack a rigorous investigation of the statistical properties of the different methods of producing microsimulation outcomes.8

8. The systematic analysis of the statistical properties of alternative methods for producing predictions is more advanced in other areas where RUM models are used, for example Watanatada and Ben-Akiva (1979) and Ben-Akiva and Lerman (1985).

7.3.1.1. Interpretation of the policy simulation results: short run, long run, comparative statics
There appears to be a consensus that the results of non-behavioural policy microsimulation should be interpreted as 'the day after' predictions, that is, predictions for the very short term, when agents and market interactions have not yet had time to adjust to the new policy. As argued above, even in the long run, non-behavioural results might be considered a sufficient statistic provided the reforms can be represented as marginal changes in the budget constraint. The interpretation of behavioural microsimulation results raises more complicated and controversial issues. The typical policy simulation exercise computes the labour supply effects while leaving the wage rates unchanged. Some authors (e.g. Creedy & Duncan, 2005) interpret this scenario as the 'month after' prediction, with households making new choices but the market mechanisms still lagging in the process of adjusting wage rates, labour demand and so on. An alternative interpretation might view the typical simulation exercise as a 'very long-run' prediction, with a perfectly elastic labour demand defined by the current wage rates. In any case, comparative statics is the appropriate perspective for behavioural microsimulation models based on a static representation of agents' choices: we want to compare two different equilibria induced by two different policies. With the notion of equilibrium, we refer in general to a scenario in which the economic agents make optimal choices (i.e. they choose the best alternative among those available in the opportunity set) and their choices are mutually consistent or feasible. The comparative statics perspective is relevant both when the new equilibrium is reached quickly and is perhaps temporary (as might be the case with an intervention explicitly designed to have an immediate effect) and when instead we evaluate reforms of institutions or policies with long-run (and possibly long-standing) effects.
In order to produce simulation results that respect the comparative statics perspective, Creedy and Duncan (2005) and Peichl and Siegloch (2012) have proposed procedures in which DC labour supply models (as defined in Section 7.2) are complemented by a labour demand function and the wage rates are adjusted until an appropriate equilibrium or feasibility criterion is satisfied. These procedures, however, would in general not be consistent with RURO models, which already include a representation of the density of market jobs of different types at the time of observation. In general, the notion of equilibrium will imply some relationship between the opportunity density and the size and composition of labour supply: since a reform will induce a change in labour supply, it follows that in equilibrium the opportunity density will also have to change. This observation carries over to DC models with the dummies refinement, to the extent that the alternative-specific constants also reflect the demand side (e.g. the availability of jobs): a new equilibrium induced by a reform should entail a change in the alternative-specific constants. Colombino (2013) proposes and exemplifies an iterative simulation procedure that exploits the structural interpretation of the alternative-specific constants given in expression (7.22) of Section 7.2.
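A stylised sketch of such an iterative equilibrium procedure, in the spirit of (7.22): participation depends on the constant gamma_0 = ln J + A0, while the number of market jobs J responds to participation through a demand schedule; iterating to a fixed point yields mutually consistent constants and choices. The binary-logit participation rule and the linear demand schedule below are hypothetical simplifications for illustration, not Colombino's (2013) actual procedure.

```python
import math

def equilibrium_constants(v_work, v_home, jobs_of, n_agents=1000,
                          A0=0.0, J0=500.0, tol=1e-6, max_iter=200):
    """Iterate the alternative-specific constant gamma_0 = ln J + A0
    (expression (7.22)) together with a stylised labour demand schedule
    until the number of market jobs J and participation are mutually
    consistent (a fixed point)."""
    J = J0
    for _ in range(max_iter):
        gamma0 = math.log(J) + A0
        # binary-logit participation probability (hypothetical simplification)
        p_work = 1.0 / (1.0 + math.exp(-(v_work + gamma0 - v_home)))
        participants = p_work * n_agents
        J_new = jobs_of(participants)     # demand side: jobs respond to supply
        if abs(J_new - J) < tol:
            J = J_new
            break
        J = J_new
    return gamma0, participants, J

# Hypothetical linear demand schedule: fewer vacancies when many participate
demand = lambda participants: 800.0 - 0.5 * participants
gamma0, participants, J = equilibrium_constants(v_work=0.0, v_home=6.0,
                                                jobs_of=demand)
```

Simulating a reform in this framework would amount to changing the systematic utilities (through the tax-transfer rule) and re-running the iteration, so that both choices and the job-availability constants adjust to the new policy.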
Rolf Aaberge and Ugo Colombino
7.3.2. Examples of simulations addressing specific policies or issues
As explained in Section 7.1, microeconometric models of labour supply can be, and have been, used to evaluate by simulation a very large variety of policies and reforms. Most applications concern tax-benefit and welfare policies. While it would clearly be impossible to present an exhaustive list, Table 7.1 summarizes a selection of notable examples.
Table 7.1. Tax and benefit analyses based on structural microeconometric models

Flat tax
- Australia: Creedy, Kalb, and Kew (2003); Creedy and Kalb (2005a); Scutella (2004)
- Germany: Beninger et al. (2006); Fuest et al. (2008)
- Italy: Aaberge, Colombino, and Strøm (2000); Aaberge, Colombino, Strøm, and Wennemo (2000); Aaberge et al. (2004)
- Norway: Aaberge et al. (1995); Aaberge, Colombino, Strøm, and Wennemo (2000)
- Spain: Labeaga et al. (2008)
- Sweden: Blomquist (1983); Blomquist and Hansson-Brusewitz (1990); Aaberge, Colombino, Strøm, and Wennemo (2000)

Unconditional transfers and Basic Income
- Australia: Scutella (2004)
- Canada: Clavet, Duclos, and Lacroix (2013)
- Germany: Horstschräer, Clauss, and Schnabel (2010)
- Italy: Colombino (2013, 2014); Colombino et al. (2010); Colombino and Narazani (2013, 2014)
- France: Bargain and Doorley (2013); Gurgand and Margolis (2008)

Means-tested transfers, Negative Income Tax and Work Fare
- Canada: Fortin, Truchon, and Beausejour (1993)
- Italy: Aaberge, Colombino, and Strøm (2000); Aaberge et al. (2004); Colombino et al. (2010); Colombino and Narazani (2013, 2014)
- US: Burtless and Hausman (1978)

In-work benefits, tax credits and wage subsidies
- Australia: Creedy (2005)
- Belgium: Decoster and Vanleenhove (2012)
- Canada: Clavet et al. (2013)
- France: Bargain and Orsini (2006)
- Germany: Haan and Myck (2007); Bargain and Orsini (2006)
- Italy: Colombino (2014); Colombino and Narazani (2013, 2014); Colonna and Marcassa (2012); De Luca, Rossetti, and Vuri (2012); Figari (2011); Haan and Steiner (2008); Pacifico (2013)
- Sweden: Flood, Wahlberg, and Pylkkänen (2007); Aaberge and Flood (2013)
- UK: Duncan and Giles (1996); Bingley and Walker (1997); Blundell (2006); Blundell and Hoynes (2004); Blundell et al. (2000); Blundell, Brewer, Haan, and Shephard (2009); Blundell and Shephard (2012); Brewer (2001, 2009); Brewer et al. (2006); Brewer, Francesconi, Gregg, and Grogger (2009)
- US: Aaberge and Flood (2013); Keane (1995); Keane and Moffitt (1998); Meyer and Rosenbaum (2001); Meyer and Holtz-Eakin (2002); Blank (2002); Hotz and Scholz (2003); Fang and Keane (2004); Eissa and Hoynes (2004, 2006, 2011); Grogger (2003); Grogger and Karoly (2005); Moffitt (2006); Eissa, Kleven, and Kreiner (2008)

Welfare participation and labour supply
- Sweden: Flood et al. (2004); Aaberge and Flood (2013)
- US: Moffitt (1983); Fraker and Moffitt (1988); Hoynes (1996); Keane and Moffitt (1998)

Child care and labour supply
- Australia: Kalb (2009)
- Belgium: Van Klaveren and Ghysels (2012)
- Germany: Wrohlich (2008); Haan and Wrohlich (2011)
- Italy: Del Boca (2002); Del Boca and Vuri (2007)
- Norway: Kornstad and Thoresen (2006, 2007)
- Russia: Lokshin (2004)
- Sweden: Gustafsson and Stafford (1992)
- US: Heckman (1974a, 1974b); Rosen (1976); Blau and Robbins (1988); Ribar (1995)

Fertility and labour supply
- Italy: Del Boca (2002)
- Germany: Haan and Wrohlich (2011)
- US: Hotz and Miller (1988)

Optimal taxation
- Australia: Creedy and Hérault (2012)
- Germany: Blundell et al. (2009); Bach, Corneo, and Steiner (2012)
- Italy: Aaberge and Colombino (2012)
- Norway: Aaberge and Colombino (2006, 2013)
- Sweden: Ericson and Flood (2012)
- UK: Blundell et al. (2009); Blundell and Shephard (2012)
7.3.3. Identifying optimal systems

7.3.3.1. Empirical applications of theoretical optimal taxation models
Labour supply is central not only in the design and evaluation of specific tax-transfer reforms, but also in the identification of optimal tax-transfer systems. To see this, it is useful to review briefly the basic framework adopted by theoretical optimal taxation (Mirrlees, 1971). Agents (households) differ by their market productivity, that is, their wage rate w. They solve
194
Rolf Aaberge and Ugo Colombino
\[
\max_{c,h}\; u(c,h) \quad \text{s.t.} \quad c = wh - T(wh) \tag{7.27}
\]
where u(c,h) = u(c) − h is the utility function, assumed for simplicity of illustration to be separable in c and h and to exhibit no income effects; c is net available income; h is hours of work; and T(z) is the tax paid by an agent with earnings z = wh.
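As a concrete illustration, problem (7.27) can be solved numerically on a discrete hours grid. The functional form u(c) = √c, the linear disutility weight phi, the wage and the flat tax T(z) = τz below are illustrative assumptions, not values taken from the chapter:

```python
def optimal_hours(w, tau, phi=0.05, grid=range(0, 4001, 50)):
    """Solve max_h u(c) - phi*h with c = w*h - T(w*h), here T(z) = tau*z (flat tax)."""
    def utility(h):
        c = (1.0 - tau) * w * h          # net income under the flat tax
        return c ** 0.5 - phi * h        # assumed concave u(c) = sqrt(c)
    return max(grid, key=utility)
```

Because the specification has no income effects, a higher marginal tax rate unambiguously reduces the chosen hours (a pure substitution effect).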
The social planner solves

\[
\begin{aligned}
& \max_{T}\; \int_0^\infty W(u_w)\, f(w)\, dw \\
& \text{s.t.} \quad \int_0^\infty T(z_w)\, f(w)\, dw = R, \qquad
u_w = \max_{c,h}\; u(c,h) \ \text{s.t.}\ c = wh - T(wh)
\end{aligned} \tag{7.28}
\]

where W(·) is the social welfare function; R is the total tax revenue to be collected (exogenously given); z_w ≡ w h_w, with h_w = arg max_h u(c,h) s.t. c = wh − T(wh); and f(w) = F′(w) is the probability density function of w. The solution to problem (7.28) can be expressed as follows:
\[
\frac{T_z(z_w)}{1 - T_z(z_w)} = \left(1 + \frac{1}{\xi(w)}\right) \times \frac{1 - F(w)}{w f(w)} \times \frac{\int_w^\infty \left(1 - \Omega_m\right) f(m)\, dm}{1 - F(w)} \tag{7.29}
\]
where T_z(z_w) is the marginal tax rate for a household with productivity w (and therefore earnings z_w); ξ(w) is the labour supply elasticity of a household with productivity w; and Ω_m is the marginal social weight given to the consumption of a household with productivity m. Expressions like (7.29), or more general versions of it, have been used by many authors (e.g. Tuomala, 1990) to perform illustrative simulation exercises in which optimal taxes are computed given imputed or calibrated measures of ξ(w) and F(w). A typical criticism levelled at these exercises is that they do not properly account for the heterogeneity of preferences and productivity across the population (e.g. Tuomala, 2010). Revesz (1989), Diamond (1998) and Saez (2001, 2002), among others, present reformulations of Mirrlees's model that are more directly interpretable in terms of empirically observable variables and make it more convenient to
Labour Supply Models
195
account for agents' heterogeneity. These reformulations have been used in conjunction with microeconometric models. In particular, Saez (2002) develops a discrete model that assigns a crucial role to the relative magnitude of the labour supply elasticities at the extensive and the intensive margin. There are J + 1 types of job, each paying (in increasing order) z_0, z_1, ..., z_J. Job '0' denotes the non-working condition (non-participation or unemployment). Net available income on job j is c_j = z_j − T_j, where T_j is the tax paid at income level z_j. Each agent is characterized by one of the potential incomes z_0, z_1, ..., z_J and, if she decides to work, she is allocated to the corresponding job. The agent of type j decides to work if c_j ≥ c_0. The extensive margin (or participation) elasticity is defined as

\[
\eta_j = \frac{c_j - c_0}{\pi_j}\, \frac{\partial \pi_j}{\partial (c_j - c_0)}
\]

where π_j is the proportion of agents on jobs of type j. Working agents can also move to a different job if income opportunities change, but the movements (for reasons implicit in the assumptions of the model) are limited to adjacent jobs (i.e. from job j to job j − 1 or job j + 1). The intensive margin elasticity is defined as

\[
\xi_j = \frac{c_j - c_{j-1}}{\pi_j}\, \frac{\partial \pi_j}{\partial (c_j - c_{j-1})}
\]

It then turns out that the optimal taxes satisfy

\[
\frac{T_j - T_{j-1}}{c_j - c_{j-1}} = \frac{1}{\xi_j \pi_j} \sum_{k=j}^{J} \pi_k \left[ 1 - \Omega_k - \eta_k\, \frac{T_k - T_0}{c_k - c_0} \right] \tag{7.30}
\]

where Ω_k is the marginal social value of income at job k. The model is attractive in view of empirical applications because it fits well into the DC framework. Recent applications include Blundell et al. (2009) (optimal taxation of single mothers in Germany and the United Kingdom), Haan and Wrohlich (2010) (optimal design of child benefits in Germany) and Immervoll, Kleven, Kreiner, and Saez (2007) (evaluation of income maintenance policies in European countries). The studies coupling theoretical optimal taxation results with microsimulation proceed as follows.
The researcher looks for an analytical solution to the optimal taxation problem, that is, a 'formula' that allows one to compute the optimal taxes or marginal tax rates as a function of exogenous variables and parameters. Next, the numerical simulations consist in evaluating the analytical solution with the exogenous variables and parameters assigned numerical values produced by microeconometric estimates. There are two main problems with this procedure. First, in order to get an analytical solution we must adopt many simplifying and restrictive assumptions. Second, when we 'feed' the formulas with empirical measures, we are very likely to face an inconsistency between the theoretical results and the empirical evidence, since the latter was typically generated under assumptions very different from those that made it possible to obtain the former. For example, Saez (2002) assumes there are no income effects and specifies a very special and limited representation of
choices at the intensive margin. None of these assumptions are shared by the typical microeconometric models used to simulate the elasticities.
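Formula (7.30) can be put to work numerically. Because the taxes T appear on both sides (through the participation terms), a fixed point has to be computed; the sketch below iterates with damping on a small hypothetical calibration (all numbers are illustrative, not estimates from the studies cited):

```python
Z = [0.0, 100.0, 200.0, 300.0]     # gross earnings z_0..z_J (job 0 = not working)
PI = [0.2, 0.3, 0.3, 0.2]          # shares pi_j of agents on each job type
OMEGA = [1.5, 1.2, 0.8, 0.5]       # marginal social values (sum of pi*Omega = 1)
ETA = [0.0, 0.3, 0.2, 0.1]         # participation elasticities eta_j, j >= 1
XI = [0.0, 0.4, 0.4, 0.4]          # intensive-margin elasticities xi_j, j >= 1
T0 = -50.0                         # transfer to non-workers, held fixed in this sketch
J = len(Z) - 1

def rhs(T, j):
    """Right-hand side of (7.30): the implied ratio (T_j - T_{j-1}) / (c_j - c_{j-1})."""
    c = [z - t for z, t in zip(Z, T)]
    s = sum(PI[k] * (1.0 - OMEGA[k] - ETA[k] * (T[k] - T[0]) / (c[k] - c[0]))
            for k in range(j, J + 1))
    return s / (XI[j] * PI[j])

def solve(iters=2000, damping=0.5):
    """Damped fixed-point iteration on (7.30), taking the calibration as given."""
    T = [T0, 0.0, 0.0, 0.0]
    for _ in range(iters):
        new = list(T)
        for j in range(1, J + 1):
            rho = rhs(T, j)
            # T_j - T_{j-1} = rho * (c_j - c_{j-1}) with c_j = z_j - T_j implies:
            new[j] = new[j - 1] + rho * (Z[j] - Z[j - 1]) / (1.0 + rho)
        T = [t + damping * (n - t) for t, n in zip(T, new)]
    return T
```

The sketch deliberately holds the shares, elasticities and social weights fixed across iterations, which is exactly the kind of simplification the text criticizes: a full microeconometric model would re-simulate them at each candidate tax schedule.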
7.3.3.2. Identification of an optimal rule by searching the policy space

A number of studies have used labour supply microsimulation models to explore the policy space defined by certain types of tax-transfer systems. Fortin et al. (1993) calibrate (on the basis of previous estimates or 'reasonable' imputation) a collective 'marginalist' model of household labour supply and run it in order to identify the best income support mechanism within a set including many versions of the Negative Income Tax and of a Workfare system. The contributions mentioned hereafter adopt a RURO approach. Aaberge et al. (2004) evaluate rules such as the Flat Tax, the Negative Income Tax and Workfare as hypothetical reforms in Italy. Colombino et al. (2010), Colombino and Narazani (2013) and Colombino (2013) focus on Basic Income policies. Ericson and Flood (2012) look for welfare-improving changes in the Swedish tax-benefit system.

7.3.3.3. Optimal taxation by simulation

In order to overcome the drawbacks of the simulation exercises coupled with theoretical optimal taxation (see Section 7.3.3.1), recent contributions have proposed a computational approach (Aaberge & Colombino, 2006, 2012, 2013; Blundell & Shephard, 2012). Modern microeconometric models of labour supply are based on very general and flexible assumptions. They can accommodate many realistic features, such as general structures of heterogeneous preferences, simultaneous decisions of household members, non-unitary mechanisms of household decisions, complicated (non-convex, non-continuous, non-differentiable, etc.) constraints and opportunity sets, multidimensional heterogeneity of both households and jobs, quantitative constraints, etc. It is simply not feasible (at least so far) to obtain analytical solutions for the optimal taxation problem in such environments. Yet those features are very relevant, especially in view of evaluating or designing reforms (Colombino, 2009).
An alternative (or maybe complementary) procedure consists of using a microeconometric model to obtain a computational solution of the optimal taxation problem. The microeconometric model, which primarily simulates the agents’ choices by utility maximization, is embedded into a global maximization algorithm that solves the social planner’s problem, that is the maximization of a social welfare function subject to the public budget constraint. The method (as presented in Aaberge & Colombino, 2013) is formulated as follows:
\[
\begin{aligned}
& \max_{\vartheta}\; W\big(U_1(c_1,h_1,j_1),\, U_2(c_2,h_2,j_2),\, \ldots,\, U_N(c_N,h_N,j_N)\big) \\
& \text{s.t.} \quad (c_n, h_n, j_n) = \arg\max_{(w,h,j)\,\in\, B_n} U_n(c,h,j) \ \ \text{s.t.}\ c = f(wh, I_n; \vartheta), \quad \forall n \\
& \phantom{\text{s.t.} \quad} \sum_{n=1}^{N} \big(w_n h_n + I_n - f(w_n h_n, I_n; \vartheta)\big) \ge R
\end{aligned} \tag{7.31}
\]
Agent n can choose a 'job' within an opportunity set B_n. Each job is defined by a wage rate w, hours of work h and other characteristics j (unobserved by the analyst). Given gross earnings wh and gross unearned income I, net available income is determined by a tax-transfer function c = f(wh, I; ϑ), defined up to a vector of parameters ϑ. For any given tax-transfer rule (i.e. any given value of ϑ), the choices of the agents are simulated by a microeconometric model that allows for a very flexible representation of heterogeneous preferences and opportunity sets, covers both singles and couples, accounts for quantity constraints and can treat any tax-transfer rule, however complex. Note that it would be hopeless to look for analytical solutions of an optimal taxation problem in such an environment. The choices made by the N agents result in N positions (c_1,h_1,j_1), (c_2,h_2,j_2), ..., (c_N,h_N,j_N), which are then evaluated by the social planner according to a social welfare function W. The social planner's problem therefore consists of searching for the value of the parameters ϑ that maximizes W subject to the following constraints: (i) the various positions (c_1,h_1,j_1), ..., (c_N,h_N,j_N) result from utility-maximizing choices on the part of the agents (incentive-compatibility constraints); (ii) the total net tax revenue must attain a given amount R (public budget constraint). The optimal taxation problem is solved computationally by iteratively simulating the household choices for different values of ϑ until W is maximized. Any exercise involving a comparison between the utility levels attained by heterogeneous households requires developing comparable measures of utility or individual welfare. If, moreover, we adopt social welfare as the criterion for comparing alternative policies, we must specify a social welfare function. The next section is devoted to these two issues.
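A toy version of problem (7.31) illustrates the computational approach. Here the 'microeconometric model' is a deliberately simple quasi-linear discrete-choice model, ϑ = (τ, b) parameterizes a flat tax with a lump-sum transfer, and the planner's search is a plain grid search; every number below is an illustrative assumption:

```python
import math

AGENTS = [(w, a) for w in (8.0, 12.0, 20.0, 35.0) for a in (0.002, 0.004)]
HOURS = [0, 500, 1000, 1500, 2000]   # discrete choice set B_n (annual hours)
R = 2000.0                           # required net revenue per capita

def chosen_hours(w, a, tau):
    """Agent problem: max_h U = c - a*h^2 with c = (1-tau)*w*h + b.
    With quasi-linear U the hours choice does not depend on the transfer b."""
    return max(HOURS, key=lambda h: (1.0 - tau) * w * h - a * h * h)

def social_welfare(tau):
    """Simulate choices, set b residually to balance the budget, evaluate W."""
    hours = [chosen_hours(w, a, tau) for w, a in AGENTS]
    b = sum(tau * w * h for (w, _), h in zip(AGENTS, hours)) / len(AGENTS) - R
    if b < 0.0:
        return float("-inf"), b      # this tau cannot finance the revenue requirement
    c = [(1.0 - tau) * w * h + b for (w, _), h in zip(AGENTS, hours)]
    return sum(math.sqrt(x) for x in c), b   # concave, inequality-averse W

grid = [i / 100.0 for i in range(0, 91, 5)]
tau_star = max(grid, key=lambda t: social_welfare(t)[0])
```

The structure mirrors the text: the inner function simulates incentive-compatible choices for a given ϑ, the budget constraint is imposed inside the evaluation, and the outer maximization searches the parameter space. A serious application would replace the grid search with a numerical optimizer and the toy model with an estimated one.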
7.4. Social evaluation of policy simulations

7.4.1. Individual welfare functions

As explained in Sections 7.2 and 7.3, empirical microeconomic models of labour supply are helpful tools for simulating the effects on households' labour supply and income from changes in tax and benefit systems or
from changes in the distributions of wage rates and hours of work offered by the demand side of the labour market. However, to complete the economic evaluation of policy reforms, a framework for analysing the outcomes of the simulation exercises is required. It is straightforward to provide a summary of changes in employment rates and in the distributions of hours of work and income. However, a social planner needs information that makes it possible to compare individuals' levels of welfare before and after a policy change, and thus to establish who gains and who loses from the policy change. It is, however, not obvious how one should make a social evaluation of the policy effects when each individual's welfare is considered to be a function of income and leisure. The estimated utility functions (or their systematic parts) might emerge as a useful basis for making social evaluations of welfare. However, since the behaviour of an individual is invariant with respect to monotonic transformations of the utility function, we face two problems. The first concerns the construction of specific cardinal utility functions to represent the consumption/leisure preferences of individuals/households; the second concerns the lack of a convincing justification for comparing arbitrarily chosen individual cardinal utility functions and using them as arguments in a social welfare function (see e.g. the thorough discussion provided by Hammond, 1991). The origin of the problem is, as stated by Hume, that one cannot derive an 'ought' from an 'is'. To circumvent these problems, Deaton and Muellbauer (1980), King (1983) and Hammond (1991) proposed to use a common utility function as a tool for making interpersonal comparisons of welfare, since it by definition contains within it interpersonal comparability of both welfare levels and welfare differences.
The common utility function is supposed to capture the preferences of the social planner, whereas the individual/household-specific utility functions are assumed solely to capture the consumption/leisure preferences of individuals/households. The latter can be used to simulate the behaviour of individuals/households under alternative tax/benefit systems, whereas the former is designed to be used for evaluating the outcomes of simulation exercises. However, even if there were agreement about the requirement of a common utility function, the problem of how to construct it would remain. As argued by Aaberge and Colombino (2013), a plausible approach is to assume that the social planner exploits the information provided by the consumption/leisure choices of the individuals/households (and moreover accounts for the large heterogeneity in the availability of different jobs in the market) by estimating the common utility function. Alternatively, a specific utility function (e.g. the utility function of the poorest, the richest or the median individual) can be used as the common utility function. Examples of the latter approach can be found in King (1983) for housing choices and in Aaberge et al. (2004) for labour supply choices. As opposed to the common utility approach, the practice of basing social evaluations on distributions of individual-specific money-metric
measures of utility differences, like equivalent and compensating variation, disregards the interpersonal comparability problem, which makes it difficult to judge the ethical significance of this approach.9 An alternative and more promising approach, aiming at respecting individual (consumption/leisure) preferences in welfare analyses, has been proposed by Fleurbaey (2003, 2008) and Fleurbaey and Maniquet (2006) and applied by Bargain, Decoster, et al. (2013) and Decoster and Haan (2014) in analyses of labour supply. However, as acknowledged by Decoster and Haan (2014), the choice of a specific preference-respecting welfare metric might have a significant impact on the result of the welfare evaluation; moreover, the result is shown to depend on the degree of emphasis the welfare metric places on willingness-to-work. Thus, depending on the chosen metric, a work-averse or work-loving individual will be more or less favoured, which means that the social planner faces the problem of giving more or less weight to people with preferences that exhibit low or high willingness-to-work. Below we provide an explanation of the specific version of the common utility approach employed by Aaberge and Colombino (2013) for designing optimal taxes based on a microeconomic model of labour supply. Since households differ with regard to size and composition, it is necessary to construct a common utility function that justifies comparisons of welfare across individuals. The common utility function (individual welfare function) V is to be interpreted solely as the input of a social welfare function and thus differs from the role played by the actual utility function U for households. The individual welfare function V is assumed to have a functional form identical to the basic functional form of the systematic part of the positive utility function U, which means that the heterogeneity of the parameters of U has been removed.
Thus, V is defined by

\[
V(y,h) = \gamma_2\, \frac{y^{\gamma_1} - 1}{\gamma_1} + \gamma_4\, \frac{L^{\gamma_3} - 1}{\gamma_3} \tag{7.32}
\]

where L is leisure, defined as L = 1 − (h/8736), and y is the individual's income after tax, defined by

\[
y = \begin{cases}
c = f(wh, I) & \text{for singles} \\[4pt]
\dfrac{c}{\sqrt{2}} = \dfrac{1}{\sqrt{2}}\, f(w_F h_F, w_M h_M, I) & \text{for married/cohabiting individuals}
\end{cases} \tag{7.33}
\]

Thus, couples' incomes are transformed into comparable individual-specific incomes by dividing the couple income by the square root of 2. The next problem is to assess the value of the four parameters of the
9
See, for example, Aaberge et al. (1995, 2000) and Creedy and Hérault (2012).
common utility function for individuals on the basis of the observed leisure and income data, where individual incomes are defined by Eq. (7.33). Since the observed chosen combinations of leisure and income depend on the availability of various job opportunities, we use expression (7.26), with the systematic part of the utility function v replaced by the individual welfare function V defined by Eq. (7.32), as a basis for estimating the parameters of V. Table 7.2 displays the parameter estimates. A different way to circumvent the interpersonal comparability problem consists in avoiding interpersonal comparisons altogether and basing the social evaluation exclusively on intrapersonal comparisons of utility levels, which of course is less informative. A proper application of the ordinal criterion would require defining the optimal tax in a different way, for example as the rule that maximizes the number of winners. However, since the winners might be the individuals with the highest pre-reform welfare levels, the ordinal criterion obviously does not account for distributional effects and may, for that reason, be considered an inappropriate social evaluation approach.

7.4.2. Social welfare functions: the primal and dual approach

The informational structure of the individual welfare functions (defined by the common utility function (7.32) or Fleurbaey's preference-respecting welfare metrics) allows comparison of the welfare levels as well as of the gains and losses of different individuals due to a policy change. Comparisons of distributions of individual welfare, formed for example by alternative hypothetical tax reforms, might be made in terms of dominance criteria of first and second degree. However, since distribution functions normally intersect, even second-degree dominance may not provide an unambiguous ranking of the distributions in question. Dominance criteria of higher degree can, as demonstrated by Aaberge et al.
(2013), provide a complete ranking, but it would in any case be helpful to quantify social welfare.

Table 7.2. Estimates of the parameters of the welfare function for individuals 20 to 62 years old, Norway 1994

Variable                Parameter    Estimate    Std. dev.
Income after tax (y)    γ1            −0.649      0.086
                        γ2             3.026      0.138
Leisure (L)             γ3           −12.262      0.556
                        γ4             0.045      0.011

To this end, let social preferences be represented by the ordering ≽ defined on the family F of distributions of individual welfare. The preference ordering is assumed to be continuous, transitive and complete and to satisfy first-degree stochastic dominance as well as the following independence axiom.

Axiom (Independence). Let F1, F2 and F3 be members of F and let α ∈ [0,1]. Then F1 ≽ F2 implies αF1 + (1 − α)F3 ≽ αF2 + (1 − α)F3.
This axiom focuses attention on the proportion of people F(V) at a given level V of individual welfare and imposes an invariance condition on the proportions F1(V) and F2(V) being compared. Instead, we might focus on the welfare level F^{-1}(t) associated with a given proportion of people t, that is, with the rank in the distribution F, and impose an invariance condition on the individual welfare levels F1^{-1}(t) and F2^{-1}(t) being compared. This corresponds to an alternative version of the independence axiom, which is called the dual independence axiom in the literatures on uncertainty and inequality.

Axiom (Dual Independence). Let F1, F2 and F3 be members of F and let α ∈ [0,1]. Then F1 ≽ F2 implies (αF1^{-1} + (1 − α)F3^{-1})^{-1} ≽ (αF2^{-1} + (1 − α)F3^{-1})^{-1}.

The axioms require that the ordering is invariant with respect to certain changes in the distributions being compared. It is these axioms that give social preferences an empirical content. If F1 is weakly preferred to F2, then the Independence Axiom (similarly to expected utility theory) states that any mixture on F1 is weakly preferred to the corresponding mixture on F2. The intuition is that identical mixing interventions on the distributions do not affect their ranking; the ranking depends solely on how the differences between the mixed distributions are judged. Thus, the axiom requires the ordering relation ≽ to be invariant with respect to aggregation of sub-populations across individual welfare. The Dual Independence Axiom postulates a similar invariance property on the inverse distributions. It says that, if we consider a decomposition by sources of individual welfare, then dominance with regard to one set of sources implies, other things equal, overall dominance.
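The contrast between the two mixing operations can be seen on two small samples (illustrative numbers): the primal mixture blends the CDFs, i.e. pools sub-populations, while the dual mixture blends welfare levels rank by rank. Both preserve the mean, but they generally produce different distributions:

```python
F1 = [10.0, 20.0, 30.0, 40.0]        # sample representing distribution F1
F3 = [12.0, 18.0, 35.0, 55.0]        # sample representing distribution F3
alpha = 0.5

# Dual (quantile) mixture: alpha*F1^{-1} + (1-alpha)*F3^{-1}, rank by rank.
dual_mix = [alpha * a + (1.0 - alpha) * b for a, b in zip(sorted(F1), sorted(F3))]

# Primal (CDF) mixture with alpha = 1/2: draw from F1 or F3 with equal probability,
# which for equal-size samples is just the pooled population.
primal_mix = sorted(F1 + F3)

mean = lambda xs: sum(xs) / len(xs)
```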
The essential difference between the two axioms is that the Independence Axiom deals with the relationship between a given level of individual welfare and weighted averages of corresponding population proportions, while the Dual Independence Axiom deals with the relationship between given population proportions and weighted averages of corresponding levels of individual welfare. The choice between the two independence axioms determines whether the associated family of welfare functions can be considered as a primal or dual family of social welfare functions.10 The ‘primal approach’ is
10
See Aaberge (2001) and Aaberge and Atkinson (2013) for similar discussions of how to summarize the information content of Lorenz curves and headcount curves.
analogous to the inequality framework developed by Atkinson (1970), while the 'dual approach' is analogous to the rank-dependent measurement of social welfare introduced by Weymark (1981) and Yaari (1988). As is well known, the Independence Axiom justifies the following family of social welfare functions,

\[
W(F) = \int_0^\infty u(x)\, dF(x) \tag{7.34}
\]
where F is a distribution, with mean μ, of the individual welfare V, and u is a non-decreasing concave evaluation function of individual welfare levels that reflects the preferences of a social planner who supports the Independence Axiom. As demonstrated by Atkinson (1970), W can be represented by the equally distributed equivalent welfare level defined by

\[
\xi(F) = u^{-1}(W(F)) \tag{7.35}
\]
Thus, ξ(F) is the equally distributed individual welfare level that would yield the same level of social welfare as the actual distribution F. Since ξ(F) ≤ μ, Atkinson (1970) used ξ(F) as a basis for defining the following family of inequality measures,

\[
I(F) = 1 - \frac{\xi(F)}{\mu} \tag{7.36}
\]
The following specific family of social welfare functions and associated inequality measures was introduced by Atkinson (1970),

\[
\xi(F) = \left( \int_0^\infty x^{1-\theta}\, dF(x) \right)^{\frac{1}{1-\theta}} \tag{7.37}
\]
where θ ≥ 0 defines the degree of inequality aversion of the social welfare function. The simplest welfare function is the one that adds up the individual welfare levels, which is obtained by inserting u(x) = x in Eq. (7.34) or θ = 0 in Eq. (7.37). The objection to the linear additive welfare function is that individuals are given equal welfare weights, independently of whether they are poor or rich. Concern for distributive justice requires, however, that poor individuals are assigned larger welfare weights than rich individuals. This is consistent with inserting a strictly concave u-function in Eq. (7.34). A similar structure is captured by the family of rank-dependent welfare functions,11
11
Several other authors have discussed rationales for rank-dependent measures of inequality and social welfare, see for example Sen (1974), Hey and Lambert (1980), Donaldson and Weymark (1980, 1983), Weymark (1981), Ben Porath and Gilboa (1994), Aaberge (2007) and Aaberge et al. (2013).
\[
W(F) = \int_0^1 p(t)\, F^{-1}(t)\, dt \tag{7.38}
\]
where F^{-1} is the left inverse of the cumulative distribution function of the individual welfare levels V with mean μ, and p(t) is a positive concave weight-function defined on the unit interval.12 The social welfare functions (7.38) can be given a normative justification similar to that of the family (7.34). Given suitable continuity and dominance assumptions for the preference ordering ≽ defined on the family of income distributions F, Yaari (1987, 1988) demonstrated that the Dual Independence Axiom characterizes the family of rank-dependent social welfare functions (7.38), where p represents the preferences of the social planner. Aaberge (2007) proposed to use the following specification of p(t),

\[
p_i(t) = \begin{cases}
-\log t, & i = 1 \\[4pt]
\dfrac{i}{i-1}\left(1 - t^{\,i-1}\right), & i = 2, 3, \ldots
\end{cases} \tag{7.39}
\]

Note that the inequality aversion exhibited by the social welfare function Wi (associated with pi(t)) decreases with increasing i. As i → ∞, Wi approaches inequality neutrality and coincides with the linear additive welfare function defined by

\[
W_\infty = \int_0^1 F^{-1}(t)\, dt = \mu \tag{7.40}
\]
It follows by straightforward calculations that Wi ≤ μ for all i, and that Wi is equal to the mean μ for finite i if and only if F is the egalitarian distribution. Thus, Wi can be interpreted as the equally distributed individual welfare level. As recognized by Yaari (1988), this property suggests that Ci, defined by

\[
C_i = 1 - \frac{W_i}{\mu}, \qquad i = 1, 2, \ldots \tag{7.41}
\]

can be used as a summary measure of inequality and, moreover, can be proved to be a member of the 'illfare-ranked single-series Ginis' class introduced by Donaldson and Weymark (1980).13 Thus, as was recognized
12

Note that Eqs. (7.32)–(7.34), and Eqs. (7.32), (7.33) and (7.40), can be considered as two-stage approaches for measuring social welfare, where the first stage consists of using the common utility function to aggregate the two goods (consumption and leisure) for each individual into a measure of well-being, and the second stage aggregates well-being across individuals into a measure of social welfare. As demonstrated by Bosmans, Decancq, and Ooghe (2013), the two-stage approach can be given an axiomatic normative justification.

13

Aaberge (2007) provides an axiomatic justification for using the Ck measures as criteria for ranking Lorenz curves.
by Ebert (1987), the justification of the social welfare function Wi = μ(1 − Ci) can also be made in terms of a value judgement about the trade-off between the mean and (in)equality in the distribution of welfare. To ease the interpretation of the inequality aversion profiles exhibited by W1, W2, W3 and W∞, Table 7.3 provides ratios of the corresponding weights, as defined by (7.39), of the 1 per cent poorest, the 5 per cent poorest, the 30 per cent poorest and the 5 per cent richest individual relative to the median individual, for different social welfare criteria. As can be observed from the weight profiles provided in Table 7.3, W1 is particularly sensitive to changes in policies that affect the welfare of the poor, whereas the inequality aversion profile of W3 is rather moderate and W∞ exhibits neutrality with respect to inequality.
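The two-stage logic described in this section (individual welfare via (7.32), then social aggregation via (7.37) or (7.38)–(7.41)) can be sketched end to end. The γ parameters below are the Table 7.2 estimates; the incomes (assumed to be annual amounts after tax) and hours fed into the first stage are made-up illustrative data:

```python
import math

G1, G2, G3, G4 = -0.649, 3.026, -12.262, 0.045   # Table 7.2 estimates

def V(y, h, couple=False):
    """Individual welfare, Eq. (7.32); couples' income scaled per Eq. (7.33)."""
    if couple:
        y /= math.sqrt(2.0)
    L = 1.0 - h / 8736.0                          # leisure share of annual hours
    return G2 * (y ** G1 - 1.0) / G1 + G4 * (L ** G3 - 1.0) / G3

def atkinson_ede(sample, theta):
    """Equally distributed equivalent level, Eq. (7.37), for theta >= 0, theta != 1."""
    m = sum(x ** (1.0 - theta) for x in sample) / len(sample)
    return m ** (1.0 / (1.0 - theta))

def P(i, t):
    """Integral of the weight function p_i of Eq. (7.39) on [0, t]."""
    if i == 1:
        return t - t * math.log(t) if t > 0.0 else 0.0
    return (i / (i - 1.0)) * (t - t ** i / i)

def W_rank(sample, i):
    """Rank-dependent welfare W_i, Eq. (7.38), with exact per-rank weights."""
    xs, n = sorted(sample), len(sample)
    return sum(x * (P(i, (k + 1) / n) - P(i, k / n)) for k, x in enumerate(xs))

# Stage 1: individual welfare from hypothetical (income after tax, hours) pairs.
data = [(150_000, 1800), (250_000, 2000), (400_000, 1700), (600_000, 2300)]
v = [V(y, h) for y, h in data]
mu = sum(v) / len(v)

# Stage 2: social aggregation; C_i = 1 - W_i/mu is the inequality measure (7.41).
W1, W2 = W_rank(v, 1), W_rank(v, 2)
```

Since the weight functions integrate to one, an egalitarian welfare distribution yields Wi = μ exactly, while the more inequality-averse W1 (Bonferroni) lies below the Gini-type W2 for any unequal distribution.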
7.5. Socially optimal income taxes

A number of recent contributions identify optimal tax-benefit rules by employing a microeconometric labour supply model together with microsimulation and (some version of) the social evaluation framework presented above. Aaberge and Colombino (2006, 2013) identify the optimal income tax in Norway within the class of piecewise linear systems. Aaberge and Colombino (2012) perform a similar exercise for Italy, where, however, the social welfare criterion adopted is based on a version of Roemer's (1998) equality-of-opportunity criterion. Blundell and Shephard (2012) look for an optimal tax-benefit rule for low-income families with children in the United Kingdom. Bach et al. (2012) consider optimal taxation with household income splitting. Creedy and Hérault (2012) explore welfare-improving directions for tax-benefit reforms in Australia. Instead of relying on a priori theoretical results as in previous empirical applications of optimal taxation theory, the microeconometric-simulation approach allows for a much more flexible representation of households' heterogeneous characteristics and behaviour and permits the analysis of more complicated tax-benefit rules. This has significant implications for the results. For example, Aaberge and Colombino (2013), for each of the social welfare functions referred to in Table 7.3, identify the tax system that maximizes social welfare within a class of 10-parameter tax rules.

Table 7.3. Distributional weight profiles of four different social welfare functions

                   W1 (Bonferroni)   W2 (Gini)    W3     W∞ (Utilitarian)
p(0.01)/p(0.5)          6.64            1.98      1.33          1
p(0.05)/p(0.5)          4.32            1.90      1.33          1
p(0.30)/p(0.5)          1.74            1.40      1.21          1
p(0.95)/p(0.5)          0.07            0.10      0.13          1

The results show that the marginal tax rates of each of the optimal tax systems turn out to be monotonically increasing with income, and that more egalitarian social welfare functions tend to imply more progressive tax rules. Moreover, the optimal bottom marginal tax rate is negative, suggesting a mechanism close to policies like the Working Families Tax Credit in the United Kingdom, the Earned Income Tax Credit in the United States and the In-Work Tax Credit in Sweden. The overall picture emerging is in sharp contrast with most of the results obtained by the numerical exercises based on Mirrlees-type optimal tax formulas. The typical outcome of those exercises envisages a positive lump-sum transfer which is progressively taxed away by very high marginal tax rates on lower incomes (i.e. a negative income tax mechanism), in combination with (close to) flat (or even decreasing) marginal tax rates for higher incomes. The results obtained with the microsimulation approach seem to support what is suggested by Tuomala (2010): the theory-based results might be driven by the restrictive assumptions made on the preferences, the elasticities and the distribution of productivities (or wage rates), which in turn might be in conflict with the empirical evidence provided by microeconomic labour supply studies.
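The weight ratios reported in Table 7.3 follow directly from the specification (7.39) and can be reproduced exactly:

```python
import math

def p(i, t):
    """Weight function p_i(t) of Eq. (7.39); i=None denotes the utilitarian case."""
    if i is None:
        return 1.0                               # W_inf: equal weights
    if i == 1:
        return -math.log(t)                      # W1: Bonferroni weights
    return (i / (i - 1.0)) * (1.0 - t ** (i - 1))

# Ratios relative to the median (t = 0.5), matching Table 7.3 row by row.
for i, name in [(1, "W1"), (2, "W2"), (3, "W3"), (None, "Winf")]:
    row = [round(p(i, t) / p(i, 0.5), 2) for t in (0.01, 0.05, 0.30, 0.95)]
    print(name, row)
```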
7.6. Conclusions and future perspectives

The original concept of microsimulation envisaged large models of the entire economic (or even socio-economic) system, including behavioural responses, as an alternative to the then-dominant large macroeconometric models. Events took a different route. On the one hand, the first successful implementations of microsimulation models at the policy level were non-behavioural. On the other hand, the researchers working on microeconometric models of labour supply started using microsimulation tools for policy design and evaluation. In this chapter, we have documented the evolution of different labour supply modelling strategies, together with their notable policy applications that use microsimulation methods. Further developments, both on the microsimulation algorithms side and on the microeconometric side, might or might not favour a re-encounter between large microsimulation algorithms and behavioural labour supply analysis. While further developments on the side of microsimulation technology are documented in other chapters, on the side of microeconometric labour supply models four research directions are likely to attract more and more attention: (i) intertemporal decisions and decisions under uncertainty; (ii) other dimensions of labour supply (educational and occupational choices, effort, etc.); (iii) modelling intra-household allocation, in particular the structural approach; (iv) development of standardized procedures for improving external (i.e. out-of-sample) validation and internal validation (e.g. non-parametric identification and estimation with
experimental or quasi-experimental data) of structural models. The general problem is that there is a trade-off between the increasing theoretical sophistication of labour supply models (e.g. stochastic dynamic programming models, the intra-household allocation or collective model, etc.) and their flexibility in interacting with other models representing different segments of the economic system. There seem to be three main, not mutually exclusive, directions, in varying degrees dependent on the quality of the available data and on how sophisticated and flexible both the microeconometric methodology and the microsimulation algorithms will become. First, very specific (both methodologically and policy-wise) labour supply 'modules' can be more or less 'mechanically' linked to system-wide models, the latter being in turn micro-analytic, macro-analytic or a combination of the two. This is close to the currently most common practice with micro-macro models. Second, it might be the case that empirical research on labour supply, whether based on observational, experimental or quasi-experimental data, at a certain point reaches a degree of robustness and generality comparable to that of an accounting relationship, and can therefore be permanently incorporated into a system-wide microsimulation model. Third, it might be that microeconometric results on labour supply attain a level that allows both specificity and flexibility and permits a structural (micro-founded rather than mechanical) linkage with other micro-analytic behavioural modules and system-wide algorithms.
References

Aaberge, R. (2007). Gini's nuclear family. Journal of Economic Inequality, 5(3), 305–322.
Aaberge, R., & Atkinson, A. B. (2013). The median as watershed. Discussion Paper No. 749. Research Department, Statistics Norway.
Aaberge, R., & Colombino, U. (2006). Designing optimal taxes with a microeconometric model of labour supply. IZA Discussion Paper No. 2468.
Aaberge, R., & Colombino, U. (2012). Accounting for family background when designing optimal income taxes: A microeconometric simulation analysis. Journal of Population Economics, 25(2), 741–761.
Aaberge, R., & Colombino, U. (2013). Using a microeconometric model of household labour supply to design optimal income taxes. Scandinavian Journal of Economics, 115(2), 449–475.
Aaberge, R., Colombino, U., Holmøy, E., Strøm, B., & Wennemo, T. (2007). Population ageing and fiscal sustainability: An integrated micro-macro analysis of required tax changes. In A. Harding & A. Gupta (Eds.), Population ageing, social security and taxation: Modelling our future. Bingley, UK: Emerald Group Publishing Limited.
Aaberge, R., Colombino, U., & Strøm, S. (1999). Labour supply in Italy: An empirical analysis of joint household decisions, with taxes and quantity constraints. Journal of Applied Econometrics, 14(4), 403–422.
Aaberge, R., Colombino, U., & Strøm, S. (2000). Labor supply responses and welfare effects from replacing current tax rules by a flat tax: Empirical evidence from Italy, Norway and Sweden. Journal of Population Economics, 13(4), 595–621.
Aaberge, R., Colombino, U., & Strøm, S. (2004). Do more equal slices shrink the cake? An empirical investigation of tax-transfer reform proposals in Italy. Journal of Population Economics, 17(4), 767–785.
Aaberge, R., Colombino, U., Strøm, S., & Wennemo, T. (2000). Joint labour supply of married couples: Efficiency and distributional effects of tax reforms. In H. Sutherland, L. Mitton, & M. Weeks (Eds.), Microsimulation in the new millennium: Challenges and innovations. Cambridge: Department of Applied Economics, University of Cambridge.
Aaberge, R., Colombino, U., & Wennemo, T. (2009). Evaluating alternative representations of the choice set in models of labor supply. Journal of Economic Surveys, 23(3), 586–612.
Aaberge, R., Dagsvik, J. K., & Strøm, S. (1995). Labor supply responses and welfare effects of tax reforms. Scandinavian Journal of Economics, 97(4), 635–659.
Aaberge, R., & Flood, L. (2013). U.S. versus Sweden: The effect of alternative in-work tax credit policies on labour supply of single mothers. IZA Discussion Paper No. 7706.
Aaberge, R., Havnes, T., & Mogstad, M. (2013). A theory for ranking distribution functions. Discussion Paper No. 763. Research Department, Statistics Norway.
Altonji, J. G., & Paxson, C. H. (1988). Labor supply preferences, hours constraints, and hours-wage trade-offs. Journal of Labor Economics, 6(2), 254–276.
Arrufat, J. L., & Zabalza, A. (1986). Female labor supply with taxation, random preferences, and optimization errors. Econometrica, 54(1), 47–63.
Atkinson, A. B. (1970). On the measurement of inequality. Journal of Economic Theory, 2(3), 244–263.
Bach, S., Corneo, G., & Steiner, V. (2012). Optimal top marginal tax rates under income splitting for couples. European Economic Review, 56(6), 1055–1069.
Bargain, O., Decoster, A., Dolls, M., Neumann, D., Peichl, A., & Siegloch, S. (2013). Welfare, labor supply and heterogeneous preferences: Evidence for Europe and the US. Social Choice and Welfare, 41(4), 789–817.
Bargain, O., & Doorley, K. (2013). Putting structure on the RD design: Social transfers and youth inactivity in France. IZA Discussion Paper No. 7508.
Bargain, O., & Orsini, K. (2006). In-work policies in Europe: Killing two birds with one stone? Labour Economics, 13(6), 667–697.
Bargain, O., Orsini, K., & Peichl, A. (2014). Comparing labor supply elasticities in Europe and the US: New results. Journal of Human Resources, 49, 723–738.
Bargain, O., & Peichl, A. (2013). Steady-state labor supply elasticities: A survey. ZEW Discussion Paper No. 13-084.
Ben-Akiva, M., & Lerman, S. (1985). Discrete choice analysis: Theory and application to travel demand. Cambridge, MA: MIT Press.
Ben-Akiva, M., & Watanatada, T. (1981). Application of a continuous spatial choice logit model. In C. F. Manski & D. McFadden (Eds.), Structural analysis of discrete data with econometric applications. Cambridge, MA: MIT Press.
Ben Porath, E., & Gilboa, I. (1994). Linear measures, the Gini index, and the income-equality trade-off. Journal of Economic Theory, 64, 443–467.
Beninger, D., Bargain, O., Beblo, M., Blundell, R., Carrasco, R., Chiuri, M.-C., … Vermeulen, F. (2006). Evaluating the move to a linear tax system in Germany and other European countries. Review of Economics of the Household, 4(2), 159–180.
Bingley, P., & Walker, I. (1997). The labour supply, unemployment and participation of lone mothers in in-work transfer programmes. The Economic Journal, 107(444), 1375–1390.
Blank, R. M. (2002). Evaluating welfare reform in the United States. Journal of Economic Literature, American Economic Association, 40(4), 1105–1166.
Blau, D. M., & Robbins, P. K. (1988). Child care costs and family labor supply. Review of Economics and Statistics, 70, 374–381.
Bloemen, H. (2010). An empirical model of collective household labour supply with non-participation. Economic Journal, 120(543), 183–214.
Bloemen, H. G., & Kapteyn, A. (2008). The estimation of utility-consistent labor supply models by means of simulated scores. Journal of Applied Econometrics, 23(4), 395–422.
Blomquist, N. S. (1983). The effect of income taxation on the labor supply of married men in Sweden. Journal of Public Economics, 22(2), 169–197.
Blomquist, N. S., & Hansson-Brusewitz, U. (1990). The effect of taxes on male and female labor supply in Sweden. Journal of Human Resources, 25(3), 317–357.
Blomquist, S., & Newey, W. (2002). Nonparametric estimation with nonlinear budget sets. Econometrica, 70(6), 2455–2480.
Blundell, R. (2006). Earned income tax credit policies: Impact and optimality: The Adam Smith lecture, 2005. Labour Economics, 13(4), 423–443.
Blundell, R. (2010). Comments on: Michael P. Keane 'Structural vs. atheoretic approaches to econometrics'. Journal of Econometrics, 156(1), 25–26.
Blundell, R. (2012). Tax policy reform: The role of empirical evidence. Journal of the European Economic Association, 10(1), 43–77.
Blundell, R., & Hoynes, H. W. (2004). Has 'in-work' benefit reform helped the labor market? (pp. 411–460). National Bureau of Economic Research, Inc. Retrieved from http://ideas.repec.org/h/nbr/nberch/6753.html
Blundell, R., Brewer, M., Haan, P., & Shephard, A. (2009). Optimal income taxation of lone mothers: An empirical comparison of the UK and Germany. Economic Journal, 119(535), 101–121.
Blundell, R., Duncan, A., McCrae, J., & Meghir, C. (2000). The labour market impact of the working families' tax credit. Fiscal Studies, 21(1), 75–103.
Blundell, R., Duncan, A., & Meghir, C. (1998). Estimating labor supply responses using tax reforms. Econometrica, 66(4), 827–861.
Blundell, R., Ham, J., & Meghir, C. (1987). Unemployment and female labour supply. The Economic Journal, 97, 44–64.
Blundell, R., & MaCurdy, T. (1999). Labor supply: A review of alternative approaches. In O. Ashenfelter & D. Card (Eds.), Handbook of labor economics (1st ed., Vol. 3, pp. 1559–1695). Amsterdam: Elsevier.
Blundell, R., MaCurdy, T., & Meghir, C. (2007). Labor supply models: Unobserved heterogeneity, nonparticipation and dynamics. In J. Heckman & E. Leamer (Eds.), Handbook of econometrics (1st ed., Vol. 6). Amsterdam: Elsevier.
Blundell, R., & Shephard, A. (2012). Employment, hours of work and the optimal taxation of low-income families. Review of Economic Studies, 79(2), 481–510.
Bosmans, K., Decancq, K., & Ooghe, E. (2013). What do normative indices of multidimensional inequality really measure? Université catholique de Louvain, Center for Operations Research and Econometrics (CORE). Retrieved from http://ideas.repec.org/p/cor/louvco/2013035.html
Bourguignon, F., & Magnac, T. (1990). Labor supply and taxation in France. The Journal of Human Resources, 25(3), 358–389.
Bourguignon, F., & Spadaro, A. (2006). Microsimulation as a tool for evaluating redistribution policies. Journal of Economic Inequality, 4(1), 77–106.
Bowen, W. G., & Finegan, T. A. (1969). The economics of labor force participation. Princeton, NJ: Princeton University Press.
Breunig, R., Cobb-Clark, D. A., & Gong, X. (2008). Improving the modelling of couples' labour supply. The Economic Record, 84(267), 466–485.
Brewer, M. (2001). Comparing in-work benefits and the reward to work for families with children in the US and the UK. Fiscal Studies, 22(1), 41–77.
Brewer, M. (2009). How do income-support systems in the UK affect labour force participation? IFAU Institute for Evaluation of Labour Market and Education Policy. Working Paper No. 2009027. Retrieved from http://ideas.repec.org/p/hhs/ifauwp/2009_027.html
Brewer, M., Duncan, A., Shephard, A., & Suarez, M. J. (2006). Did working families' tax credit work? The impact of in-work support on labour supply in Great Britain. Labour Economics, 13(6), 699–720.
Brewer, M., Francesconi, M., Gregg, P., & Grogger, J. (2009). Feature: In-work benefit reform in a cross-national perspective: Introduction. Economic Journal, 119(535), F1–F14.
Burtless, G., & Hausman, J. A. (1978). The effect of taxation on labor supply: Evaluating the Gary negative income tax experiment. Journal of Political Economy, 86(6), 1103–1130.
Callan, T., & van Soest, A. H. O. (1996). Family labour supply and taxes in Ireland. Tilburg University, Center for Economic Research. Retrieved from http://ideas.repec.org/p/dgr/kubcen/199426.html
Card, D., & Hyslop, D. (2005). Estimating the effects of a time-limited earnings subsidy for welfare-leavers. Econometrica, 73(6), 1723–1770.
Chetty, R. (2009). Sufficient statistics for welfare analysis: A bridge between structural and reduced-form methods. Annual Review of Economics, Annual Reviews, 1(1), 451–488.
Chiappori, P.-A. (1988). Rational household labor supply. Econometrica, 56(1), 63–90. doi:10.2307/1911842
Chiappori, P.-A. (1992). Collective labor supply and welfare. Journal of Political Economy, 100(3), 437–467.
Clavet, N.-J., Duclos, J.-Y., & Lacroix, G. (2013). Fighting poverty: Assessing the effect of guaranteed minimum income proposals in Quebec. Canadian Public Policy, 39(4), 491–516.
Colombino, U. (1985). A model of married women's labour supply with systematic and random disequilibrium components. Ricerche Economiche, 39(2), 165–179.
Colombino, U. (2009). Optimal income taxation: Recent empirical applications. Rivista Italiana degli Economisti, 14(1), 47–70.
Colombino, U. (2013). A new equilibrium simulation procedure with discrete choice models. International Journal of Microsimulation, 6(3), 25–49.
Colombino, U. (2014). Five crossroads on the way to basic income: An Italian tour. IZA Discussion Paper No. 8087.
Colombino, U., & Del Boca, D. (1990). The effect of taxes on labor supply in Italy. Journal of Human Resources, 25(3), 390–414.
Colombino, U., Locatelli, M., Narazani, E., & O'Donoghue, C. (2010). Alternative basic income mechanisms: An evaluation exercise with a microeconometric model. Basic Income Studies, 5(1), 1–31.
Colombino, U., & Narazani, E. (2013). Designing a universal income support mechanism for Italy: An exploratory tour. Basic Income Studies, 8(1), 1–17.
Colombino, U., & Narazani, E. (2014). Closing the gender gap: Gender based taxation, wage subsidies or basic income? WP 12-2014. Torino: Dipartimento di Economia e Statistica.
Colombino, U., & Zabalza, A. (1982). Labour supply and quantity constraints. Results on female participation and hours in Italy. Discussion Paper No. 125. Centre for Labour Economics, London School of Economics.
Colonna, F., & Marcassa, S. (2012). Taxation and labor force participation: The case of Italy. CEPREMAP Working Papers (Docweb) 1203.
Creedy, J. (2005). An in-work payment with an hours threshold: Labour supply and social welfare. The Economic Record, 81(255), 367–377.
Creedy, J., & Duncan, A. (2002). Behavioural microsimulation with labour supply responses. Journal of Economic Surveys, 16(1), 1–39.
Creedy, J., & Duncan, A. (2005). Aggregating labour supply and feedback effects in microsimulation. Australian Journal of Labour Economics, 8(3), 277–290.
Creedy, J., & Hérault, N. (2012). Welfare-improving income tax reforms: A microsimulation analysis. Oxford Economic Papers, 64(1), 128–150.
Creedy, J., & Kalb, G. (2005a). Behavioural microsimulation modelling for tax policy analysis in Australia: Experience and prospects. Australian Journal of Labour Economics (AJLE), 8(1), 73–110.
Creedy, J., & Kalb, G. (2005b). Discrete hours labour supply modelling: Specification, estimation and simulation. Journal of Economic Surveys, 19(5), 697–734.
Creedy, J., Kalb, G., & Kew, H. (2003). Flattening the effective marginal tax rate structure in Australia: Policy simulations using the Melbourne institute tax and transfer simulator. Australian Economic Review, 36(2), 156–172.
Creedy, J., Kalb, G., & Scutella, R. (2006). Income distribution in discrete hours behavioural microsimulation models: An illustration. Journal of Economic Inequality, 4(1), 57–76.
Dagsvik, J. K. (1994). Discrete and continuous choice, max-stable processes, and independence from irrelevant attributes. Econometrica, 62(5), 1179–1205.
Dagsvik, J. K. (2000). Aggregation in matching markets. International Economic Review, 41(1), 27–57.
Dagsvik, J. K., Jia, Z., Kornstad, T., & Thoresen, T. O. (2014). Theoretical and practical arguments for modeling labor supply as a choice among latent jobs. Journal of Economic Surveys, 28(1), 134–151.
Dagsvik, J. K., Locatelli, M., & Strøm, S. (2009). Tax reform, sector-specific labor supply and welfare effects. Scandinavian Journal of Economics, 111(2), 299–321.
Dagsvik, J. K., & Strøm, S. (2006). Sectoral labour supply, choice restrictions and functional form. Journal of Applied Econometrics, 21(6), 803–826.
De Luca, G., Rossetti, C., & Vuri, D. (2012). In-work benefits for married couples: An ex-ante evaluation of EITC and WTC policies in Italy. IZA Discussion Paper No. 6739.
Deaton, A., & Muellbauer, J. (1980). Economics and consumer behavior. Cambridge: Cambridge University Press.
Decoster, A., & Haan, P. (2014). Empirical welfare analysis with preference heterogeneity. International Tax and Public Finance. doi:10.1007/s10797-014-9304-5
Decoster, A., & Vanleenhove, P. (2012). In-work tax credits in Belgium: An analysis of the jobkorting using a discrete labour supply model. Brussels Economic Review, 55(2), 121–150.
Del Boca, D. (2002). The effect of child care and part time opportunities on participation and fertility decisions in Italy. Journal of Population Economics, 15(3), 549–573.
Del Boca, D., & Flinn, C. (2012). Endogenous household interaction. Journal of Econometrics, 166(1), 49–65.
Del Boca, D., & Vuri, D. (2007). The mismatch between employment and child care in Italy: The impact of rationing. Journal of Population Economics, 20(4), 805–832.
Diamond, P. (1998). Optimal income taxation: An example with a u-shaped pattern of optimal marginal tax rates. American Economic Review, 88, 83–95.
Dickens, W. T., & Lundberg, S. J. (1993). Hours restrictions and labor supply. International Economic Review, 34(1), 169–192.
Domencich, T. A., & McFadden, D. (1975). Urban travel demand: A behavioral analysis. Amsterdam: North-Holland.
Donaldson, D., & Weymark, J. A. (1980). A single-parameter generalization of the Gini indices of inequality. Journal of Economic Theory, 22(1), 67–86.
Donaldson, D., & Weymark, J. A. (1983). Ethically flexible Gini indices for income distributions in the continuum. Journal of Economic Theory, 29(2), 353–358.
Donni, O. (2003). Collective household labor supply: Nonparticipation and income taxation. Journal of Public Economics, 87(5–6), 1179–1198.
Donni, O. (2007). Collective female labour supply: Theory and application. Economic Journal, 117(516), 94–119.
Duncan, A., & Giles, C. (1996). Labour supply incentives and recent family credit reforms. Economic Journal, 106(434), 142–155.
Duncan, A., & Harris, M. N. (2002). Simulating the behavioural effects of welfare reforms among sole parents in Australia. The Economic Record, 78(242), 264–276.
Duncan, A., & Stark, G. (2000). A recursive algorithm to generate piecewise linear budget constraints. London, UK: Institute for Fiscal Studies. Retrieved from http://ideas.repec.org/p/ifs/ifsewp/00-11.html
Ebert, U. (1987). Size and distribution of incomes as determinants of social welfare. Journal of Economic Theory, 41(1), 23–33.
Eissa, N., & Hoynes, H. (2011). Redistribution and tax expenditures: The earned income tax credit. National Tax Journal, 64(2), 689–729.
Eissa, N., & Hoynes, H. W. (2004). Taxes and the labor market participation of married couples: The earned income tax credit. Journal of Public Economics, 88(9–10), 1931–1958.
Eissa, N., & Hoynes, H. W. (2006). Behavioral responses to taxes: Lessons from the EITC and labor supply. Tax Policy and the Economy, 20, 73–110.
Eissa, N., Kleven, H. J., & Kreiner, C. T. (2008). Evaluation of four tax reforms in the United States: Labor supply and welfare effects for single mothers. Journal of Public Economics, 92(3–4), 795–816.
Ericson, P., & Flood, L. (2012). A microsimulation approach to an optimal Swedish income tax. International Journal of Microsimulation, 2(5), 2–21.
Euwals, R., & van Soest, A. (1999). Desired and actual labour supply of unmarried men and women in the Netherlands. Labour Economics, 6(1), 95–118.
Fang, H., & Keane, M. P. (2004). Assessing the impact of welfare reform on single mothers. Brookings Papers on Economic Activity, 2004(1), 1–95.
Figari, F. (2011). From housewives to independent earners: Can the tax system help Italian women to work? ISER Working Paper Series 2011-15. Institute for Social and Economic Research.
Fleurbaey, M. (2003). Social welfare, priority to the worst-off and the dimensions of individual well-being. IDEP Working Papers 0312. Institut d'économie publique (IDEP), Marseille, France.
Fleurbaey, M. (2008). Fairness, responsibility and welfare. Oxford: Oxford University Press.
Fleurbaey, M., & Maniquet, F. (2006). Fair income tax. Review of Economic Studies, 73(1), 55–84.
Flood, L., Hansen, J., & Wahlberg, R. (2004). Household labor supply and welfare participation in Sweden. Journal of Human Resources, 39(4), 1008–1032.
Flood, L., & MaCurdy, T. (1992). Work disincentive effects of taxes: An empirical analysis of Swedish men. Carnegie-Rochester Conference Series on Public Policy, 37(1), 239–277.
Flood, L., Wahlberg, R., & Pylkkänen, E. (2007). From welfare to work: Evaluating a tax and benefit reform targeted at single mothers in Sweden. Labour, 21(3), 443–471.
Fortin, B., Truchon, M., & Beausejour, L. (1993). On reforming the welfare system: Workfare meets the negative income tax. Journal of Public Economics, 51(2), 119–151.
Fraker, T., & Moffitt, R. (1988). The effect of food stamps on labor supply: A bivariate selection model. Journal of Public Economics, 35(1), 25–56.
Fuest, C., Peichl, A., & Schaefer, T. (2008). Is a flat tax reform feasible in a grown-up democracy of Western Europe? A simulation study for Germany. International Tax and Public Finance, 15(5), 620–636.
Gong, X., & Van Soest, A. (2002). Family structure and female labor supply in Mexico city. Journal of Human Resources, 37(1), 163–191.
Grogger, J. (2003). The effects of time limits, the EITC, and other policy changes on welfare use, work, and income among female-headed families. Review of Economics and Statistics, 85(2), 394–408.
Grogger, J., & Karoly, L. A. (2005). Welfare reform: Effects of a decade of change. Cambridge, MA: Harvard University Press.
Gurgand, M., & Margolis, D. (2008). Does work pay in France? Monetary incentives, hours constraints, and the guaranteed minimum income. Journal of Public Economics, 92(7), 1669–1697.
Gustafsson, S., & Stafford, F. (1992). Child care subsidies and labor supply in Sweden. Journal of Human Resources, University of Wisconsin Press, 27(1), 204–230.
Haan, P., & Myck, M. (2007). Apply with caution: Introducing UK-style in-work support in Germany. Fiscal Studies, 28(1), 43–72.
Haan, P., & Steiner, V. (2005). Distributional effects of the German tax reform 2000 – A behavioral microsimulation analysis. Zeitschrift für Wirtschafts- und Sozialwissenschaften [Schmollers Jahrbuch: Journal of Applied Social Science Studies], 125(1), 39–49.
Haan, P., & Steiner, V. (2008). Making work pay for the elderly unemployed – Evaluating alternative policy reforms for Germany. FinanzArchiv/Public Finance Analysis, 64(3), 380–402.
Haan, P., & Wrohlich, K. (2010). Optimal taxation: The design of child-related cash and in-kind benefits. German Economic Review, 11, 278–301.
Haan, P., & Wrohlich, K. (2011). Can child care policy encourage employment and fertility? Evidence from a structural model. Labour Economics, 18(4), 498–512.
Hall, R. (1973). Wages, income, and hours of work in the U.S. labor force. In G. Cain & H. Watts (Eds.), Income maintenance and labor supply (pp. 102–162). Chicago, IL: Markham.
Ham, J. C. (1982). Estimation of a labour supply model with censoring due to unemployment and underemployment. Review of Economic Studies, 49(3), 335–354. doi:10.2307/2297360
Hammond, P. J. (1991). Interpersonal comparisons of utility: Why and how they are and should be made. In J. Elster & J. E. Roemer (Eds.), Interpersonal comparisons of well-being (pp. 200–254). Studies in Rationality and Social Change. Cambridge: Cambridge University Press.
Harris, M. N., & Duncan, A. (2002). Intransigencies in the labour supply choice. Melbourne Institute of Applied Economic and Social Research, University of Melbourne.
Hausman, J. A. (1979). The econometrics of labor supply on convex budget sets. Economics Letters, 3(2), 171–174.
Hausman, J. A. (1980). The effect of wages, taxes, and fixed costs on women's labor force participation. Journal of Public Economics, 14(2), 161–194.
Hausman, J. A. (1985a). Taxes and labor supply. In A. J. Auerbach & M. Feldstein (Eds.), Handbook of public economics (1st ed., Vol. 1, pp. 213–263). Amsterdam: Elsevier.
Hausman, J. A. (1985b). The econometrics of nonlinear budget sets. Econometrica, 53(6), 1255–1282. doi:10.2307/1913207
Hausman, J. A., & Ruud, P. (1984). Family labor supply with taxes. American Economic Review, 74(2), 242–248.
Hausman, J. A., & Wise, D. A. (1980). Discontinuous budget constraints and estimation: The demand for housing. Review of Economic Studies, 47(1), 75–96. doi:10.2307/2297104
Heckman, J. J. (1974a). Shadow prices, market wages, and labor supply. Econometrica, 42(4), 679–694. doi:10.2307/1913937
Heckman, J. J. (1974b). Effects of child-care programs on women's work effort. Journal of Political Economy, 82(2), S136–S163.
Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica, 47(1), 153–161. doi:10.2307/1912352
Heckman, J. J., & MaCurdy, T. E. (1986). Labor econometrics. In Z. Griliches (Ed.), Handbook of econometrics (pp. 1917–1977). Amsterdam: North-Holland.
Hernæs, E., Jia, Z., & Strøm, S. (2001). Retirement in non-cooperative and cooperative families. CESifo Working Paper Series 476.
Hey, J. D., & Lambert, P. J. (1980). Relative deprivation and the Gini coefficient: Comment. Quarterly Journal of Economics, 95(3), 567–573.
Horstschräer, J., Clauss, M., & Schnabel, R. (2010). An unconditional basic income in the family context: Labor supply and distributional effects. ZEW Discussion Papers 10-091. ZEW – Zentrum für Europäische Wirtschaftsforschung [Center for European Economic Research].
Hotz, V. J., & Miller, R. A. (1988). An empirical analysis of life cycle fertility and female labor supply. Econometrica, 56(1), 91–118.
Hotz, V. J., & Scholz, J. K. (2003). The earned income tax credit. In R. Moffitt (Ed.), Means-tested transfer programs in the United States. Chicago, IL: University of Chicago Press.
Hoynes, H. (1996). Welfare transfers in two-parent families: Labor supply and welfare participation under AFDC-UP. Econometrica, 64(2), 295–332.
Hurwicz, L. (1962). On the structural form of interdependent systems. In E. Nagel, P. Suppes, & A. Tarski (Eds.), Logic, methodology and philosophy of science (pp. 232–239). Stanford, CA: Stanford University Press.
Ilmakunnas, S., & Pudney, S. (1990). A model of female labour supply in the presence of hours restrictions. Journal of Public Economics, 41(2), 183–210.
Imbens, G. W. (2010). Better late than nothing: Some comments on Deaton (2009) and Heckman and Urzua (2009). Journal of Economic Literature, 48(2), 399–423.
Immervoll, H., Kleven, H. J., Kreiner, C. T., & Saez, E. (2007). Welfare reform in European countries: A microsimulation analysis. Economic Journal, 117(516), 1–44.
Kalb, G. R. (2000). Labour supply and welfare participation in Australian two-adult households: Accounting for involuntary unemployment and the 'cost' of part-time work. Victoria University, Centre of Policy Studies/IMPACT Centre. Retrieved from http://ideas.repec.org/p/cop/wpaper/bp-35.html
Kalb, G. R. (2009). Children, labour supply and child care: Challenges for empirical analysis. Australian Economic Review, 42(3), 276–299.
Kapteyn, A., Kooreman, P., & van Soest, A. (1990). Quantity rationing and concavity in a flexible household labor supply model. Review of Economics and Statistics, 72(1), 55–62.
Keane, M., & Moffitt, R. (1998). A structural model of multiple welfare program participation and labor supply. International Economic Review, 39(3), 553–589.
Keane, M. P. (1995). A new idea for welfare reform. Quarterly Review, Federal Reserve Bank of Minneapolis, (Spring), 2–28.
Keane, M. P. (2010). A structural perspective on the experimentalist school. Journal of Economic Perspectives, 24(2), 47–58.
Keane, M. P. (2011). Labor supply and taxes: A survey. Journal of Economic Literature, 49(4), 961–1075.
Keane, M. P., Todd, P. E., & Wolpin, K. I. (2011). The structural estimation of behavioral models: Discrete choice dynamic programming methods and applications. In O. Ashenfelter & D. Card (Eds.), Handbook of labor economics (1st ed., Vol. 4, No. 4, pp. 331–461). Amsterdam: Elsevier.
Keane, M. P., & Wasi, N. (2013). The structure of consumer taste heterogeneity in revealed vs. stated preference data. Economics Papers 2013-W10, Economics Group, Nuffield College, University of Oxford.
Keane, M. P., & Wolpin, K. I. (2002a). Estimating welfare effects consistent with forward-looking behavior. Part I: Lessons from a simulation exercise. Journal of Human Resources, 37(3), 570–599.
Keane, M. P., & Wolpin, K. I. (2002b). Estimating welfare effects consistent with forward-looking behavior. Part II: Empirical results. Journal of Human Resources, 37(3), 600–622.
Keane, M. P., & Wolpin, K. I. (2007). Exploring the usefulness of a nonrandom holdout sample for model validation: Welfare effects on female behavior. International Economic Review, 48(4), 1351–1378.
King, M. A. (1983). Welfare analysis of tax reforms using household data. Journal of Public Economics, 21(2), 183–214.
Klevmarken, N. A. (1997). Modelling behavioural response in EUROMOD. Faculty of Economics, University of Cambridge. Retrieved from http://ideas.repec.org/p/cam/camdae/9720
Kornstad, T., & Thoresen, T. O. (2006). Effects of family policy reforms in Norway: Results from a joint labour supply and childcare choice microsimulation analysis. Fiscal Studies, 27(3), 339–371.
Kornstad, T., & Thoresen, T. O. (2007). A discrete choice model for labor supply and childcare. Journal of Population Economics, 20(4), 781–803.
Kosters, M. H. (1966). Income and substitution effects in a family labor supply model. P-3339. The Rand Corporation.
Kosters, M. H. (1969). Effects of an income tax on labor supply. In A. C. Harberger & M. C. Bailey (Eds.), The taxation of income from capital (pp. 301–332). Washington, DC: Studies of Government Finance, Brookings Institution.
Labeaga, J., Oliver, X., & Spadaro, A. (2008). Discrete choice models of labour supply, behavioural microsimulation and the Spanish tax reforms. Journal of Economic Inequality, 6(3), 247–273.
Li, J., & O'Donoghue, C. (2013). A survey of dynamic microsimulation models: Uses, model structure and methodology. International Journal of Microsimulation, 6(2), 3–55.
Löffler, M., Peichl, A., & Siegloch, S. (2013). Validating structural labor supply models. Annual Conference 2013 (Duesseldorf): Competition Policy and Regulation in a Global Economic Order 79819, Verein für Socialpolitik [German Economic Association].
Lokshin, M. (2004). Household childcare choices and women's work behavior in Russia. Journal of Human Resources, 39(4), 1094–1115.
Lucas, R. Jr. (1976). Econometric policy evaluation: A critique. Carnegie-Rochester Conference Series on Public Policy, 1(1), 19–46. Elsevier.
MaCurdy, T., Green, D., & Paarsch, H. (1993). Assessing empirical approaches for analyzing taxes and labor supply. The Journal of Human Resources, 25(3), 415–490.
Manski, C. (2012). Identification of income-leisure preferences and evaluation of income tax policy. Centre for Microdata Methods and Practice, Institute for Fiscal Studies. Retrieved from http://ideas.repec.org/p/ifs/cemmap/07-12.html
Marschak, J. (1953). Economic measurements for policy and prediction. In W. Hood & T. Koopmans (Eds.), Studies in econometric method (pp. 1–26). New York, NY: Wiley.
Matzkin, R. L. (2013). Nonparametric identification in structural economic models. Annual Review of Economics, 5(1), 457–486.
McElroy, M. B., & Horney, M. J. (1981). Nash-bargained household decisions: Toward a generalization of the theory of demand. International Economic Review, 22(2), 333–349.
McFadden, D. (1974). The measurement of urban travel demand. Journal of Public Economics, 3(4), 303–328.
McFadden, D. (1978). Modeling the choice of residential location. In A. Karlqvist, L. Lundqvist, F. Snickars, & J. Weibull (Eds.), Spatial interaction theory and planning models (pp. 75–96). Cambridge, MA: Harvard University Press.
McFadden, D. (1984). Econometric analysis of qualitative response models. In Z. Griliches & M. D. Intriligator (Eds.), Handbook of econometrics (1st ed., Vol. 2, pp. 1395–1457). Amsterdam: Elsevier.
McFadden, D., & Train, K. (2000). Mixed MNL models for discrete response. Journal of Applied Econometrics, 15(5), 447–470.
Meghir, C., & Phillips, D. (2008). Labour supply and taxes. IZA Discussion Paper No. 3405.
Meyer, B. D., & Holtz-Eakin, D. (2002). Making work pay. New York, NY: Russell Sage Foundation.
Meyer, B. D., & Rosenbaum, D. T. (2001). Welfare, the earned income tax credit, and the labor supply of single mothers. The Quarterly Journal of Economics, 116(3), 1063–1114.
Miller, R., & Sanders, S. (1997). Human capital development and welfare participation. Carnegie-Rochester Conference Series on Public Policy, 46, 1–44.
Mirrlees, J. A. (1971). An exploration in the theory of optimum income taxation. Review of Economic Studies, 38(114), 175–208.
Moffitt, R. (1983). An economic model of welfare stigma. American Economic Review, 73(5), 1023–1035.
Labour Supply Models
Moffitt, R. (1986). The econometrics of piecewise-linear budget constraints: A survey and exposition of the maximum likelihood method. Journal of Business & Economic Statistics, 4(3), 317–328.
Moffitt, R. (2006). Welfare work requirements with paternalistic government preferences. Economic Journal, 116(515), F441–F458.
O'Donoghue, C. (2001). Dynamic microsimulation: A methodological survey. Brazilian Electronic Journal of Economics, 4(2). Retrieved from http://econpapers.repec.org/article/bejissued/v_3a4_3ay_3a2001_3ai_3a2_3acathal.htm
Orcutt, G., Greenberger, M., Korbel, J., & Rivlin, A. (1961). Microanalysis of socioeconomic systems: A simulation study. New York, NY: Harper & Row.
Orcutt, G. H. (1957). A new type of socio-economic system. Review of Economics and Statistics, 39(2), 116–123.
Pacifico, D. (2013). On the role of unobserved preference heterogeneity in discrete choice models of labour supply. Empirical Economics, 45(2), 929–963.
Peichl, A., & Siegloch, S. (2012). Accounting for labor demand effects in structural labor supply models. Labour Economics, 19(1), 129–138.
Revesz, J. T. (1989). The optimal taxation of labour income. Public Finance, 44(3), 453–475.
Ribar, D. C. (1995). A structural model of child care and the labor supply of married women. Journal of Labor Economics, 13(3), 558–597.
Roemer, J. E. (1998). Equality of opportunity. Cambridge, MA: Harvard University Press.
Rosen, H. S. (1976). Taxes in a labor supply model with joint wage-hours determination. Econometrica, 44(3), 485–507.
Rosen, S. (1974). Effects of child-care programs on women's work effort: Comment. Journal of Political Economy, 82(2), 164–169.
Saez, E. (2001). Using elasticities to derive optimal income tax rates. Review of Economic Studies, 68(1), 205–229.
Saez, E. (2002). Optimal income transfer programs: Intensive versus extensive labor supply responses. Quarterly Journal of Economics, 117(3), 1039–1073.
Saez, E., Slemrod, J., & Giertz, S. H. (2012). The elasticity of taxable income with respect to marginal tax rates: A critical review. Journal of Economic Literature, 50(1), 3–50.
Scutella, R. (2004). Moves to a basic income-flat tax system in Australia: Implications for the distribution of income and supply of labour. Melbourne Institute Working Paper Series wp2004n05. Melbourne Institute of Applied Economic and Social Research, The University of Melbourne.
Sen, A. (1974). Informational bases of alternative welfare approaches: Aggregation and income distribution. Journal of Public Economics, 3(4), 387–403.
Rolf Aaberge and Ugo Colombino
Stern, N. (1986). On the specification of labour supply functions. In R. Blundell & I. Walker (Eds.), Unemployment, search and labour supply (pp. 143–189). Cambridge: Cambridge University Press. ISBN: 0521320275.
Swann, C. A. (2005). Welfare reform when recipients are forward-looking. Journal of Human Resources, 40(1), 31–56.
Tobin, J. (1958). Estimation of relationships for limited dependent variables. Econometrica, 26(1), 24–36. doi:10.2307/1907382
Todd, P. E., & Wolpin, K. I. (2006). Assessing the impact of a school subsidy program in Mexico: Using a social experiment to validate a dynamic behavioral model of child schooling and fertility. American Economic Review, 96(5), 1384–1417.
Todd, P. E., & Wolpin, K. I. (2008). Ex ante evaluation of social programs. Annales d'Économie et de Statistique, (91–92), 263–291.
Train, K. (2003). Discrete choice methods with simulation. Online economics textbooks, SUNY-Oswego, Department of Economics.
Train, K. E., McFadden, D. L., & Ben-Akiva, M. (1987). The demand for local telephone service: A fully discrete model of residential calling patterns and service choices. The RAND Journal of Economics, 18(1), 109–123.
Triest, R. K. (1990). The effect of income taxation on labor supply in the United States. Journal of Human Resources, 25(3), 491–516.
Tummers, M., & Woittiez, I. (1991). A simultaneous wage and labour supply model with hours restrictions. Journal of Human Resources, 26, 393–423.
Tuomala, M. (1990). Optimal income tax and redistribution. Oxford: Oxford University Press.
Tuomala, M. (2010). On optimal non-linear income taxation: Numerical results revisited. International Tax and Public Finance, 17(3), 259–270.
Van Klaveren, C., & Ghysels, J. (2012). Collective labor supply and child care expenditures: Theory and application. Journal of Labor Research, 33(2), 196–224.
Van Soest, A. (1995). Structural models of family labor supply: A discrete choice approach. Journal of Human Resources, 30(1), 63–88.
Van Soest, A., Das, M., & Gong, X. (2002). A structural labour supply model with flexible preferences. Journal of Econometrics, 107(1–2), 345–374.
Van Soest, A., Woittiez, I., & Kapteyn, A. (1990). Labor supply, income taxes, and hours restrictions in the Netherlands. The Journal of Human Resources, 25(3), 517–558.
Varian, H. R. (2012). Revealed preferences and its applications. Economic Journal, 122(560), 332–338.
Vermeulen, F. (2005). And the winner is … An empirical evaluation of unitary and collective labour supply models. Empirical Economics, 30(3), 711–734.
Vermeulen, F., Bargain, O., Beblo, M., Beninger, D., Blundell, R., Carrasco, R., … Myck, M. (2006). Collective models of labor supply with nonconvex budget sets and nonparticipation: A calibration approach. Review of Economics of the Household, 4(2), 113–127.
Wales, T. J., & Woodland, A. D. (1976). Estimation of household utility functions and labor supply response. International Economic Review, 17(2), 397–410.
Wales, T. J., & Woodland, A. D. (1979). Labour supply and progressive taxes. Review of Economic Studies, 46(1), 83–95.
Watanatada, T., & Ben-Akiva, M. (1979). Forecasting urban travel demand for quick policy analysis with disaggregate choice models: A Monte Carlo simulation approach. Transportation Research, 13A, 241–248.
Weymark, J. A. (1981). Generalized Gini inequality indices. Mathematical Social Sciences, 1(4), 409–430.
Wolpin, K. I. (1996). Public-policy uses of discrete-choice dynamic programming models. American Economic Review, 86(2), 427–432.
Wolpin, K. I. (2007). Ex ante policy evaluation, structural estimation, and model selection. American Economic Review, 97(2), 48–52.
Wrohlich, K. (2008). The excess demand for subsidized child care in Germany. Applied Economics, 40(10), 1217–1228.
Yaari, M. E. (1987). The dual theory of choice under risk. Econometrica, 55(1), 95–115.
Yaari, M. E. (1988). A controversial proposal concerning inequality measurement. Journal of Economic Theory, 44(2), 381–397.
Zabalza, A. (1983). The CES utility function, non-linear budget constraints and labour supply: Results on female participation and hours. Economic Journal, 93(37), 312–330.
Zabalza, A., Pissarides, C., & Barton, M. (1980). Social security and the choice between full-time work, part-time work and retirement. Journal of Public Economics, 14(2), 245–276.
CHAPTER 8
Consumption and Indirect Tax Models
Bart Capéau, André Decoster and David Phillips
8.1. Introduction

There is no formal definition of what constitutes an indirect tax (Kay & King, 1990). However, in common use it usually means those taxes that are levied on the sale of goods and services, and that are generally collected and remitted by the vendor of those goods and services rather than the purchaser. Examples of such taxes include excise duties, sales taxes, value added taxes (VAT), and import or export duties. Indirect taxes thus defined have two characteristics in which they differ from direct taxes such as income tax. First, because in most instances it is not possible to identify the consumer (and thus their personal characteristics) as opposed to the purchaser of a good, the amount of tax paid on a particular purchase is usually (but not always) the same for everyone;1 otherwise, an arbitrage opportunity would exist, with those facing low tax rates able to purchase goods and sell them on to those facing higher tax rates. Second, the tax schedule is generally linear: the same tax rate applies to the first dollar of spending on a particular good or service as to the millionth dollar. There are exceptions to these rules, of course: some price subsidies (which can be considered as a negative indirect tax) are available only to certain parts of the population, and on a limited amount of expenditure; and some durable purchases, such as housing
1 As always there are exceptions: for example, in Belgium the criteria for eligibility for reduced registration fees when buying a house on the secondary market depend on the household characteristics of the buyer (number of children).
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293007
© 2014 BY EMERALD GROUP PUBLISHING LIMITED ALL RIGHTS RESERVED
or cars, are subject to non-linear tax schedules where rates are higher for more expensive purchases. But generally, linear tax schedules unrelated to purchaser characteristics apply. These features considerably simplify data requirements and calculations in indirect tax micro-simulation (relative to direct tax and benefit micro-simulation, for instance). However, linearity does not imply a uniform rate or tax treatment of all goods. The prevalence of differences in tax rates between goods means that indirect tax micro-simulation models need to be based on data with detailed information on spending on different goods, such as household expenditure surveys. It also provides the raison d'être for much analysis using indirect tax micro-simulation models: evaluating the distributional, behavioural and welfare effects of non-uniform tax rate structures, and of reforms that either increase or reduce these non-uniformities.

The three most important (and most commonly simulated) indirect taxes are VAT, retail sales tax, and excise duties. VAT is usually levied as a percentage of the pre-tax price charged by the vendor and is charged on sales to both final consumers and other traders. However, in most instances, VAT-registered traders can reclaim any VAT paid on inputs, and thus, formally, the tax falls on sales to final consumers. Certain traders cannot reclaim this VAT, however: if the goods purchased are inputs to the production of goods that are exempt from VAT, or if the trader has a turnover below the VAT registration threshold that applies in some countries. In these cases, although no VAT is charged on subsequent sales of those goods to consumers, the increased production costs due to un-reclaimable VAT charged on inputs may be reflected in the final price paid by the consumer. Two polar cases can be distinguished.
At one extreme, the entire increase in input costs is borne by workers and capital owners in the form of lower wages and profits, leaving the price charged to the consumer unaffected. At the other, wages and profits are assumed unchanged, and the un-reclaimable VAT is fully passed through to the price charged to the consumer. Most likely, some mix of the two effects takes place, and assumptions need to be made about the incidence of any VAT not passed through to consumer prices. As we discuss later, in the case of full incidence on consumer prices, methods exist to estimate these effects of VAT on the price of exempt goods, and to incorporate the resulting implicit tax rates into micro-simulation models. However, most micro-simulation models only include indirect taxes paid directly by consumers and do not take account of the indirect effects of non-reclaimable VAT paid at earlier stages of production. Retail sales taxes, such as those that operate in the United States, are only chargeable on sales to consumers, which means that these difficulties do not arise. Excise duties on goods such as alcohol, tobacco and road fuels are usually not reclaimable by traders, and in the case of road fuels, a large fraction of the total tax yield is likely to be paid by traders and passed on
in the prices of other goods. Thus, standard indirect tax micro-simulation models that account only for taxes paid directly by households may provide inaccurate results as far as the final or total impact of excise duties is concerned. Furthermore, the excise duty payable on a good is in most cases not a function of the amount spent but of the number of units of the item purchased (e.g. the number of cigarettes, the litres of fuel, or the alcohol content of the beer), sometimes combined with an ad valorem component based on the amount spent too. This means that, in theory, data on physical quantities as well as expenditures are required to accurately model excise duties. However, as we show in the appendix, information on expenditures and retail prices can be used to calculate ad valorem rates that are equivalent to the excise duties.

More generally, data availability can be problematic. Household budget surveys are collected less frequently than income surveys in many developed countries, and surveys with detailed information on both income and expenditure are even rarer. This means there is often a need to 'uprate' older data to better reflect current expenditure levels and patterns, and methods to link income and expenditure data from different sources are often required. Many of the same questions that are faced in constructing direct tax and benefit micro-simulation models also need to be considered for indirect tax micro-simulation models. First is the question of the assumed incidence of indirect taxes, which for practical purposes is usually taken to fall fully on consumer prices, although this may not be the case in reality.2 Second is what behavioural response, if any, to account for when modelling indirect taxes. And third is how to measure (or proxy) the welfare effects of tax changes at both the individual and aggregate level, which is clearly linked to the changes in expenditure and tax payments, but not necessarily in a simple way.
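The appendix's conversion of a per-unit excise duty into an equivalent ad valorem rate, mentioned above, can be sketched in a few lines. The sketch below is purely illustrative: the function names, the assumed price structure (retail price equals producer price plus unit duty, grossed up by VAT) and all numbers are our own assumptions, not taken from the chapter.

```python
# Hedged sketch: re-expressing a per-unit excise duty as an ad valorem
# rate using only retail prices and observed expenditures. Assumes
# retail_price = (producer_price + unit_duty) * (1 + vat_rate);
# all names and numbers are illustrative, not from the chapter.

def advalorem_equivalent(retail_price, unit_duty, vat_rate):
    """Per-unit duty expressed as an ad valorem rate on the
    tax-exclusive producer price."""
    pre_vat = retail_price / (1 + vat_rate)   # producer price + unit duty
    producer_price = pre_vat - unit_duty
    return unit_duty / producer_price

def excise_paid(expenditure, retail_price, unit_duty):
    """Excise implied by observed spending: duty per unit times the
    quantity inferred as expenditure / retail price."""
    quantity = expenditure / retail_price
    return unit_duty * quantity

# Example: fuel at 1.50 per litre, duty of 0.60 per litre, 20% VAT.
rate = advalorem_equivalent(1.50, 0.60, 0.20)  # duty as a share of producer price
duty = excise_paid(30.00, 1.50, 0.60)          # excise embedded in 30.00 of spending
```

Under these assumptions a household's recorded expenditure alone suffices to simulate excise revenue, which is why models can then treat all indirect taxes as ad valorem.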
Thus, although indirect taxes are conceptually much simpler to model than direct taxes (the tax base, i.e. expenditure on a given commodity, and the rates are easier to calculate than in the case of direct taxes and benefits), there are nonetheless methodological and practical challenges which need to be contended with. And in many applications, reforms of indirect and direct taxation come in one package. When it comes to taking into account behavioural reactions in such a context, the development of
2 In Section 8.3.1 we discuss in more detail tax incidence in the context of indirect tax micro-simulation models, and conclude that most models rely on the assumption of fixed producer prices. Under that assumption, consumers are assumed to bear the full burden of VAT on goods sold by producers or traders that can reclaim VAT on inputs. This no longer holds true when the assumption of fixed producer prices is relaxed.
models that fully integrate the labour supply decision with the allocation of expenditure across goods remains unfulfilled.

This chapter puts the methodological issues and the choices available in developing and using indirect tax micro-simulation models at centre stage. It proceeds as follows. In Section 8.2 we introduce the academic and policy questions which indirect tax micro-simulation models are useful in helping to answer. We also present figures on indirect tax systems and highlight a number of recent reforms to these systems to demonstrate the continuing practical need for such models. Section 8.3 sets out the methodological issues and choices involved in building these models: the types of spending and taxes to cover; the type of behavioural response to taxes to model; the assessment of the distributional impact of tax systems and tax reforms; and the data required for different types of models. To provide some concrete examples, Section 8.4 discusses two particular models, the methodological choices made in constructing them, and the uses to which they have been and can be put. Finally, Section 8.5 concludes and points to what look to be fruitful future developments in indirect tax micro-simulation methodology.
8.2. What is the role of indirect tax micro-simulation models?

There are at least two reasons why indirect tax micro-simulation models are useful tools. The first follows from their direct practical relevance for analysing and evaluating reforms to taxes that generate a substantial proportion of government revenue and have a similarly significant impact on households' spending power and welfare. The second is more theoretical, and stems from the lack of general theoretical answers about the characteristics of optimal tax systems in the 'real world'.

The importance of indirect tax revenues in OECD countries is illustrated in Figure 8.1, where we show the average fraction of total tax revenues (including social security contributions) made up of VAT, sales tax and excise duty receipts. In 2011, VAT, sales taxes and excise duties contributed, respectively, 19.1%, 0.5% and 7.9% of tax revenue. The importance of VAT has certainly increased substantially over the last four decades, growing from 9.4% in 1976 to 19.1% in 2011. Excise revenues, in contrast, and certainly sales tax revenues, have fallen in relative terms over the same period. The near disappearance of sales taxes is mainly driven by the introduction of VAT in Canada in 1991 (as a source of federal revenue besides sales taxes at the provincial level), and in Australia and Slovenia in 1999 and 1998 respectively. Among the OECD countries, the sales tax presence in Figure 8.1 is now almost exclusively due to the United States. Indirect taxes are thus an important component of the overall tax system, and understanding their distributional, behavioural and welfare
Figure 8.1. Share of VAT, sales taxes and excise duties as percentage of total tax revenue, OECD average, 1976–2011. [Figure: line chart with series VAT, Sales tax and Excise; vertical axis 0–30%; horizontal axis 1976–2010.]
Source: Data downloaded from OECD.Stat, available at http://stats.oecd.org/ on May 23, 2014. For VAT we display variable ‘5111 Value added taxes’, for sales taxes variable ‘5112 Sales tax’ and for excise duties ‘5121 Excises’.
effects is vital to understanding the effects of the tax system as a whole. Figure 8.2 shows that there is also significant cross-country variation in the importance of VAT and excise duties in overall revenues. The structure of VAT rates also varies considerably, with the standard rate ranging from 15% in Luxembourg to 27% in Hungary (and most countries applying a standard rate of 20% or 21%).3 Moreover, there have been substantial reforms of indirect tax systems over the last 15 years, including:

• Increases in the main rates of VAT in Finland, Greece, Ireland, Italy, the Netherlands, Spain and the United Kingdom as part of fiscal consolidation efforts, and in Germany as part of a shift of taxes away from labour to consumption;
3 For a detailed description of the VAT structure in the 28 EU member states, see http://ec.europa.eu/taxation_customs/resources/documents/taxation/vat/how_vat_works/rates/vat_rates_en.pdf
Figure 8.2. Share of VAT, sales taxes and excise duties as percentage of total tax revenue for different OECD countries in 2011. [Figure: bar chart with series Sales tax, Value added and Excises for countries from Australia to the United States plus the OECD average; vertical axis 0–30%.]
Source: Data downloaded from OECD.Stat, available at http://stats.oecd.org/ on May 23, 2014. For VAT we display variable ‘5111 Value added taxes’, for sales taxes variable ‘5112 Sales tax’ and for excise duties ‘5121 Excises’.
• Changes in the set of goods subject to reduced rates of VAT (such as the application of reduced rates to restaurants in France), and changes in the reduced rates themselves (such as the reduction of the rate on domestic energy from 8% to 5% in the United Kingdom in the late 1990s);
• Significant real-terms changes in particular excise duty rates, such as in the United Kingdom and in Belgium.

Assessing the impact of these changing tax rates and structures on tax revenues, household welfare and spending patterns has been a key use of micro-simulation models. Of course, micro-simulation models allow not only the ex post evaluation of actual reforms but also the ex ante evaluation and analysis of potential reforms. And, although the stampede to VAT is now largely behind us (most countries had adopted VAT by the early 2000s, with the notable exceptions of the United States, China and India), there is still an intensely active debate about the proper structure of indirect tax rates, and about the role of indirect taxes compared to direct taxes and social security contributions. Existing indirect tax micro-simulation models,
given their relatively simple behavioural models (see Section 8.3), cannot answer these questions on their own. But by providing the tools to quantitatively assess some aspects of suggested reforms on representative samples of households, they do provide useful inputs into these debates.

This brings us to the second rationale for building micro-simulation models for indirect taxes. Since the seminal work of Atkinson and Stiglitz (1976), a large theoretical literature has developed which investigates whether the tax rates imposed on different goods and services should vary or instead be uniform.4 Initially, the emphasis was mainly on the crucial role of separability between labour supply choices and the allocation of earned income to different commodities in obtaining the result that optimal commodity taxation is uniform. Empirical testing of this assumption is limited, but what evidence there is (Browning & Meghir, 1991) suggests that there may be efficiency gains from some variation in tax rates. More recently, several authors have highlighted the role of preference heterogeneity and shown that taste differences can justify differential indirect tax rates as part of redistributive policy (see the work of Blomquist & Christiansen, 2008; Cremer, Pestieau, & Rochet, 2001; Saez, 2002; Sandmo, 1993). However, that does not mean that existing differences in rates, such as the reduced rates on food and other 'essentials' found in many VAT systems, are worthwhile, nor that the potential efficiency gains outweigh the real administrative, legal and compliance costs associated with applying different rates to different goods. In part because of this, a number of major studies have concluded that moves towards a broader, more uniform VAT could be beneficial (for instance, Mirrlees et al., 2011).
Moves towards a uniform VAT have also been suggested for developing countries (Anton, Hernandez, & Levy, 2012; Ebrill, Keen, Bodin, & Summers, 2001), although this is perhaps more controversial (Bird & Gendron, 2007). However, there is also lobbying in many countries for extending reduced rates of VAT to additional goods and services, such as the campaigns in the United Kingdom for extending reduced rates of VAT to food served in pubs and restaurants, holiday accommodation and tourism attractions, and housing renovation and repair.5 Micro-simulation models do not allow us to answer whether rates should vary or not. But they can be used to assess some of the distributional and behavioural effects of existing rate structures and of moves
4 In these theoretical models, a uniform tax rate on all commodities and services reduces to a proportional income tax. Therefore, showing in which cases optimal commodity taxes should be uniform is tantamount to pinpointing the conditions under which commodity or indirect taxation is redundant.
5 See http://www.vatclubjacquesborel.co.uk/index.html, http://www.cuttourismvat.co.uk/, and http://www.fmb.org.uk/news-publications/newsroom/campaigns/cut-the-vat/
towards or away from uniformity. For instance, such models have been used to show that reduced rates on food and other essentials are not particularly well-targeted ways of redistributing to poorer households, and to assess the distortionary effects on consumption behaviour of having different rates on different goods (Institute for Fiscal Studies [IFS] et al., 2011). They have also been used alongside direct tax and benefit micro-simulation models to examine how poorer households could be compensated for the abolition of such reduced rates (Mirrlees et al., 2011). By taking into account real households' spending patterns, and the correlation of these with other household characteristics such as income, micro-simulation models provide a link between theoretical models of commodity tax design and the real world.

There has also been a debate about the appropriate mix between direct and indirect taxation (Atkinson, 1977; Atkinson, Stern, & Gomulka, 1980; Atkinson & Stiglitz, 1976; Cnossen, 2012; Cremer et al., 2001), and particularly between employers' social security contributions and VAT (de Mooij & Keen, 2012). In 2006, for instance, reductions in employer social security contributions in Germany were paid for by increases in VAT: the hope was that this would boost employment and Germany's international competitiveness. Micro-simulation models of both indirect taxation and consumption, and of direct taxes and labour supply, have been used to assess the impact of this reform (Bach, Haan, Hoffmeister, & Steiner, 2006), finding, in the short term at least, small increases in employment but also in inequality. Micro-simulation models have also been used to assess the long-run distributional effects of such policies for a broader range of EU countries in research for the EU Commission (CPB et al., 2013).
On their own, micro-simulation models do not answer the question 'would such reforms be a good idea?', but they do allow detailed investigation of the effects of reforms on different types of households that may be missed if analysis were restricted to more macro-level or theoretical work. Thus, indirect tax micro-simulation models have many uses, and they can contribute to areas of active economic research and debate. In the final section of this chapter we discuss how developments in micro-simulation methodology may expand the set of questions such models can address. But first, we set out the main methodological issues and choices that need to be faced when building a typical indirect tax micro-simulation model (Section 8.3) and discuss two existing models in more depth (Section 8.4).

8.3. Methodological issues and choices

8.3.1. Coverage of the model

A first decision that has to be made when constructing any micro-simulation model is the individual ('micro') unit to which it will apply. In
this chapter we limit ourselves to models where the micro-unit is the household. To the best of our knowledge, there are no micro-simulation models that consider the within-household distribution of consumption or indirect tax payments, although there is certainly a substantial economic literature that explores the implications of moving from the household as the decision-making unit (the 'unitary' model) to the individuals within that household (as in the 'collective' model). This literature generally rejects the assumption that households make decisions as a single unit (see Hoddinott, Alderman, & Haddad, 2002 or Rode, 2011 for an overview, or Chiappori & Donni, 2011 for a list of studies). However, the use of alternative individual-based models for micro-simulation is stymied both by conceptual challenges and by the lack of data on how much of household spending is for the benefit of different household members.6 There are models where the unit of analysis is the firm rather than the household (Bardazzi, Parisi, & Pazienza, 2004; Reister, Spengel, Heckemeyer, & Finke, 2008), but these are much less common and tend to focus on corporation tax rather than indirect taxes such as VAT. In Section 8.5 of this chapter we discuss how new models are being developed that incorporate both consumers and producers (firms), and Chapter 16 of this book looks more generally at firm-level micro-simulation models.

The next decision is the coverage of the model, both in terms of (a) the taxes to be modelled and (b) the goods and services to be included. General-purpose indirect tax micro-simulation models typically model VAT (or retail sales tax) and excise duties, which in most countries together make up the vast majority of indirect tax revenues. As discussed in Section 8.2, some VAT and excise duties are payable by firms on intermediate transactions with other businesses.
Most models typically ignore this part of tax revenues and focus on the part levied on final sales to consumers. However, if one assumes that this un-reclaimable tax is ultimately incident on the prices charged to final consumers, it is possible to include these indirect effects by using input-output tables to calculate the implicit tax rates built into prices by these intermediate taxes (see Ahmad & Stern, 1984 or Tamaoka, 1994). These implicit tax rates can then be added to the tax rates applicable on final sales to calculate total tax rates. The data requirements for this are briefly discussed in Section 8.3.4. It is also possible to account for such taxes by grossing-up revenues to match administrative data (which include this unrecoverable VAT). This would improve the accuracy of the model's estimates of the revenue effects of reforms. But if this technique is also used for
6 Such information is, strictly speaking, not required for testing the restrictions implied by the collective as compared to the unitary model.
distributional analysis, the implicit assumption is that the incidence across households of un-reclaimable input VAT and excise is the same as that of the taxes paid directly by households, which may not be true. We discuss the issue of grossing-up to match external data on tax revenues and expenditures more generally later.

Excise duties usually take the form of a fixed amount of tax per unit (e.g. per litre of petrol, or per litre of alcohol content), sometimes combined with an ad valorem rate (as is the case for cigarettes in the European Union). To model these, one must therefore either have data on quantities, impute quantities, or convert the per-unit tax into an approximate ad valorem rate (see the appendix). For ease of exposition we assume all taxes are ad valorem in the rest of this section.

Other taxes, which are much less often modelled, include import duties,7 registration fees (such as for cars), annual motor vehicle taxes, and taxes on land, housing or business premises transactions (such as the Stamp Duty Land Tax in the United Kingdom). Subject to assumptions about incidence and the availability of data on imports of different types, import duties can be incorporated via input-output table methods. Registration fees are usually limited to durable goods, which, as we discuss below, present some problems for consumption and micro-simulation models. Taxes on land and property are also problematic to deal with, due to the limited data on housing purchases and assets in most household expenditure surveys, and the fact that the economic incidence of these taxes is likely to differ from that of other indirect taxes (see below).

This brings us to the question of which goods and services should be included in a micro-simulation model. In principle, one would want to include all goods purchased and consumed, and calculate the taxes paid and any welfare or behavioural effects of these taxes.
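The input-output adjustment mentioned earlier (in the spirit of Ahmad & Stern, 1984) can be sketched briefly. Under full pass-through to prices, the effective tax rate on each sector's output equals its own unrecoverable tax plus the taxes embedded in its inputs. The three-sector coefficient matrix and tax rates below are invented for illustration and are not taken from the chapter or from the cited papers.

```python
import numpy as np

# Stylised sketch of the input-output approach to implicit tax rates.
# A[i, j] = value of input from sector i used per unit of sector j's output.
# Matrix and rates are invented for illustration.
A = np.array([[0.10, 0.20, 0.05],
              [0.05, 0.10, 0.15],
              [0.10, 0.05, 0.10]])

# Unrecoverable tax levied directly per unit of each sector's output
# (e.g. un-reclaimable VAT or excise on intermediate sales).
t_direct = np.array([0.00, 0.05, 0.10])

# Full pass-through: each sector passes on its own tax plus the taxes
# embedded in its inputs, so e = t_direct + A' e, i.e.
# e = (I - A')^{-1} t_direct.
t_effective = np.linalg.solve(np.eye(3) - A.T, t_direct)

# Implicit component to add to statutory rates on final sales:
t_implicit = t_effective - t_direct
```

Note that even the first sector, taxed at a zero direct rate here, ends up with a positive effective rate because of the taxed inputs it buys; this is precisely the implicit tax on exempt goods that models restricted to taxes paid directly by households ignore.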
However, data limitations and differences in the characteristics of certain goods mean that this may not always be possible or, indeed, desirable in a single model. Durables, for instance, are purchased much less frequently than non-durable goods. Although household surveys typically try to get around this by asking about durable purchases over a longer time period than non-durable purchases, for larger items (such as a car, a new kitchen, or a house) reported expenditures probably do not reflect the average (annualised) spending on such goods by a household. This creates problems for determining households' places in the distribution of
7 Export duties present additional difficulties. Where exporters are price-takers in international markets, taxes cannot be passed on to overseas buyers but are instead incident, in the first place, on the exporting firms (and ultimately their employees and shareholders). Analysing such a situation would require a firm-level micro-simulation model. Alternatively, if exporters have market power, taxes may be partly or fully incident on overseas buyers.
Consumption and Indirect Tax Models
consumption (see Section 8.3.3), for calculating their average (annualised) tax payments as a result of spending on such durable goods, and for the estimation of consumer demand systems (see Section 8.3.2). One option is to exclude durables from the model but, given that these make up a large fraction of total expenditure (and especially of expenditure subject to VAT), this is not ideal. An alternative is to exclude durables from those parts of the micro-simulation model where the infrequency of purchase causes particular problems (such as demand systems that examine the effect of tax changes on spending patterns), but to include them in the main (revenue and distributional) part of the model.

Housing is particularly problematic: it is often a very large component of overall consumption; transactions are relatively infrequent among owner-occupiers; and, in addition to being a consumption good, it simultaneously serves as an investment that can be sold on. For determining a household's place in the distribution of consumption, one might want to include a measure of rent (actual rent for those actually renting, or imputed rent for owner-occupiers) in the expenditure measure. This approach has been followed in work for the European Commission (IFS et al., 2011). Whether one includes housing consumption in measures of expenditure or income can lead to significantly different rankings of households in the consumption or income distribution (Brewer & O'Dea, 2012). Modelling taxes on housing or housing transactions is even more problematic, and for this reason it is often not attempted even when housing is included in the measure of consumption. First, the incidence of taxes on housing is likely to be very different from that on other goods. For instance, the imposition of VAT on newly built housing may be mostly incident on land-owners rather than house-buyers if the supply of housing land is less elastic than the demand for new-build houses.
This is an example of the capitalisation of a tax, which may also occur for recurrent property taxes such as the UK's council tax, or transactions taxes like the UK's Stamp Duty Land Tax. Alternatively, if VAT does act to increase the price of new-build houses, it is also likely to increase the price of existing houses, despite no tax being levied on these, resulting in a windfall gain for existing home-owners. Such effects are quite different from the way indirect taxes are modelled in general-purpose indirect tax and consumption models.

A significant part of 'expenditure' on one type of service is nearly always missing: financial services. Whilst household expenditure surveys may ask how much is spent on fees for banks or credit cards, a large part of the 'cost' of these services is the spread between the interest rates charged to (or awarded to) consumers and those which the bank faces. As information on financial assets, debts and interest rates is generally not recorded in household surveys, it is generally not possible to incorporate such effects into indirect tax micro-simulation models. Moreover, financial services are usually exempt from VAT, but, as we mentioned before,
VAT paid on intermediate inputs could still affect the price paid by consumers for these services.

Lastly, once one has chosen the commodities to include and the taxes to model, there is a question as to how to group commodities into categories for use in the micro-simulation model. The question arises because expenditure surveys, which are the underlying databases for micro-simulation models of indirect taxes, typically contain several hundred categories. In fact, as long as the model is a purely arithmetic one (that is, without behavioural reactions included in the model), there is no real need to group these hundreds of different commodities into a smaller number of categories. The only thing to do is to match each single commodity to its statutory VAT rate or excise duty, and apply the appropriate tax rule. The less one groups commodities into categories, the more flexible the model is, more easily allowing changes in the goods subject to different rates of VAT, for instance.

But when behavioural responses are simulated (see below), one seldom models indirect taxes at the most detailed commodity level available in expenditure surveys. In principle, it is best if one is able to group commodities both according to the tax rates they face (such as reduced or standard rate of VAT, or different types of duties) and by their functional characteristics (such as 'food', 'non-alcoholic drinks', etc.). This means one ends up with categories such as 'food subject to the reduced rate', 'food subject to the standard rate', 'beer', 'wine' and 'spirits'. The more such categories one has, the more flexible the model is. But in some cases, especially where behavioural models are estimated, broader groups need to be constructed, which may need to combine goods subject to different tax rates (such as different types of alcohol subject to different duty rates).
In this case, one must calculate the average tax rate for the commodity group as a whole. This is the weighted average of the tax rates of the component commodities, where the weights are the population-wide shares of the component commodities in overall spending on the commodity group. Assume, for example, that a commodity aggregate, say $G$, consists of the goods $G_1, G_2, \ldots, G_g, \ldots, G_G$, that population-wide expenditures on commodity $G_g$ are denoted by $\eta_{G_g}$, its consumer price by $q_{G_g}$, and that the tax rate on $G_g$ equals $t_{G_g}$. The consumer price and tax rate of the commodity aggregate $G$, say $q_G$ and $t_G$, are then calculated as:

$$q_G = \sum_{g=1}^{G} \frac{\eta_{G_g}}{\sum_{g=1}^{G} \eta_{G_g}}\, q_{G_g} \quad \text{and} \quad t_G = \sum_{g=1}^{G} \frac{\eta_{G_g}}{\sum_{g=1}^{G} \eta_{G_g}}\, t_{G_g}. \tag{8.1}$$
If the composition of an individual household's spending within a commodity group deviates from the population-wide composition, this aggregation procedure will cause estimated tax liabilities to deviate from the actual tax liabilities of the household. Moreover, if the taxes on the different goods within these commodity groups change differentially as part of a reform, one has to work out the effect on the average post-reform price and tax rate. In general, two methods can be used. The first is to assume that population-wide expenditure shares within the commodity group, $\eta_{G_g} / \sum_{g=1}^{G} \eta_{G_g}$, remain fixed, which is akin to the 'fixed shares' assumption in modelling consumption behaviour (see the next section). The second is to keep relative quantities fixed. This latter approach is most similar to what is done in basic 'fixed quantities' micro-simulation models (also discussed in the next section).
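The aggregation in Eq. (8.1), and the two ways of updating a group-level rate after a reform, can be sketched in a few lines. This is a minimal illustration with made-up spending figures and rates, not data from the chapter:

```python
# Eq. (8.1): the tax rate of a commodity aggregate is the expenditure-share-
# weighted average of the component rates. Numbers below are illustrative.

def aggregate_rate(expenditures, rates):
    """Weighted average of component tax rates; weights are population-wide
    expenditure shares within the group (Eq. (8.1))."""
    total = sum(expenditures)
    return sum(e / total * t for e, t in zip(expenditures, rates))

# Population-wide spending on three alcohol products grouped together, each
# facing a different duty (expressed here as an ad valorem equivalent).
spending = [400.0, 250.0, 350.0]   # beer, wine, spirits (hypothetical)
rates_0 = [0.10, 0.15, 0.30]       # baseline rates
rates_1 = [0.10, 0.20, 0.30]       # reform: only the wine rate changes

t_group_0 = aggregate_rate(spending, rates_0)

# Post-reform group rate under 'fixed shares': expenditure shares within the
# group are held at their baseline values.
t_group_1_fixed_shares = aggregate_rate(spending, rates_1)

# Under 'fixed quantities', relative quantities are held fixed instead, so
# within-group expenditures scale with consumer prices before reweighting.
spending_1 = [e * (1 + t1) / (1 + t0)
              for e, t0, t1 in zip(spending, rates_0, rates_1)]
t_group_1_fixed_q = aggregate_rate(spending_1, rates_1)

print(t_group_0, t_group_1_fixed_shares, t_group_1_fixed_q)
```

The two post-reform group rates differ slightly because the fixed-quantities variant gives more weight to the good whose price rose.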
8.3.2. The core modelling assumptions: what changes in response to a reform?
The four modelling options we consider in this section all rely on one key assumption: producer (pre-tax) prices are fixed. This assumption is made in nearly all existing indirect tax micro-simulation models and implies that indirect taxes are fully passed through to the price paid by the final consumer, significantly simplifying the modelling of tax changes. In the conclusion, we briefly discuss two ways to relax this assumption. In each of the four options considered below, labour supply (and thus earned income) is also considered to be fixed.

Before setting out the different modelling options available once these assumptions are made, it is worth setting out some simple arithmetic that will be used to show the links between the expenditure recorded in the household survey, tax payments and tax revenue, and the estimated welfare effects of reforms. Assume there are $n$ commodities, indexed by $j \in \{1, 2, \ldots, n\}$. Quantities consumed by an individual (household) $h$ are denoted by $x^h = (x^h_1, x^h_2, \ldots, x^h_n)$. Because of the assumption of fixed producer prices, for goods subject to ad valorem taxes only, we can think of these quantities in monetary terms (the physical quantity of a good does not actually matter for tax calculations). Excise duties, on the other hand, are charged per unit of a good rather than as a percentage of expenditure, meaning that physical quantities do matter. In the technical appendix we show how, generally, excise duties can be translated into ad valorem rates. Thus, for ease of exposition, we consider only ad valorem taxes in this section (the same four models are applicable to goods subject to excise duties which cannot be converted to ad valorem rates, but the arithmetic involved in calculating the effect of tax changes on consumer prices differs from that presented here). Consumer prices of commodities are denoted by $q = (q_1, q_2, \ldots, q_n)$.
Because of the assumption of fixed producer prices, we can normalise producer prices to one, and thus for all commodities $j$ we have $q_j = 1 + t_j$, where $t_j$ is the indirect tax rate on commodity $j$. Finally, disposable income or total expenditure of a household $h$ is denoted by $y^h$, and is considered to be fixed. So, the household's budget equation reads as:

$$q' x^h \equiv \sum_{j=1}^{n} (1 + t_j) x^h_j \leq y^h. \tag{8.2}$$

A household's indirect tax liabilities are thus equal to:

$$T(x^h) = \sum_{j=1}^{n} t_j x^h_j, \tag{8.3}$$

and, assuming there are $H$ households in the economy, government revenues from indirect taxation, say $R$, are equal to:

$$R = \sum_{h=1}^{H} \sum_{j=1}^{n} t_j x^h_j. \tag{8.4}$$
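The arithmetic of Eqs. (8.2)-(8.4) reduces to sums over commodities and households. A minimal sketch, with purely illustrative quantities and rates:

```python
# Eqs. (8.2)-(8.4): budget, household tax liability, and total revenue, with
# producer prices normalised to one. All data below are hypothetical.

tax_rates = [0.21, 0.06, 0.0]        # t_j for n = 3 commodities

# Quantities x^h_j for H = 2 households (monetary units at producer prices).
households = [
    [100.0, 50.0, 30.0],
    [200.0, 80.0, 120.0],
]

def tax_liability(x, t):
    """T(x^h) = sum_j t_j * x^h_j  (Eq. (8.3))."""
    return sum(tj * xj for tj, xj in zip(t, x))

def total_expenditure(x, t):
    """q'x^h = sum_j (1 + t_j) * x^h_j  (Eq. (8.2))."""
    return sum((1 + tj) * xj for tj, xj in zip(t, x))

# Eq. (8.4): revenue is the sum of liabilities across households.
revenue = sum(tax_liability(x, tax_rates) for x in households)
```

For the first household this gives a liability of 24.0 and total expenditure of 204.0, and revenue sums the two households' liabilities.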
Consider now an indirect tax reform, where taxes are changed from the baseline system $t_0 \equiv (t_{0,1}, t_{0,2}, \ldots, t_{0,n})$ to a new reformed system $t_1 \equiv (t_{1,1}, t_{1,2}, \ldots, t_{1,n})$, with corresponding consumer price vectors $q_0 \equiv (q_{0,1}, q_{0,2}, \ldots, q_{0,n})$ and $q_1 \equiv (q_{1,1}, q_{1,2}, \ldots, q_{1,n})$. The indirect tax model then has to calculate the effect of the reform on household tax liabilities, government revenues, a measure of consumer welfare or spending power and, in some cases, spending patterns. The four methods of doing this each involve different assumptions about what is held fixed and what can vary:

• the vector of quantities $x^h_0 = (x^h_{0,1}, x^h_{0,2}, \ldots, x^h_{0,n})$ is held fixed for all households $h$;
• total expenditure $y^h$ and expenditure shares $s^h_0 = (s^h_{0,1}, s^h_{0,2}, \ldots, s^h_{0,n})$ (defined below) are held fixed for all households $h$;
• total expenditure $y^h$ is held fixed and expenditure shares (and hence quantities) adjust for each household $h$ according to an estimated Engel curve (i.e. only the income effect of the price change is taken into account);
• total expenditure $y^h$ is held fixed and expenditure shares (and hence quantities) for each household $h$ adjust according to an estimated complete demand system in which both the real income effect and the relative price effects are taken into account.

We discuss each of these four possibilities in turn.

8.3.2.1. Fixed quantities
The first and perhaps simplest method is to hold quantities in the reform system fixed at the initial quantities $x^h_0$. Note first that, because quantities are fixed but tax rates, and thus consumer prices, are changing, total expenditure under the reform system ($y^h_1$) cannot be equal to total expenditure in the baseline system ($y^h_0$):

$$y^h_0 \equiv \sum_{j=1}^{n} x^h_{0,j}(1 + t_{0,j}) \neq \sum_{j=1}^{n} x^h_{0,j}(1 + t_{1,j}) \equiv y^h_1. \tag{8.5}$$
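Eq. (8.5) can be checked numerically; with quantities fixed, the gap between pre- and post-reform expenditure is exactly the change in tax payments, $\sum_j x^h_{0,j}(t_{1,j} - t_{0,j})$. A sketch with made-up data:

```python
# Numerical check of Eq. (8.5) under fixed quantities, with illustrative data.

x0 = [100.0, 50.0, 30.0]     # baseline quantities (hypothetical)
t0 = [0.21, 0.06, 0.00]      # baseline tax rates
t1 = [0.21, 0.12, 0.00]      # reform raises the second rate

y0 = sum(x * (1 + t) for x, t in zip(x0, t0))   # baseline expenditure
y1 = sum(x * (1 + t) for x, t in zip(x0, t1))   # post-reform expenditure
delta_T = sum(x * (b - a) for x, a, b in zip(x0, t0, t1))

assert y0 != y1                          # Eq. (8.5): expenditures differ
assert abs((y1 - y0) - delta_T) < 1e-9   # the gap equals the tax change
```

Here the difference (3.0) is absorbed by net savings, as the text below explains.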
This is usually motivated by assuming that the difference between pre- and post-reform expenditures is absorbed by net savings (or a change in other expenditures not included in the analysis). In a purely static case, this may be justifiable. However, net savings will ultimately be spent and thus, in the long run, changes in net savings will result in changes in tax payments (and welfare) in the future. This means estimates of the revenue effects of a reform obtained from fixed-quantities models are not consistent with households' inter-temporal budget constraints and the assumption of fixed labour supply and earnings.

If we hold quantities fixed, then the change in taxes paid by household $h$ is calculated as:

$$\Delta T^h(t_0, t_1) = \sum_{j=1}^{n} x^h_{0,j}(t_{1,j} - t_{0,j}), \tag{8.6}$$

where $t_1$ is the new set of tax rates. The effect on government revenues is then calculated by summing the change in tax payments over households.

In this approach the difference between pre- and post-reform tax payments (Eq. (8.6), which is identical to the difference between pre- and post-reform expenditure) usually serves as a proxy measure of the individual welfare change of the reform. In the technical appendix we show that, if we assume that the initial quantities were chosen optimally given the initial tax system, the change in tax payments in Eq. (8.6) is a first-order approximation of the welfare effect of the reform (Bourguignon & Spadaro, 2006). In particular, it provides an upper bound to the 'compensating variation', that is, the monetary compensation you would need to give to (or take away from) a household in order to 'compensate' it for the reform (i.e. to make it as well off as it was pre-reform). This is because, when compensated, the household would only choose a set of quantities different from the original quantities if doing so allowed it to enjoy higher welfare (and thus giving the household enough to buy the original quantities would be at worst full compensation and maybe 'overcompensation').

If the initial quantities were not optimal, then the change in tax payments does not provide a first-order approximation to the welfare effect. This means, strictly speaking, the method is only valid when the baseline system is the same system that applied when the household data were collected. In practice, however, the method is used when this is not the case, with ad hoc adjustments to account for the change in tax rates, price levels and income levels between the time the data were collected and the base period for the analysis. In these circumstances, the change in tax
payments with quantities fixed does not have a straightforward welfare interpretation: it simply provides an estimate of the change in tax payments holding quantities fixed (although this may also be interesting in its own right and is unlikely to be too bad an approximation).

8.3.2.2. Fixed expenditure shares
Under this approach, total expenditure is kept fixed at $y^h_0$ and quantities are adjusted so that the initial share of total expenditure on each good, $s^h_{0,j} \equiv q_{0,j} x^h_{0,j} / y^h_0$, is held constant. Since $q_{0,j} = 1 + t_{0,j}$, this means:

$$\frac{(1 + t_{0,j}) x^h_{0,j}}{y^h_0} \equiv s^h_{0,j} = s^h_{1,j} \equiv \frac{(1 + t_{1,j}) x^h_{1,j}}{y^h_0}, \tag{8.7}$$

for all $j$, and the new quantities of each good $j$ can be calculated as:

$$x^h_{1,j} = \frac{1 + t_{0,j}}{1 + t_{1,j}}\, x^h_{0,j}. \tag{8.8}$$

Then the change in tax payments by a household can be calculated as:

$$\Delta T^h(t_0, t_1) = \sum_{j=1}^{n} (x^h_{1,j} t_{1,j} - x^h_{0,j} t_{0,j}), \tag{8.9}$$
and summed over households to obtain the change in overall revenues. By construction, $y^h_0 = y^h_1 \equiv y^h$, meaning that the household's budget constraint is satisfied in this model, and revenue estimates are consistent with both within-period and inter-temporal budget constraints. However, the change in tax payments in Eq. (8.9) no longer represents an upper bound to the compensating variation measure of the welfare effect of a reform. Even stronger, Eq. (8.9) has no welfare interpretation at all. Indeed, it is possible that a bundle which is preferred to the bundle chosen in the baseline, but which was not affordable at pre-reform prices, becomes available in the post-reform system. The taxes paid on this preferred bundle might exceed the taxes paid on the bundle chosen in the baseline: the change in taxes paid would therefore suggest a negative welfare effect, when in fact the reform had a positive welfare effect.8
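The fixed-shares adjustment of Eq. (8.8) and the tax change of Eq. (8.9) can be sketched with the same illustrative data used throughout this chapter's examples (the figures are hypothetical):

```python
# Fixed expenditure shares: Eq. (8.8) rescales quantities so that spending on
# each good, and hence total expenditure, is unchanged. Data are illustrative.

x0 = [100.0, 50.0, 30.0]
t0 = [0.21, 0.06, 0.00]
t1 = [0.21, 0.12, 0.00]

# Eq. (8.8): quantities shrink where the tax (and so the consumer price) rises.
x1 = [x * (1 + a) / (1 + b) for x, a, b in zip(x0, t0, t1)]

y0 = sum(x * (1 + t) for x, t in zip(x0, t0))
y1 = sum(x * (1 + t) for x, t in zip(x1, t1))
assert abs(y0 - y1) < 1e-9    # budget constraint holds by construction

# Eq. (8.9): change in tax payments.
delta_T = sum(xb * b - xa * a for xa, xb, a, b in zip(x0, x1, t0, t1))
```

Unlike the fixed-quantities case, total expenditure is unchanged, and the whole revenue effect comes from the rescaled quantities.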
8 For example, suppose there is a two-good economy with $t_0 = (0.2, 0.1)$, and a consumer who chooses to allocate her budget of $y^h \equiv 11.2$ such that $x^h_0 = (2, 8)$. Taxes paid in this initial situation amount to 1.2. Let there then be a reform such that $t_1 = (0.12, 0.25)$, and let the newly chosen bundle be $x^h_1 = (8, 1.7)$. This bundle is unaffordable at the old tariff (8 × 1.2 + 1.7 × 1.1 = 11.47 > 11.2), but affordable at the new tariff (8 × 1.12 + 1.7 × 1.25 = 11.085 < 11.2). It also delivers higher tax revenues (8 × 0.12 + 1.7 × 0.25 = 1.385 > 1.2), and by revealed preference (i.e. because it was not affordable at the old prices), might easily be preferred to $x^h_0 = (2, 8)$ by that person.
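The numerical example in footnote 8 can be verified step by step; every figure below is taken from the footnote itself:

```python
# Footnote 8, checked: a post-reform bundle that was unaffordable pre-reform
# can yield higher tax payments yet leave the consumer better off.

t0 = (0.20, 0.10)
t1 = (0.12, 0.25)
x0 = (2.0, 8.0)      # baseline bundle, exhausting the budget y = 11.2
x1 = (8.0, 1.7)      # bundle chosen after the reform

def cost(x, t):
    return sum(q * (1 + r) for q, r in zip(x, t))

def tax(x, t):
    return sum(q * r for q, r in zip(x, t))

assert abs(cost(x0, t0) - 11.2) < 1e-9   # baseline budget exhausted
assert abs(tax(x0, t0) - 1.2) < 1e-9     # baseline taxes paid
assert cost(x1, t0) > 11.2               # new bundle unaffordable pre-reform
assert cost(x1, t1) < 11.2               # ...but affordable post-reform
assert tax(x1, t1) > tax(x0, t0)         # yet it yields more tax revenue
```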
However, since we have calculated new quantities in Eq. (8.8), we can calculate a welfare measure similar to the one in Eq. (8.6), but where we now evaluate the tax-induced price change by means of the post-reform quantities $x^h_1$:

$$\Delta T^h(t_0, t_1) = \sum_{j=1}^{n} x^h_{1,j}(t_{1,j} - t_{0,j}). \tag{8.10}$$
In the appendix we show how the change in tax liabilities in Eq. (8.10) can be interpreted as a lower bound of a second well-known money-metric measure of welfare change: the equivalent variation. This welfare measure of a price change measures the amount the household would be willing to pay (or would have to be paid) to forego the reform. If one is willing to assume that the new quantities, obtained from the 'fixed shares' assumption, are optimal quantities in the baseline price regime, then Eq. (8.10) provides us with an exact measure of the equivalent variation of the price change.

More generally, since the assumption of fixed shares amounts to assuming that preferences are Cobb-Douglas, one can use the expenditure function associated with the Cobb-Douglas utility function to calculate exact welfare effects of a reform (instead of relying on first-order approximations and/or lower or upper bounds of equivalent or compensating variations). The mathematics of this is spelled out in the technical appendix to this chapter. Notice that with preferences that imply fixed expenditure shares, it need not be the case that the baseline tax system is the same tax system that was in place when the data were collected, because spending patterns always remain the same. Unfortunately, the restrictive nature of Cobb-Douglas preferences is not supported by empirical evidence on consumer responses to price and income changes. This means more flexible approaches can be worthwhile. It is also possible to combine fixed shares for the estimation of revenues with fixed quantities for the welfare and distributional analysis, so that first-order approximations of the compensating variation are used for these parts of the analysis.

8.3.2.3. Using estimated Engel curves
A full model of consumer behaviour would explain observed household demand as a function of prices, $q = p + t$, disposable income (or total expenditure), $y^h$, and other household characteristics, say $z^h$.
Denote such a demand function as $d^h(q, y^h; z^h) = x^h$. Household expenditure survey data generally do not contain prices and, being cross-sections, any price variation is likely to be limited in any case: this means assessing the price effects in the demand functions is not usually possible using household expenditure survey data alone.
But these surveys contain a lot of variation in household disposable incomes (total expenditure) and a rich set of other household characteristics. So, the effect of disposable income on commodity demands can be estimated using econometric techniques. Such a relation between demand and disposable income is called an Engel curve. We denote it as follows:

$$x^h = f^h\left(\frac{y^h}{P^h(q)}, z^h; \varepsilon^h\right), \tag{8.11}$$

where $y^h / P^h(q)$ denotes real income or total expenditure, $P^h(q)$ is a price index satisfying $P^h(\underbrace{q, \ldots, q}_{n\ \text{times}}) = q$ for all $q \geq 0$, and $\varepsilon^h$ is a set of $n$ random disturbance terms reflecting unobserved preference heterogeneity or deviations from the model. Normalising the baseline such that all consumer prices are equal to one, one can estimate Engel curves on a cross-section without price variation, which establishes a relation between (nominal) income at the period of observation and demand. The baseline quantities according to this estimated Engel curve, say $\hat{x}^h_0$, are then equal to:

$$\hat{x}^h_0 = f^h\left(\frac{y^h}{P^h(q_0)}, z^h; e^h\right) = f^h(y^h, z^h; e^h), \tag{8.12}$$

where $e^h$ is a particular realisation of the random terms $\varepsilon^h$. The last equality follows from the normalisation such that $q_0 = (1, \ldots, 1)$ and $P^h(1, \ldots, 1) = 1$. When the functional form of the system of Engel curves satisfies the adding-up constraints (which means that total expenditure always equals disposable income), and with total expenditure fixed at $y^h$, the estimated quantities $\hat{x}^h$ satisfy:

$$\sum_{j=1}^{n} (1 + t_{0,j})\, \hat{x}^h_{0,j} = y^h = \sum_{j=1}^{n} (1 + t_{1,j})\, \hat{x}^h_{1,j}, \tag{8.13}$$
both at baseline and post-reform prices.

To simulate the post-reform quantities $\hat{x}^h_1$, an assumption has to be made about the price index function (since it could not be estimated from a cross-section with fixed prices). One possibility is to use a Divisia price index:

$$P^h(q) = \prod_{j=1}^{n} q_j^{s^h_j}, \tag{8.14}$$

where $s^h_j$ is the budget share of expenditures on $j$ by $h$. Then the post-reform demand can be estimated by:

$$\hat{x}^h_1 = f^h\left(\frac{y^h}{P^h(q_1)}, z^h; e^h\right), \tag{8.15}$$
where Eq. (8.14) is used to calculate the price index at the new prices $q_1$. Note, however, that a problem arises here, because the post-reform shares $s^h_1$ needed in Eq. (8.15) are not known. In the appendix we show how a first-order Taylor expansion of $y^h / P^h(q_1)$ around the initial prices $q_0$ can be used to approximate the new real income as follows:

$$\frac{y^h}{P^h(q_1)} \approx y^h \left(1 - \sum_{j=1}^{n} s^h_{0,j}\, \mathrm{d} \ln q_{0,j}\right), \tag{8.16}$$

in which the second term between brackets is simply a weighted average of the percentage changes of the prices of the individual commodities. Effects on tax payments can then be calculated as under the fixed expenditure share approach in Eq. (8.9):

$$\Delta T^h(t_0, t_1) = \sum_{j=1}^{n} (x^h_{1,j} t_{1,j} - x^h_{0,j} t_{0,j}), \tag{8.9}$$
and revenue effects are obtained by summing these changes across households. As with the fixed-shares models, the changes in tax payments cannot be used as welfare proxies, but approximations to compensating and equivalent variations can again be constructed.

One additional remark is worth making, relating to the fact that in this approach one has an explicit model (Eq. (8.11)) which explains observed expenditure patterns. This allows one to compare the simulated post-reform expenditure patterns with the simulated baseline expenditure patterns, instead of with the actually observed baseline expenditures. Actual and simulated pre-reform patterns will differ, in general, by the prediction error in the model:

$$\tilde{e}^h_j \equiv x^h_{0,j} - \hat{x}^h_{0,j}, \quad \forall j, h. \tag{8.17}$$
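The real-income approximation of Eq. (8.16) is easy to check against the exact Divisia index of Eq. (8.14) evaluated at baseline shares. A minimal sketch, with illustrative shares and prices:

```python
# Eq. (8.16): a first-order Taylor expansion replaces the (unknown) post-reform
# Divisia index with a share-weighted average of log price changes.
# Shares, prices and income below are illustrative.

import math

s0 = [0.5, 0.3, 0.2]        # baseline budget shares
q0 = [1.00, 1.00, 1.00]     # baseline consumer prices (normalised to one)
q1 = [1.05, 1.00, 0.98]     # post-reform consumer prices
y = 1000.0                  # total expenditure, held fixed

dlnq = [math.log(b / a) for a, b in zip(q0, q1)]

# Eq. (8.16): y / P(q1) ~ y * (1 - sum_j s0_j * dln q_j)
real_income_approx = y * (1 - sum(s * d for s, d in zip(s0, dlnq)))

# For comparison: the exact Divisia index (Eq. (8.14)) at baseline shares.
P1 = math.prod(q ** s for q, s in zip(q1, s0))
real_income_exact = y / P1
```

For small price changes the two values are close, which is what makes the first-order approximation workable in practice.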
The possibility of comparing the reform situation with the simulated baseline allows one to isolate the effect of the tax reform as such from the prediction error of the model. Note, of course, that working with predicted expenditures also opens the possibility that, depending on the model used, predicted shares may be negative for some goods for some households. Alternatively, one could add the error term in Eq. (8.17) to the predicted pre- and post-reform expenditure shares or quantities. In Section 8.4 we will sketch how indirect taxes have been integrated into the tax-benefit model EUROMOD, along the lines explained in this subsection.

8.3.2.4. Using estimated demand systems
If, in addition to information on expenditures, one has information on prices (from outside of the expenditure survey), and variation in those prices, one can estimate a full demand system, that is, the set of equations
$d^h(q, y^h; z^h)$ referred to in the previous subsection, and incorporate this in the micro-simulation model.9 There are a variety of different types of models that can be estimated, with popular examples including the AIDS (Deaton & Muellbauer, 1980b) and QUAIDS (Banks, Blundell, & Lewbel, 1997) models.10 We refer to the appendix for a brief description of the QUAIDS model. Neither of these models imposes that demands must be non-negative, although in practice, by grouping commodities together so that there are few observations with zero expenditures for a category, this problem can be minimised. However, it is possible to impose non-negativity constraints on such models, as is done in Golan, Perloff, and Shen (2001) for an AIDS model of meat demand in Mexico.

As in the case of Engel curve estimation, baseline spending patterns should be simulated to avoid ascribing deviations from the model to effects of the tax reform. Changes in tax payments can then be calculated by backing out pre- and post-reform quantities from the expenditure shares and multiplying by the appropriate tax rates, and revenue effects by summing these changes across households. Welfare effects can again be estimated using the approximations to compensating and equivalent variation discussed above. However, if the demand system one has estimated is integrable (that is, if it has a well-behaved indirect utility and expenditure function underlying it), one can use the structure of the model to obtain the welfare effects of the reform. The technical appendix shows how both the compensating and equivalent variation measures can be calculated using a QUAIDS model. Note that these exact welfare calculations are consistent with the predicted spending patterns under the model rather than the actual observed spending patterns.
The estimation of a demand system has real benefits: one can estimate and incorporate the effects of changes in relative prices and real incomes on spending patterns, which then affect the revenue and welfare
9 As explained in Deaton and Muellbauer (1980a, pp. 138-140), there are possibilities to 'estimate' price elasticities with almost no relative price variation in the data. This is obtained by assuming additive separability of preferences, which imposes a particular structure on the substitution matrix, leading to an 'approximate proportionality of expenditure and price elasticities'. In that case, knowledge of one price elasticity is sufficient to allow cross-section household budget data to be used to 'measure' price elasticities. The measurement is, of course, largely by assumption. However, additivity is strongly rejected in most econometric tests of this restriction on preferences.
10 AIDS stands for Almost Ideal Demand System, and QUAIDS for Quadratic Almost Ideal Demand System. Both demand systems allow one to impose the adding-up, homogeneity and symmetry restrictions stemming from consumer theory or preference maximisation. They are 'almost ideal' in that the fourth restriction, negative semi-definiteness of the Slutsky matrix, cannot be imposed.
estimates obtained in the model. In principle, incorporating this behavioural response improves the accuracy of a model. A micro-simulation model incorporating an integrable demand system can also be used to investigate the economic inefficiency resulting from distortions to relative prices when there are different tax rates on different goods (in these standard demand systems, assuming that one can redistribute via the direct tax and benefit system, it is optimal to have uniform commodity tax rates). MEXTAX, described in Section 8.4, has been used to do just this.

However, in order to utilise such demand systems, trade-offs have to be made. First, because analysis using demand systems means revenue and welfare estimates are based on predicted as opposed to actual spending patterns, results for individual households and, indeed, groups of households will be less accurate. The inaccuracies this introduces into distributional analysis may be larger than the gains from accounting for changes in demand patterns (Abramovsky, Attanasio, & Phillips, 2012). One way to get 'the best of both worlds' is to estimate the true welfare effects given the predicted shares from the demand model, and add on the first-order approximation (fixed quantities) estimate of the welfare effect associated with the effect of the tax change on the error term $\tilde{e}^h$. For a change in the tax rate on good $j$, this means calculating the exact compensating variation using the expenditure function associated with the demand system, and then adding on the following term:

$$\tilde{e}^h_j (t_{1,j} - t_{0,j}) \equiv (x^h_{0,j,\text{actual}} - x^h_{0,j,\text{predicted}})(t_{1,j} - t_{0,j}). \tag{8.18}$$

Another issue that may result in inaccuracies is that when estimating demand systems (or Engel curves, for that matter) it is often necessary to group commodities together to ensure that the demand system can be feasibly estimated (too many goods means too many parameters to estimate, and a greater chance of zero spending on particular goods, which, as mentioned already, can cause problems). The aggregation into commodity groups could involve putting together commodities that face different tax rates: different types of alcohol or tobacco subject to different duties may be grouped into a single 'vices' category, for instance. When doing this, one must calculate the average tax rate for the commodity group as a whole. The issues involved in doing this were discussed above.

8.3.3. Welfare and distributional analysis
As discussed in Section 8.2, one of the main uses of indirect tax micro-simulation models (and, indeed, tax micro-simulation models more generally) is to analyse the impact of indirect taxes and reforms to indirect taxes on the distribution of consumer welfare and purchasing power. This
reflects general interest in whether existing tax structures and tax reforms are progressive or regressive, and more broadly, in the extent to which different groups of people are affected by indirect taxes and potential or actual reforms to these taxes. Thus, a key question is how to order the population of interest, or partition it into groups, so that the differential impact of indirect tax reform can be explored. 8.3.3.1. Ranking and grouping households The most common approach is to rank households by a proxy of their pre-reform welfare and analyse how gains or losses from a reform vary across the distribution of this welfare proxy. The welfare proxy chosen is typically either the household’s income or consumption, which in the latter case, is typically proxied itself by a measure of the household’s expenditure.11 Which is preferable? The issue at stake here is whether a household’s position in the income distribution or expenditure distribution gives a better indication of where they rank in the distribution of welfare or more simply, which gives a better indication of whether a household is ‘rich’ or ‘poor’. In order to assess this, one must first understand that household surveys generally pick up a ‘snapshot’ measure of income or expenditure (e.g. income in the last month, or spending on different types of items in periods ranging from one week to one year). But such a short-term measure might not accurately reflect the living standards of the household in either the short or long run. For instance, households with low incomes may be able to use borrowings, savings or previously purchased durable goods to maintain their living standards, at least in the short run. Many economists have argued that households should be ranked by their consumption as this takes account of such ‘smoothing’ of income shocks (Meyer & Sullivan, 2003, 2004, 2008, 2011; Poterba, 1989). 
The argument for using consumption is particularly persuasive if we believe households smooth their consumption over long periods of time and we are concerned with the long-term distributional impact of a policy change.
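To make the distinction concrete, here is a minimal sketch (in Python, with purely hypothetical figures) of how the two welfare proxies can order the same households differently:

```python
# Hypothetical snapshot data: a household smoothing consumption out of
# savings reports low income but mid-range spending, so the two common
# welfare proxies disagree about where it sits in the distribution.
households = [
    {"id": 1, "income": 100, "expenditure": 260},  # dissaving (e.g. retired)
    {"id": 2, "income": 300, "expenditure": 250},
    {"id": 3, "income": 500, "expenditure": 320},  # saving
]

def rank_by(hh, proxy):
    """Household ids ordered from 'poorest' to 'richest' by the given proxy."""
    return [h["id"] for h in sorted(hh, key=lambda h: h[proxy])]

income_order = rank_by(households, "income")            # [1, 2, 3]
expenditure_order = rank_by(households, "expenditure")  # [2, 1, 3]
```

Household 1 is the poorest by income but not by expenditure; which ordering is the better welfare ranking is exactly the question at issue.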
11. These measures are then typically 'equivalised' to account for the fact that the level of welfare obtained by household members for a given income or level of spending will differ depending on the household's size and structure (e.g. the number of children). There is a substantial literature on equivalence scales which finds that the types of equivalence scale traditionally used in micro-simulation models (and in analysis of poverty and inequality) have substantial weaknesses (Balli, 2012). However, estimating more consistent equivalence scales is very challenging, and if it is not deemed feasible by the modeller, it is better to use a standard scale, such as the OECD-modified scale, than none at all.
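The OECD-modified scale mentioned in the footnote can be sketched as follows (Python; the weights 1.0, 0.5 and 0.3 are the standard published values of that scale, while the function names are our own):

```python
def oecd_modified_scale(n_adults, n_children):
    """OECD-modified equivalence scale: 1.0 for the first adult, 0.5 for
    each further household member aged 14 and over, 0.3 for each child
    under 14."""
    if n_adults < 1:
        raise ValueError("a household needs at least one adult member")
    return 1.0 + 0.5 * (n_adults - 1) + 0.3 * n_children

def equivalise(amount, n_adults, n_children):
    """Convert household income or spending to single-adult equivalents."""
    return amount / oecd_modified_scale(n_adults, n_children)

# A couple with two young children has scale 1.0 + 0.5 + 2 * 0.3 = 2.1,
# so 2100 of spending buys roughly the living standard that 1000 buys
# for a single adult.
```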
Consumption and Indirect Tax Models
This might suggest a preference for using expenditure to rank households. But expenditure is not the same as consumption: expenditure captures the purchase costs of durable goods like cars, whereas consumption captures the flow of benefits from these goods. Like income, expenditure may be volatile, with households purchasing certain items infrequently, especially larger durable goods such as motor vehicles or new kitchens (but also food, if they bulk-purchase). Excluding durable goods from the measure of expenditure removes much of this problem but introduces a new one: households may be ranked incorrectly if they devote different proportions of their budgets to durable goods. It is therefore not clear whether expenditure represents a better measure of a household's living standards than income: both are volatile, and furthermore, both suffer significant measurement error in surveys. For this reason it is often worthwhile conducting analysis ranking households both according to their position in the income distribution and according to their position in the expenditure distribution.

But one may also want to group households according to characteristics other than their income or expenditure. One may be concerned that a tax reform hits some ethnic groups harder than others, affects people of different ages differently, or bears differently on households with different numbers of child and adult members (such as single parents vs. couples without children). One can also address a frequently posed policy question (who are the gainers and losers of a tax reform?) by reversing the analysis: ranking households by gains and losses and examining the characteristics of households at different parts of the distribution. To do this, one must first choose how to measure the gains or losses (in cash terms, or as a percentage of expenditure, for instance) and then rank households from biggest loser to biggest gainer according to this measure.
Then, one can analyse the sex, age, total expenditure or other socioeconomic characteristics of those households at different parts of the distribution of gains and losses. This method allows one to examine the heterogeneous impact of reforms on households with similar characteristics. For instance, it may be the case that poorer households gain from a reform, on average, but that among this group a majority gain a little while a minority suffer substantial losses (or even that a majority lose but a small minority gain substantially, outweighing those losses on average). In this respect, such analysis serves a similar purpose to examining quantiles of effects across the distribution of expenditure or by demographic group, rather than simply the mean effect. It also has the benefit of, in principle, requiring a lower degree of interpersonal comparability of welfare than analysis of average gains and losses: one needs only a ranking of gains and losses and not a quantification of their size (i.e. it requires an ordinal rather than a cardinal measure of gains/losses). Indeed, it is possible to examine the characteristics of
'gainers' and 'losers' without attempting to quantify relative gains or losses. Within the group of losers or gainers, households can be ranked or subdivided by size of loss or gain.
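The reversed analysis described above can be sketched as follows (Python; the household records are hypothetical, and in practice the gain variable would come from the micro-simulation model itself):

```python
# Hypothetical per-household reform effects in cash terms (gain > 0, loss < 0).
households = [
    {"id": 1, "gain": -12.0, "expenditure": 150.0, "n_children": 2},
    {"id": 2, "gain":   4.0, "expenditure": 160.0, "n_children": 0},
    {"id": 3, "gain":  30.0, "expenditure": 600.0, "n_children": 1},
    {"id": 4, "gain":  -2.0, "expenditure": 400.0, "n_children": 0},
]

# Rank from biggest loser to biggest gainer; only the ordering is needed,
# so an ordinal measure of gains/losses suffices.
ranked = sorted(households, key=lambda h: h["gain"])

# Subdivide into losers and gainers and characterise each group.
losers = [h for h in ranked if h["gain"] < 0]
gainers = [h for h in ranked if h["gain"] >= 0]
mean_children_losers = sum(h["n_children"] for h in losers) / len(losers)
mean_spend_gainers = sum(h["expenditure"] for h in gainers) / len(gainers)
```

The same ranking can then be cut by any other characteristic (sex, age, region) to describe who gains and who loses.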
8.3.3.2. Assessing progressivity of a reform or system

More typically, however, gains and losses are quantified, and which metric(s) to use to do so is an important methodological choice in any indirect tax micro-simulation modelling exercise. If one is willing to assume a particular functional form for the relationship between spending power and individual welfare and/or social welfare, one can use that to translate the cash-terms gains and losses into changes in welfare for households, and aggregate these using social-welfare weights to calculate the change in social welfare. Generally, however, researchers do not know and are not willing to assume this relationship. Instead, applied indirect tax micro-simulation modellers generally examine how the cash or proportional-terms gains/losses vary across household groups, such as income or expenditure decile groups, and/or use summary metrics computed on these gains/losses, such as Gini coefficients, concentration curves, concentration coefficients or Kakwani indices, to determine whether reforms are 'regressive' or 'progressive'.

Calculating and presenting the gains and losses resulting from indirect tax reforms in both cash terms and relative to pre-reform expenditure is usually worthwhile, and allows for a more nuanced and policy-relevant analysis. For instance, zero or reduced rates of VAT on food and other necessities result in gains that are largest as a proportion of household expenditure for low-income and low-expenditure households, due to the high fraction of total spending that they devote to such items. However, high-income and high-expenditure households generally spend the most in absolute terms on food and other necessities, and therefore gain most in absolute cash terms from zero and reduced rates.
Presenting results on both a cash- and a proportional-terms basis therefore allows one to see that while such policies are progressive in relative terms, they may not be a particularly targeted way to redistribute resources to poorer households (see Abramovsky, Attanasio, Emmerson, & Phillips, 2011a, 2011b; and IFS et al., 2011 for such arguments).

But a percentage of what? While it is not clear whether expenditure or income should be used to rank households from rich to poor, careful consideration shows that gains or losses due to indirect tax reforms are best expressed as a fraction of spending rather than income, because this provides a better understanding of the long-term impact of indirect tax changes. The reasoning behind this statement is best explained using an example. Consider the case of a uniform VAT on all goods and services. Over a lifetime, if lifetime income and lifetime expenditure are equal,12 this can
clearly be seen as distributionally neutral as generally defined: as it is imposed on all goods and services at the same rate, it has the same proportional effect on the purchasing power (although not necessarily the welfare) of rich and poor households. VAT payments under such a system would be the same fraction of both lifetime income and lifetime expenditure for rich and poor households.

But suppose, as in reality, we only have information on current income and spending. If VAT payments are presented as a fraction of current expenditure, this distributionally neutral pattern of payments would be found. However, because households with low current income tend to spend more than their income, and those with high current income tend to spend less, showing payments as a fraction of net income will make the uniform VAT look regressive if households are defined as rich or poor based on their current income. On the other hand, if households are defined as rich or poor based on their current expenditure, because households with the lowest spending tend to report incomes that are higher than their spending, and those with high spending tend to report incomes that are lower than their spending, showing VAT payments as a fraction of net income will make the uniform VAT look progressive. That is, a distributionally neutral uniform VAT can be misleadingly labelled progressive or regressive if VAT payments are expressed as a proportion of net income. For this reason, analysis showing VAT payments as a proportion of household expenditure should be considered more informative. A similar argument can be used to show that direct tax payments should be expressed as a proportion of household income.
The argument that showing VAT payments as a fraction of income may give a misleading impression of the lifetime distributional impact of VAT is driven by the potential for households to borrow and save, but it does not rely on households being able to borrow freely or having large amounts of savings to draw down. Neither does it rely on consumers being rational and forward-looking, or engaging in optimal consumption smoothing.
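A minimal numerical sketch of this point (Python; the figures are purely illustrative, and VAT is assumed to be levied on tax-exclusive prices, so tax-inclusive spending S at rate t embeds VAT of S·t/(1+t)):

```python
def vat_embedded(spending_incl_vat, rate):
    """VAT contained in tax-inclusive spending under a uniform rate
    levied on tax-exclusive prices."""
    return spending_incl_vat * rate / (1 + rate)

RATE = 0.25
# Two hypothetical households with the same current spending: one is
# temporarily dissaving (spending above income), one temporarily saving.
dissaver = {"income": 100.0, "spending": 200.0}
saver = {"income": 400.0, "spending": 200.0}

for h in (dissaver, saver):
    h["vat"] = vat_embedded(h["spending"], RATE)
    h["share_of_spending"] = h["vat"] / h["spending"]  # 0.20 for both
    h["share_of_income"] = h["vat"] / h["income"]      # 0.40 vs. 0.10
```

Both households pay 40 in VAT: identical as a share of spending (20%), but 40% versus 10% of current income, so measured against current income a perfectly uniform VAT looks regressive.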
12. The assumption that lifetime income and expenditure are equal means that in this example we abstract from gifts and bequests. This is for ease of exposition only: the argument with bequests is more complicated, but the conclusions are largely unchanged. For example, when assessing the proportional impact of VAT on households that are recipients of gifts and bequests, it seems clear that we would want to take those gifts and bequests into account when measuring their lifetime resources. We would not, for instance, wish to say that a household with zero income but large expenditures funded by gifts and bequests is hit infinitely hard by VAT. Including bequests and gifts in the lifetime resources of the recipient makes subtracting them from the resources of the giver attractive, to avoid double counting. Adding and subtracting gifts and bequests when calculating lifetime resources in this manner means a uniform VAT would be found to be a constant fraction of both lifetime resources (income) and lifetime expenditure; that is, it would be distributionally neutral, as in the case with no gifts and bequests.
To see this, consider a poor household with a long-run income of 100 euros per week but who is currently spending 200 euros per week, funded by drawing down the last of their savings. Furthermore, suppose that the rate of VAT is 25% on all goods and services. The household would pay 40 euros per week in VAT, equal to 20% of their current spending but 40% of their current income. Which measure is a better reflection of the impact of VAT on the household? It is true that their current income is a better measure of their long-run purchasing power than their current expenditure is. But it does not follow that expressing VAT payments as a proportion of current income gives a better measure of the impact of VAT on that long-run purchasing power. This is because when the household is forced to cut their spending back to the level of their long-run income (100 euros per week), the amount of VAT they would pay falls to 20 euros per week. This is equal to 20% of their current and long-run income, and of their long-run expenditure of 100 euros per week.

8.3.3.3. Distributional analysis of reforms that involve changes to direct and indirect taxes

But if one should express gains/losses due to indirect taxes as a percentage of expenditure and gains/losses due to direct taxes as a percentage of income, how can one combine the results of direct and indirect tax micro-simulation models to look at the overall impact of a package of reforms? Recall that some of the key questions, such as assessing the impact of shifts from direct to indirect tax, or of VAT base broadening combined with compensating changes to direct taxes and benefits, require such a combination of models. The short answer is that there is no perfect way to combine results. However, there are three approaches that allow one to investigate the distributional effects of mixed tax reforms.
The first is to add up the cash gains/losses calculated using both the direct tax and benefit model and the indirect tax model, and to express results both as a percentage of income and as a percentage of expenditure. This means there is one figure for the total cash gain/loss and two figures for the percentage gain/loss. When assessing progressivity, households should also be ranked according to both their income and their expenditure, which then gives four sets of results (income-income, income-expenditure, expenditure-income and expenditure-expenditure). When all four show the same pattern (proportional gains, for instance, increasing or decreasing with income/expenditure), then a reform can be judged as clearly regressive or progressive. Where the different sets of results differ, judgement must be used. Chapter 9 of Mirrlees et al. (2011) provides a good example of this approach.

The second is to calculate the percentage gains/losses using each simulator separately and to simply add them together to obtain a proxy of the overall percentage loss a household faces. Cash-terms gains/losses can then be calculated by multiplying this total percentage gain/loss by
either income or expenditure. Under this method, there is one figure for the total percentage gain/loss and two figures for the cash gain/loss.

The third is that, if panel data on both expenditure and income are available, much of the problem disappears: over longer time periods expenditure and income are likely to be more similar than in any given snapshot, which makes combining results easier.
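The first approach can be sketched for a single hypothetical household (Python; the two cash gains would in practice come from the direct and indirect tax models respectively):

```python
# Hypothetical figures for one household.
income, expenditure = 500.0, 450.0
gain_direct = -10.0   # cash gain/loss from the direct tax and benefit model
gain_indirect = 4.0   # cash gain/loss from the indirect tax model

# One cash figure...
total_cash_gain = gain_direct + gain_indirect          # -6.0
# ...and two percentage figures, one against each base.
pct_of_income = 100 * total_cash_gain / income         # -1.2%
pct_of_expenditure = 100 * total_cash_gain / expenditure

# Repeating this for each household and ranking households by income AND
# by expenditure yields the four sets of results (income-income,
# income-expenditure, etc.) described in the text.
```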
8.3.4. Data requirements and processing

The type of data required depends upon the specifics of the indirect tax model in question. For even the most basic model, one needs data on household expenditures, disaggregated into categories which align at least approximately with the goods subject to different tax rates under the baseline and reform tax systems, and information on the tax rates applied to the different categories. Such data are now collected on a regular basis in all EU countries, the United States, Canada, Australia, Japan and many other developed and developing countries. In addition, information on household and individual demographics and income is likely to be required, for inclusion as explanatory variables in Engel curves or demand systems,13 or as part of the distributional analysis. Sometimes these data are not available in the expenditure survey; in this instance, data can be imputed from other surveys, or two surveys can be used together. Our description of the EUROMOD indirect tax simulation module (Section 8.4) includes an explanation of how this can be done. If one wishes to incorporate housing consumption in total expenditure/consumption or income, one may also need to impute this using observed rents and household and housing characteristics; Brewer and O'Dea (2012) describe a method of doing this.

A choice that needs to be made is whether to adjust expenditure survey data to account for under-recording of household expenditures. For instance, the UK's Living Costs and Food Survey captures only around 70% of overall household spending according to the UK's National Accounts, and similar problems exist with other surveys. This under-reporting is not evenly spread across commodities, however: it is generally a bigger problem for 'vices' like alcohol and tobacco.
If one does not account for this under-reporting, micro-simulation models are liable to significantly under-estimate the revenue effects of reforms, especially for excise duties. And because under-reporting is usually greater for ‘taxed’ than ‘untaxed’ expenditures (because of high excise duties and significant under-reporting for alcohol and tobacco),
13. Income is often used as an instrument for total expenditure.
estimates of the proportional effect of indirect taxes and tax changes on average household spending power or welfare are also likely to be underestimated. This suggests one might want to use National Accounts data to adjust for this under-reporting. But there is no particularly attractive way of doing so. Applying a uniform adjustment to all spending is likely to lead to over-estimates of spending on some items and under-estimates on others, which is also likely to lead to inaccurate revenue and welfare effect estimates. On the other hand, applying different adjustments to different types of expenditure would change the spending patterns of households, and mean different adjustments to different households' total expenditures, depending on the types of spending they undertake. This may have a substantial effect on the estimates of the distributional effects of reform. An issue with both methods is that they assume the amount not reported by a household is proportional to the amount of expenditure that is reported. This might not be true: under-reporting might be more significant at the top or bottom of the expenditure distribution, for instance; and the under-reporting of certain categories of expenditure, such as alcohol and tobacco, is likely to be due to some households omitting such spending completely. Because of these difficulties, some practitioners recommend using unadjusted data, and only grossing-up to National Accounts or external revenue estimates outside of the micro-simulation model when calculating the revenue effects of reforms. One can always test the sensitivity of results to alternative assumptions about this missing expenditure.

As already mentioned, if one wishes to model the effect of un-reclaimable VAT or excise duties paid by firms on households, data from National Accounts input-output tables are required.
Such tables typically do not have the same level of detail on individual goods and services as household expenditure surveys and, in particular, their categories may not always align with those subject to different tax rates. And while one may be able to calculate average tax rates for these commodity groups based on expenditure weights from the household expenditure survey, there is no guarantee that these weights are also applicable to intermediate firms' purchases. Thus, while the use of input-output tables allows micro-simulation models to include a larger set of taxes, it also requires the modeller to make more assumptions.

Lastly, if one wishes to estimate a consumer demand system, information on prices, and variation in those prices, is also required. In most analyses for developed countries, this price information comes from outside the household expenditure survey: official consumer price indices are the most common source (used in IFS et al., 2011 and Mirrlees et al., 2011, for instance). In order to get enough variation in prices, this typically involves pooling several years of household expenditure survey data. For developing countries, information on expenditures and quantities is more often available within the household expenditure survey, at least for some goods such as food, clothing and fuel. This allows the construction of unit
values (expenditure/quantity). Such unit values may be appealing because they typically exhibit substantial variation across households or localities. However, such variation is likely to reflect differences in quality as well as price, and if one does not account for this, one is likely to obtain biased estimates of price responsiveness. Methods exist to adjust for this (Crawford, Laisney, & Preston, 2003; Deaton, 1988, 1990), but they are not consistent with the kind of integrable demand systems that one might want to use in micro-simulation models (such as the AIDS or QUAIDS models). With this in mind, some authors have attempted to limit the problem of quality variation by making use of average unit values for relatively large geographic areas (Attanasio, Di Maro, Lechene, & Phillips, 2013), or official regional consumer price indices, as in developed countries (Abramovsky, Attanasio, & Phillips, 2012). It is important not to re-introduce the 'quality problem' when calculating the prices of the commodity groups used in the demand system from the lower-level prices typically available in consumer price series. This means the weights used to calculate these 'average' prices should be population-average or group-average weights rather than individual-level weights.14

8.4. Examples of indirect tax micro-simulation models

In order to understand the choices and issues faced when building and utilising indirect tax and consumption micro-simulation models, let us examine two models in more detail: the MEXTAX model of tax and consumer demand for Mexico, and the new indirect tax modules in EUROMOD. MEXTAX is based on the estimation of a full demand system (QUAIDS). The model also carefully distinguishes purchases in the formal sector from those in the informal sector (where no indirect taxes are charged), a distinction that is especially important for many developing countries.
It moreover allows for sensitivity analysis with respect to different methods of dealing with the under-reporting of consumption from which many surveys suffer, and with respect to different assumptions about the pass-through of indirect taxes paid by producers or traders. EUROMOD was originally
14. To see why using individual weights can re-introduce the quality problem, consider the case of a demand system that requires a price for 'beef', for which there are two sub-goods: beef burgers and fillet steak. Household A likes beef and spends a lot on high-quality and costly fillet steak; using an individual-level weight would give this household a high price for beef. Household B does not care much for beef and tends to buy cheap burgers; using an individual-level weight would give this household a low price for beef. Estimating a price elasticity from these data using a standard model would produce a positive price elasticity of demand, because it would attribute Household A's high expenditure on beef to the high price paid, and vice versa. But in reality the differences in spending on beef reflect preference differences, and the differences in 'price' reflect differences in quality.
a pure tax-benefit model, covering income taxes and social security contributions and benefits, developed for all 27 EU member states. Over the last decade, some attempts have been made to integrate an indirect tax module into EUROMOD, and we describe one such solution.
8.4.1. Indirect tax micro-simulation in MEXTAX

MEXTAX was developed at the Institute for Fiscal Studies (IFS) in order to examine the distributional, behavioural and revenue effects of a set of tax reforms proposed in Mexico in 2010 (Abramovsky et al., 2011a, 2011b; Abramovsky et al., 2012). It includes both direct and indirect tax micro-simulation modules, and incorporates a demand system that allows the effect of reforms on spending patterns and welfare to be calculated. It also allows for different assumptions about the pass-through of indirect taxes to be made (with those taxes not passed through to consumer prices instead being borne in the form of lower earned or capital income).

The main dataset used is the Encuesta Nacional de Ingresos y Gastos de los Hogares (ENIGH), a comprehensive survey of around 29,000 households undertaken every two years by the Mexican national statistical agency (INEGI). This includes comprehensive information on household and individual demographics, employment and income, and expenditure and consumption. Expenditure is recorded for more than 750 separate goods and services, with information on the type of vendor items were purchased from (for instance, whether it was a supermarket, small shop or informal hawker), the payment method used (for instance, cash or credit card), and, for food and clothing items, the quantity of purchase that the expenditure corresponds to (for instance, the weight in kilograms). MEXTAX has been designed so that the number of categories of expenditure included can easily be amended: the model is written in Stata, and the characteristics of the input data and tax system are easily changeable parameters defined as scalars and globals.
In analyses carried out so far, however, the 750 goods and services are grouped into 60 categories: 30 of which record 'informal sector' expenditures (which, by default, include cash expenditures at street vendors, small markets and some small stores), on which it is assumed no tax is paid, and 30 of which record 'formal sector' expenditures (expenditure at other vendors and by credit card or other electronic means) for the same goods, on which the full tax due is assumed to be paid. Taking account of informal purchases is important when analysing the revenue and distributional impact of indirect taxes in developing countries: a large fraction of notional revenues are in fact not collected, and such taxes are avoided more often by poorer households than richer ones, and by rural households than urban ones. In constructing the expenditure categories, items are grouped together so that, as far as possible, they all have the same tax treatment: that is, they are subject to the same rate of VAT or the same excise duty regime. In doing this, goods that are exempt from VAT are kept separate from those subject to a zero rate. This is important because the price of the
former could be affected by changes in VAT due to un-reclaimable input VAT, the effects of which one can approximate using social accounting matrices, as the World Bank has recently done in its subsequent use of MEXTAX (the model has been made available to other users by the IFS).

Under-reporting of expenditures (and incomes) is particularly evident for ENIGH, with aggregate expenditures only around one-third of those recorded in the Mexican National Accounts. However, the extent of under-reporting varies significantly by type of expenditure: reported expenditure on education services was 92% of that recorded in the National Accounts, but just 9% for alcohol and tobacco. Because of this, and the issues discussed in Section 8.3.4, analysis using MEXTAX has relied on sensitivity-testing findings against alternative assumptions about under-reporting.

MEXTAX has also been designed to allow for sensitivity-testing regarding the pass-through of indirect taxes to consumer prices. The default assumption is that pass-through is full. However, it is possible to assume any degree of pass-through between 0% and 100%, with any tax not passed through assumed to reduce formal-sector employment income and capital income (the split between employment and capital income can also be varied). The model also allows a choice of whether to assume fixed quantities, fixed expenditure shares, or variable expenditure shares based upon an estimated QUAIDS model. Use of the demand system allows the impact of reforms on spending patterns to be predicted and, because the system is integrable, allows for the calculation of welfare effects once one allows for changes in spending patterns. The 60 expenditure categories in the main MEXTAX model are collapsed into 12 demand system categories in the present version of the model (although the model allows one to define the number of demand system categories), based upon their functional characteristics and their treatment by the VAT system.
However, this less-fine categorisation means that different types of alcohol and tobacco are presently put in a single category, despite their differential treatment by the duties system, meaning that the demand system is currently of little use when simulating differential changes to duties on alcohol and tobacco. Nevertheless, new demand systems can be estimated for particular purposes (e.g. with a more detailed breakdown of alcohol expenditure) and easily integrated with the model.

Once the various simulation options have been chosen and the model run, MEXTAX produces a number of outputs. First, a household-level dataset is produced that includes demographic variables, expenditures and calculated indirect taxes under the baseline and reform systems. This allows analysts to perform bespoke analysis. Second, a number of Stata log-files are produced, including distributional tables (ranking households on both income and expenditure, and expressing gains/losses in cash terms and as a percentage of income and expenditure), revenue estimates, estimated welfare effects and estimated behavioural effects. These provide the most common summary statistics users will need.
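The pass-through option can be sketched as follows (Python; this is an illustrative linear shifting rule under assumed figures, not necessarily MEXTAX's exact implementation):

```python
def consumer_price(producer_price, t_old, t_new, passthrough):
    """Consumer price after an ad valorem rate change t_old -> t_new,
    with a fraction `passthrough` (0 to 1) of the tax change shifted
    forward onto consumers; the remainder is assumed to be borne as
    lower formal-sector employment and capital income."""
    tax_change = producer_price * (t_new - t_old)
    return producer_price * (1 + t_old) + passthrough * tax_change

q = 100.0  # hypothetical producer (tax-exclusive) price
p_full = consumer_price(q, 0.15, 0.16, 1.0)  # ~116.0: full pass-through
p_half = consumer_price(q, 0.15, 0.16, 0.5)  # ~115.5: producers bear half
unshifted = (1 - 0.5) * q * (0.16 - 0.15)    # ~0.5 borne by factor incomes
```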
Full information on how to use MEXTAX, which is now integrated into the larger LATAX programme, is available in the LATAX manual (Abramovsky & Phillips, 2013), available on request from the authors.
8.4.2. Indirect tax micro-simulation in EUROMOD

EUROMOD is a multi-country, EU-wide tax-benefit micro-simulation model, maintained and coordinated by the Institute for Social and Economic Research (ISER) at the University of Essex (see Chapter 4 of this Handbook and Sutherland & Figari, 2013 for an introduction to the model). As is common to most tax-benefit models, it focuses on personal income taxes, social security contributions and cash benefits, and has not modelled indirect taxes in its standard version. One of the reasons is that the underlying database, which for most of the EUROMOD countries is EU-SILC, does not contain sufficiently detailed expenditure information to allow for detailed modelling of VAT and excises for the representative sample of households in EU-SILC. Naturally, there have been several attempts to extend EUROMOD with indirect taxes, starting with O'Donoghue, Baldini, and Mantovani (2004), and resulting at the end of 2013 in a generic VAT extension for the Belgian version of EUROMOD.

Since the main impediment to integrating indirect taxes in EUROMOD has been the lack of expenditure data in the underlying income database, most of the time and effort devoted to expanding EUROMOD with an indirect tax module has been allocated to imputing expenditures into the underlying income database. The EUROMOD project AIM-AP investigated in depth which method to choose to carry out this imputation. Five methods of imputation were tested against each other: parametric and non-parametric Engel curves, constrained and unconstrained matching, and grade correspondence.15 The final assessment
15. Matching methods and grade correspondence do not rely on an explicit statistical model to impute the expenditure information. They rely on choosing one or more overlapping variables in both surveys, and try to directly attach expenditure information to observations in the income dataset by using the observation in the household budget survey whose values of those overlapping variables are 'as similar as possible' to the values for the target observation. Similarity is expressed mathematically as a distance function which has to be minimised, and which can take the form of a numerical value for the matching methods, or of belonging to the same categories and having the same rank within those categories for grade correspondence. Unconstrained matching allows re-use of already chosen records in the source dataset with expenditure information, while constrained matching forbids such replacement. For a report on the evaluation of the different methods on the basis of imputation for five countries (Belgium, Greece, Ireland, Hungary and the United Kingdom), see Decoster et al. (2007) and Decoster et al. (2011); for a review of the AIM-AP project, see Sutherland, Decoster, Matsaganis, and Tsakloglou (2009).
Consumption and Indirect Tax Models
was that, overall, parametric Engel curves performed best. The sequel to the AIM-AP project, confined to Belgium and Germany, therefore relied on parametric Engel curves to enrich the SILC datasets for these two countries with detailed expenditures.16 The regression-based imputation proceeds in several steps, described here for the Belgian application, in which consumption microdata from the 2009 Belgian household budget survey (HBS) were used to enrich EU-SILC data from 2010 (with reported incomes for 2009). First, total non-durable spending and durable spending were estimated in the HBS as polynomials of disposable income and other socio-demographic characteristics. The estimation of durable expenditures was done in two steps. Because a large number of households report zero spending on durable goods during the reporting period, a Probit model is first estimated for the probability of positive demand for durables. At the second stage, the demand equation for total household durable commodities is estimated on those households reporting positive expenditures. With these estimated coefficients, total non-durable expenditures and durable expenditures are then imputed into the SILC dataset. Durable expenditures are set to zero when, according to the Probit, the household is unlikely to spend on durables, while the estimated durable demand curve is used to impute values for those likely to spend on durables. Because disposable income in SILC, as simulated by EUROMOD, differs from that in the HBS, an adjustment was needed when making these imputations: disposable income in the SILC dataset was rescaled such that its mean and variance corresponded to those of disposable income in the HBS (used for the estimation of spending).17 Given these imputations for durables and total non-durable expenditures, the amount of savings is defined residually as the difference between
16. This VAT extension for EUROMOD-Belgium was carried out within the research project FLEMOSI (Flemish Models of Simulation), which delivered a web-based version of the Belgian version of EUROMOD, available at www.flemosi.be. The VAT extension was carried out in parallel with the extension of the German EUROMOD model: the methodology of imputing expenditures in EU-SILC was developed simultaneously for the German and the Belgian SILC. However, only the Belgian VAT module was fully integrated into the EUROMOD architecture, the German one standing as a side-routine outside EUROMOD. For both VAT extensions of EUROMOD, see Decoster, Ochmann, and Spiritus (2013, 2014).

17. Another reason for this discrepancy might be that not all personal income tax reductions are simulated in EUROMOD, resulting in too high tax liabilities and too low disposable incomes. The correction has only been applied in the imputation stage of total non-durable and durable expenditures. For the determination of residual savings, actual disposable income from EUROMOD is used again, such that adding-up conditions are fulfilled. For the remaining covariates, it is assumed that distributions are similar in both datasets.
Bart Capéau, André Decoster and David Phillips
the non-corrected simulated disposable income in SILC and the sum of these two imputations. Note that dissaving is explicitly allowed for. In a second step, total non-durable expenditures are allocated to 15 commodity groups (based on the COICOP classification). Estimation and imputation are conducted differently for two sub-groups of these 15 non-durable commodity groups. The first sub-group consists of commodities with typically many zero expenditures in the data: tobacco, housing rent, public transport and education. The second sub-group consists of the remaining 11 commodity groups. The estimation strategy for the first group closely follows the approach for durable spending: first a Probit model is estimated for the probability of positive demand for the respective commodity; next, budget share equations for each of these four non-durable commodities are estimated, conditional on demand being positive. For the second group of 11 remaining commodities, only budget share equations are estimated.18 To ensure that the estimated coefficients satisfy adding-up conditions, total remaining non-durable expenditure (i.e. after expenditure on the first group of four commodities has been subtracted) was used as the covariate in this case. The coefficients of these equations were then used to impute detailed expenditures into SILC. Again, before imputing the expenditure shares into EU-SILC, total non-durable expenditures are adjusted such that their distribution matches more closely the distribution in the HBS. To obtain expenditure levels, however, the imputed expenditure shares are multiplied not by these corrected total non-durable expenditures, but by the non-corrected ones. This guarantees consistency and adding-up within the simulations of EUROMOD. We refer to Decoster et al. (2014) for a detailed assessment of the expenditure distribution in the Belgian SILC dataset.
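The two-part estimate-then-impute logic described above can be sketched as follows. This is a simplified illustration with made-up coefficients, not the estimates from the Belgian HBS; the probability-threshold rule for assigning zeros is also an assumption of the sketch:

```python
from math import erf, log, sqrt

def probit_cdf(z):
    """Standard normal CDF, used to evaluate the fitted Probit."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical coefficients -- in practice these are estimated on the HBS:
B_PROBIT = (-0.4, 0.05)          # const, coefficient on log income
B_SHARE  = (0.30, -0.04, 0.001)  # const, log(x), log(x)^2 (QUAIDS-style)

def impute_share(income, total_nondurable, threshold=0.5):
    """Two-part imputation: zero when the Probit predicts no demand,
    otherwise the conditional budget-share (Engel curve) prediction."""
    p_positive = probit_cdf(B_PROBIT[0] + B_PROBIT[1] * log(income))
    if p_positive < threshold:
        return 0.0
    lx = log(total_nondurable)
    return B_SHARE[0] + B_SHARE[1] * lx + B_SHARE[2] * lx ** 2

w_rich = impute_share(income=30000, total_nondurable=20000)  # small positive share
w_poor = impute_share(income=500, total_nondurable=450)      # Probit assigns a zero
```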
Adding the routines to calculate VAT or excise liabilities, once expenditures are given, is usually easier than programming the detailed rules of a personal income tax system or the eligibility rules for social benefits. VAT and excise legislation has been integrated into EUROMOD-BE by means of implicit VAT and excise rates on the 15 commodity groups. These implicit rates are constructed as described in Section 8.3 by a separate Stata plug-in (called SINTAX) appended to the EUROMOD programme. SINTAX takes the tax legislation at the most detailed commodity level of the budget survey and the average expenditures at that detailed level as input, and delivers the implicit rates as output to EUROMOD. EUROMOD then applies
18. We used the QUAIDS specification for the budget shares, meaning in this case that we regressed each budget share on the logarithm of total expenditures, on its square, and on a number of socio-demographic characteristics. See the Technical Appendix for a brief description of QUAIDS.
these implicit rates to the imputed expenditures to calculate indirect tax liabilities for the households in SILC. The distribution of baseline VAT liability within EUROMOD-BE is presented in Figure 8.3. Decile average VAT payments are plotted in euros per year, as well as in percent of income or of total non-durable spending. The deciles are based on equivalised incomes, respectively non-durable expenditures, divided by the OECD-modified scale. VAT payments naturally increase across the decile distribution in absolute terms. When VAT payments as a percent of household disposable income are considered (right axis in Figure 8.3), the usual regressive picture appears: VAT liabilities clearly decrease across the deciles of equivalised disposable income, from 14% for the bottom decile to 8.6% for the richest income decile. If, on the contrary, VAT liabilities are related to total spending instead of net income, the picture reverses: VAT is now slightly progressive, in the sense that tax liabilities increase across the deciles of equivalised non-durable total expenditures (from 9.9% to 10.7%). The reason for these two different pictures, well known from earlier research, is the important distributional gradient of savings. It reconfirms the importance of the discussion in Section 8.3.3 about whether to choose disposable income or expenditures as the concept to rank households from worse to better off.
Figure 8.3. Baseline VAT incidence in the Belgian EUROMOD baseline (2012 policies on 2010 SILC). [Figure: bar chart of decile average VAT liability in € per year (left axis), with two lines for VAT as % of income and VAT as % of expenditures (right axis), plotted against deciles of equivalised income, respectively non-durable expenditures; VAT as % of income falls from 14.0% in decile 1 to 8.6% in decile 10, while VAT as % of expenditures rises from 9.9% to 10.7%.]
Source: Own calculations with the Belgian EUROMOD, extended with the SINTAX module for VAT.
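The reversal between the two incidence pictures can be reproduced with a stylised example. The deciles, rates and budget shares below are illustrative, not the Belgian figures; the mechanism is the rising saving rate combined with a falling budget share of reduced-rate necessities:

```python
REDUCED, STANDARD = 0.06, 0.21        # illustrative VAT rates, not Belgian law

def implicit_rate(t):
    """Tax as a share of the tax-inclusive price."""
    return t / (1 + t)

# Illustrative deciles: the saving rate rises with income, and the budget
# share of reduced-rate necessities falls with total spending (Engel's law).
deciles = [
    # (disposable income, non-durable spending, necessity share of spending)
    (12000, 11800, 0.50),
    (24000, 21000, 0.40),
    (48000, 36000, 0.30),
    (90000, 54000, 0.22),
]

vat_over_income, vat_over_spending = [], []
for income, spending, w_nec in deciles:
    rate = w_nec * implicit_rate(REDUCED) + (1 - w_nec) * implicit_rate(STANDARD)
    vat = rate * spending
    vat_over_income.append(vat / income)
    vat_over_spending.append(vat / spending)

# VAT looks regressive against income but progressive against spending:
regressive_in_income = all(a > b for a, b in zip(vat_over_income, vat_over_income[1:]))
progressive_in_spending = all(a < b for a, b in zip(vat_over_spending, vat_over_spending[1:]))
```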
Of course, the value added of having incorporated VAT and excises into an established static micro-simulation model like EUROMOD goes beyond the confirmation of these already well-known facts. The main contribution lies in the enhanced possibility of combined simulations of changes in the tax-benefit legislation and changes in VAT or excises. In the current version of EUROMOD, simulations of changes in the VAT and/or excise rates are performed under the following assumptions:

• labour supply is fixed, but disposable income can of course change due to changes in the tax-benefit legislation in other parts of EUROMOD;
• the change in disposable income is translated into a change in total (durable and non-durable) expenditures by keeping the amount of savings the same as in the baseline;
• the binary status of whether the household has non-zero expenditures on durables is kept fixed from the baseline, but for those with non-zero expenditures, spending is adjusted to account for any price change in durables, keeping the quantity of durables fixed; this allows for a change in tax payments on durables; the result is a post-reform level of total non-durable expenditures;
• for the allocation of the new level of total non-durable expenditures, the binary status of having non-zero expenditures on the first four commodities is also kept fixed from the baseline; given this status, the budget share equations are used to predict the new post-reform budget share in total non-durable expenditures;
• for the 11 other commodities, the Engel curves are used to predict the new post-reform budget shares, using the new level of remaining non-durable expenditures as the covariate.

These assumptions make clear that, contrary to the MEXTAX model discussed in the previous section, only real income effects are taken up as behavioural reactions in the current version of the extended EUROMOD.
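The expenditure-update rules listed above can be sketched as follows, with illustrative € amounts; the actual EUROMOD routine is more involved:

```python
def post_reform_expenditures(y_new, y_base, total_base, dur_base, dur_price_factor):
    """Simplified sketch of the update rules: savings are held at their
    baseline level, durable quantities are fixed (so durable spending scales
    with the durable price change), and the remainder is the new total
    non-durable expenditure, to be split across groups via the Engel curves."""
    savings = y_base - total_base            # baseline savings (may be < 0)
    total_new = y_new - savings              # total spending, savings fixed
    dur_new = dur_base * dur_price_factor    # fixed quantities, new prices
    nondur_new = total_new - dur_new         # post-reform non-durable total
    return dur_new, nondur_new

# A reform raises disposable income from 30,000 to 31,000 and durable
# prices by 2% (illustrative numbers):
dur, nondur = post_reform_expenditures(
    y_new=31000, y_base=30000, total_base=27000,
    dur_base=4000, dur_price_factor=1.02)
```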
No relative price effects have been estimated, and hence such effects cannot be incorporated. Of course, the real income effect is not only introduced in the positive part of the model (i.e. changing the expenditure pattern); it is also translated into a welfare effect along the lines explained in Section 8.3.2. We illustrate these possibilities with a table from Decoster, Loughrey, O'Donoghue, and Verwerft (2010), in which matched income and expenditure data were used to simulate a shift from employee social security contributions (decreased in EUROMOD by 25%) towards VAT (the standard VAT rate was raised by whatever was necessary to fully compensate for the loss of government revenue from the reduction in social contributions). Table 8.1 illustrates how the welfare effect consists of two parts. On the one hand, the welfare level increases due to the rise of total non-durable expenditures, itself following from the increase in disposable income out of increased labour income. But on the other hand, rising prices also decrease the affordable quantities of goods for a given
Table 8.1. Decomposition of the welfare cost into income effect and price change, for a decrease in social security contributions and a compensatory increase in the standard VAT rate (€'s per year)

Decile of equivalised        Change non-durable    Price     Welfare
non-durable expenditures     expenditures          effect    cost
 1                              −43                 193        150
 2                              −79                 262        183
 3                             −159                 308        149
 4                             −237                 366        129
 5                             −389                 417         28
 6                             −482                 455        −26
 7                             −614                 509       −105
 8                             −735                 557       −178
 9                             −837                 607       −230
10                           −1,162                 858       −305
Mean                           −473                 453        −20

Source: Adapted from Decoster et al. (2010), Table 14. The table is expressed in terms of 'welfare cost'; hence a negative sign denotes a welfare gain, a positive sign a welfare cost.
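The two components in Table 8.1 add up to the reported welfare cost, up to the €1 rounding of the published figures; a quick arithmetic check:

```python
# Columns of Table 8.1 (deciles 1-10), as reported:
change_exp = [-43, -79, -159, -237, -389, -482, -614, -735, -837, -1162]
price_eff  = [193, 262, 308, 366, 417, 455, 509, 557, 607, 858]
reported   = [150, 183, 149, 129, 28, -26, -105, -178, -230, -305]

# Welfare cost = (change in non-durable expenditures) + (price effect);
# e.g. first decile: -43 + 193 = 150, a net welfare cost.
computed = [c + p for c, p in zip(change_exp, price_eff)]
max_gap = max(abs(a - b) for a, b in zip(computed, reported))  # rounding only
```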
budget. The price rise therefore exerts a downward pressure on the welfare gain. In Table 8.1 this is calculated according to Eq. (8.10). The first component of the welfare cost is negative for all deciles, because disposable income can only increase under this tax reform and savings are kept constant. The second component is positive for all households, since the increase of the standard rate raises the consumer price of the commodities subject to it. Table 8.1 indicates that the price effect dominates the change in expenditures in the lower equivalised expenditure deciles, so that the welfare cost of the reform is positive up to the fifth decile. For the higher deciles, the situation is reversed and these groups become better off after the reform.

8.5. Summary and future directions

This chapter has discussed some of the main issues involved in constructing micro-simulation models for the analysis of the effects of indirect taxes. These issues include: what types of taxes and consumption to model; what forms of behavioural response to allow for; how to measure the welfare effects of tax changes; how to assess the distributional impact of reforms; and the data requirements for the different types of model that can be built. We have done this with a mix of description (i.e. saying what choices are made in existing models) and prescription (i.e. where there are good reasons for doing so, saying what are and what are not good modelling choices).
Although there are important choices to be made, and often tricky issues to deal with due to data limitations, most existing models are, in their essence, fairly simple, with behavioural response either absent entirely or limited to effects on consumer demand only. This means many interesting questions cannot be adequately addressed using most existing models. There are three areas in particular where 'cutting edge' research is likely to be particularly beneficial in extending the types of analysis that can be carried out with indirect tax micro-simulation models. First, the integration of indirect tax micro-simulation models that incorporate consumer demand responses with models of labour supply and direct taxes and benefits. Second, the estimation of partial equilibrium models where pre-tax prices are not fixed, and where producers as well as consumers can respond to tax changes. And third, the linking of micro-simulation models with general equilibrium models to assess the economy-wide impact of reforms. To start with the first, we illustrated in Section 8.4.2 how EUROMOD keeps labour supply fixed. It is, however, possible to use EUROMOD as an input for estimating a labour supply model (see, e.g., Bargain et al., 2013). Assuming weak separability between labour supply and the allocation of disposable income to consumption goods and services, the methods developed to extend EUROMOD with an indirect tax model, as discussed in Section 8.4, remain valid. A more ambitious approach would be to jointly estimate labour supply and consumer demand behaviour, allowing for non-separabilities between labour supply and the consumption of particular goods and services. Such models allow for differences in spending patterns due to differences in how much one works (and not merely in the income one gets from work), and conversely, for differences in the relative prices of goods to affect how much one decides to work. If such non-separabilities are present (i.e.
if some goods, like convenience food, are complementary to work, and others, like gardening tools, are complementary to leisure), optimal tax rates on different goods may vary (Atkinson & Stiglitz, 1972, 1976). Incorporating non-separabilities in a linked labour supply-consumer demand model would allow one to analyse how commodity tax rates should optimally vary across commodities. And when integrated with a micro-simulation model, one would be able to estimate the behavioural, revenue and welfare effects of tax reforms without the strict (and probably unrealistic) assumption of separability currently used. A second broad avenue of research is the development of indirect tax micro-simulation models that do not rely on the assumptions of fixed producer prices and competitive product markets. These will allow for the estimation of pass-through rates for indirect taxes, and the incorporation of producer as well as consumer responses to reforms. One such model is currently in development at the IFS as part of a larger project on prices, food and nutrition. Making use of very detailed data from the Kantar
consumer expenditure panel, a structural model incorporating strategic pricing behaviour by multi-product firms, product differentiation, consumer preference heterogeneity, and consumer demand responses to prices and product characteristics has been developed and estimated. So far it has been applied to examine the impact of excise and ad valorem taxes on saturated fats on firm and consumer behaviour in the market for butter and margarine (Griffith, Nesheim, & O'Connell, 2010), with work ongoing to extend the model to a wider range of foodstuffs. Unfortunately, the data used in this analysis are not widely available for academic research, and estimating both supply and demand elasticities from household expenditure survey data is likely to be infeasible (an alternative approach may be to use external estimates of supply elasticities). And models of this kind, although highly detailed, incorporate only partial equilibrium effects. The third area where research and development is particularly active is the linking of micro-simulation models with general equilibrium (GE) macro-models. These allow the first-round effects of the reforms estimated in a micro-simulation model to affect the whole economy via a GE model. These economy-wide changes in labour supply, output and prices may then be re-incorporated into the micro-simulation model. Chapter 9 of this book provides further detail on the methods and issues involved in developing and using such macro-micro models. Finally, it is worth recognising that analysis could be improved by utilising panel data on households' expenditures. Parker, Souleles, and Carroll (2012) discuss the numerous benefits of panel data in consumer expenditure surveys, including for the purposes of estimating the effects of tax policy.
For instance, the use of panel data to assess the distributional effects of a reform involving both direct and indirect taxes would help reduce the problems associated with volatile expenditure and income discussed in Section 8.3.3. Unfortunately, with the exception of the Consumer Expenditure Survey and the Panel Study of Income Dynamics in the United States, and the Spanish Encuesta de Presupuestos Familiares, household expenditure data are based on single cross-sections. Encouraging national statistical agencies to collect panel data on expenditures is therefore important and would have benefits for many other fields of economics.
References

Abramovsky, L., Attanasio, O., Emmerson, C., & Phillips, D. (2011a). The distributional impact of reforms to direct and indirect tax in Mexico: Methodological issues. Report for the World Bank.
Abramovsky, L., Attanasio, O., Emmerson, C., & Phillips, D. (2011b). The distributional impact of reforms to direct and indirect tax in Mexico: Analytical report and results. Report for the World Bank.
Abramovsky, L., Attanasio, O., & Phillips, D. (2012). Demand responses to changes in consumer prices in Mexico: Lessons for policy and an application to the 2010 Mexican tax reforms. Royal Economic Society Conference Paper.
Abramovsky, L., & Phillips, D. (2013). LATAX: A multi-country flexible tax micro-simulation model. Mimeo.
Ahmad, E., & Stern, N. (1984). The theory of reform and Indian indirect taxes. Journal of Public Economics, 25(3), 259–298.
Anton, A. S., Hernandez, F., & Levy, S. (2012). The end of informality in Mexico? Fiscal reform for universal social insurance. Washington, DC: Inter-American Development Bank.
Atkinson, A. B. (1977). Optimal taxation and the direct versus indirect tax controversy. Canadian Journal of Economics, 10(4), 590–606.
Atkinson, A. B., Stern, N., & Gomulka, J. (1980). On the switch from direct to indirect taxation. Journal of Public Economics, 14(2), 195–224.
Atkinson, A. B., & Stiglitz, J. E. (1972). The structure of indirect taxation and economic efficiency. Journal of Public Economics, 1(1), 97–119.
Atkinson, A. B., & Stiglitz, J. E. (1976). The design of tax structure: Direct versus indirect taxation. Journal of Public Economics, 6(1–2), 55–75.
Attanasio, O., Di Maro, V., Lechene, V., & Phillips, D. (2013). The effect of increases in food prices on consumption and welfare in rural Mexico. Journal of Development Economics, 104, 136–151.
Bach, S., Haan, P., Hoffmeister, O., & Steiner, V. (2006). Increasing the value-added tax to refinance a reduction of social security contributions? A behavioral microsimulation analysis for Germany. Mimeo.
Balli, F. (2012). Are traditional equivalence scales still useful? A review and a possible answer. Università degli Studi di Siena Working Paper No. 656.
Banks, J., Blundell, R., & Lewbel, A. (1997). Quadratic Engel curves and consumer demand. The Review of Economics and Statistics, 79(4), 527–539.
Bardazzi, R., Parisi, V., & Pazienza, M. G. (2004). Modelling direct and indirect taxes on firms: A policy simulation. Austrian Journal of Statistics, 33(1–2), 237–259.
Bargain, O., Decoster, A., Dolls, M., Neumann, D., Peichl, A., & Siegloch, S. (2013). Welfare, labor supply and heterogeneous preferences: Evidence for Europe and the US. Social Choice and Welfare, 41(4), 789–817.
Bird, R., & Gendron, P. P. (2007). The VAT in developing and transitional countries. Cambridge: Cambridge University Press.
Blomquist, S., & Christiansen, V. (2008). Taxation and heterogeneous preferences. Finanz Archiv: Public Finance Analysis, 64(2), 218–244.
Bourguignon, F., & Spadaro, A. (2006). Microsimulation as a tool for evaluating redistribution policies. Journal of Economic Inequality, 4(1), 77–106.
Brewer, M., & O'Dea, C. (2012). Measuring living standards with income and consumption: Evidence from the UK. ISER Working Paper Series No. 2012-0. ISER, Colchester.
Browning, M., & Meghir, C. (1991). The effects of male and female labor supply on commodity demands. Econometrica, 59(4), 925–951.
Chiappori, P., & Donni, O. (2011). Non-unitary models of household behaviour: A survey of the literature. In J. Molina (Ed.), Household economic behaviors (pp. 1–40). Berlin: Springer Verlag.
Cnossen, S. (2012). Taxing consumption or income: Du pareil au même? Working Paper No. 5(13). School of Public Policy Research Papers, University of Calgary.
CPB, CAPP, et al. (2013). Study on the impacts of fiscal devaluation. European Commission Taxation Papers, Working Paper No. 36-2013.
Crawford, I., Laisney, F., & Preston, I. (2003). Estimation of household demand systems with theoretically consistent Engel curves and unit value specifications. Journal of Econometrics, 114, 221–241.
Cremer, H., Pestieau, P., & Rochet, J. C. (2001). Direct versus indirect taxation: The design of the tax structure revisited. International Economic Review, 42(3), 781–799.
Deaton, A. (1988). Quality, quantity and spatial variation in price. American Economic Review, 78(3), 418–430.
Deaton, A. (1990). Price elasticities from survey data: Extensions and Indonesian results. Journal of Econometrics, 44(3), 281–309.
Deaton, A., & Muellbauer, J. (1980a). Economics and consumer behaviour. Cambridge: Cambridge University Press.
Deaton, A., & Muellbauer, J. (1980b). An almost ideal demand system. The American Economic Review, 70(3), 312–326.
Decoster, A., De Rock, B., De Swerdt, K., Flannery, D., Loughrey, J., O'Donoghue, C., & Verwerft, D. (2007). Comparative analysis of different techniques to impute expenditures into an income data set. Workpackage 3.4 of AIM-AP. Retrieved from https://www.iser.essex.ac.uk/euromod/research-and-policy-analysis-using-euromod/aim-ap/deliverables-publications
Decoster, A., Loughrey, J., O'Donoghue, C., & Verwerft, D. (2010). How regressive are indirect taxes? A microsimulation analysis for five European countries. Journal of Policy Analysis and Management, 29(2), 326–350.
Decoster, A., Loughrey, J., O'Donoghue, C., & Verwerft, D. (2011). Microsimulation of indirect taxes. International Journal of Microsimulation, 4(2), 41–56.
Decoster, A., Ochmann, R., & Spiritus, K. (2013). Integrating indirect taxation into EUROMOD: Documentation and results for Germany. EUROMOD Working Paper No. EM 20/13, ISER, Essex.
Decoster, A., Ochmann, R., & Spiritus, K. (2014). Integrating indirect taxation into EUROMOD: Documentation and results for Belgium. EUROMOD Working Paper No. EM 12/14, ISER, Essex.
de Mooij, R., & Keen, M. (2012). Fiscal devaluation and fiscal consolidation: The VAT in troubled times. IMF Working Paper No. 12/85.
Ebrill, L., Keen, M., Bodin, J., & Summers, V. (2001). The modern VAT. Washington, DC: International Monetary Fund.
Golan, A., Perloff, J. M., & Shen, E. Z. (2001). Estimating a demand system with nonnegativity constraints: Mexican meat demand. The Review of Economics and Statistics, 83(3), 541–550.
Griffith, R., Nesheim, L., & O'Connell, M. (2010). Sin taxes in differentiated product oligopoly: An application to the butter and margarine market. CEMMAP Working Paper No. 37/10. Institute for Fiscal Studies.
Hoddinott, J., Alderman, H., & Haddad, L. (2002). Testing competing models of intrahousehold allocation. In J. Hoddinott, H. Alderman, & L. Haddad (Eds.), Intrahousehold resource allocation in developing countries (pp. 129–141). Baltimore, MD: Johns Hopkins University Press.
Institute for Fiscal Studies, et al. (2011). A retrospective evaluation of elements of the EU VAT system. Evaluation Report for the EU Commission.
Kay, J., & King, M. (1990). The British tax system (5th ed.). Oxford: Clarendon Press.
Meyer, B., & Sullivan, J. (2003). Measuring the well-being of the poor using income and consumption. Journal of Human Resources, 38(Special issue), 1180–1220.
Meyer, B., & Sullivan, J. (2004). The effects of welfare and tax reform: The material well-being of single mothers in the 1980s and 1990s. Journal of Public Economics, 88(7–8), 1387–1420.
Meyer, B., & Sullivan, J. (2008). Changes in the consumption, income, and well-being of single mother headed families. American Economic Review, 98(5), 2221–2241.
Meyer, B., & Sullivan, J. (2011). Further results on measuring the well-being of the poor using income and consumption. Canadian Journal of Economics, 44(1), 52–87.
Mirrlees, J., Adam, S., Besley, T., Blundell, R., Bond, S., Chote, R., … Poterba, J. (2011). Tax by design: The Mirrlees review. Oxford: Oxford University Press.
O'Donoghue, C., Baldini, M., & Montovani, D. (2004). Modelling the redistributive impact of indirect taxes in Europe: An application of EUROMOD. EUROMOD Working Paper No. EM7/01.
Parker, J. A., Souleles, N. S., & Carroll, C. D. (2012). The benefits of panel data in consumer expenditure surveys. In C. D. Carroll, T. Crossley, & J. Sabelhaus (Eds.), Improving the measurement of consumer expenditures. Chicago, IL: University of Chicago Press.
Poterba, J. (1989). Lifetime incidence and the distributional burden of excise taxes. American Economic Review, 79(2), 325–330.
Reister, T., Spengel, C., Heckemeyer, J. H., & Finke, K. (2008). ZEW corporate taxation microsimulation model (ZEW TaxCoMM). ZEW Discussion Paper No. 08-117. ZEW, Mannheim.
Rode, A. (2011). Literature review: Non-unitary models of the household (theory and evidence). Mimeo.
Saez, E. (2002). The desirability of commodity taxation under non-linear income taxation and heterogeneous tastes. Journal of Public Economics, 88(2), 217–230.
Sandmo, A. (1993). Optimal redistribution when tastes differ. Finanz Archiv: Public Finance Analysis, 50(2), 149–163.
Sutherland, H., Decoster, A., Matsaganis, M., & Tsakloglou, P. (2009). AIM-AP: Accurate Income Measurement for the Assessment of Public Policies. Final Report, STREP Project No. 028412, Essex. Retrieved from https://www.iser.essex.ac.uk/euromod/research-and-policy-analysis-using-euromod/aim-ap/deliverables-publications
Sutherland, H., & Figari, F. (2013). EUROMOD: The European Union tax-benefit microsimulation model. International Journal of Microsimulation, 6(1), 4–26.
Tamaoka, M. (1994). The regressivity of a value added tax: Tax credit method and subtraction method – a Japanese case. Fiscal Studies, 15(2), 57–73.
Technical appendix
In this appendix we provide further information on: (a) how to use expenditures and retail prices to model excise duties, which are levied on quantities for which often no information is available in the datasets used; (b) approximations to some welfare measures of indirect tax reforms, such as the compensating and equivalent variation; (c) using the Cobb-Douglas expenditure function that underlies the fixed expenditure shares approach; (d) how to calculate the effect of a tax change on expenditure shares and goods quantities using the Engel curve approach; (e) an example of using a full QUAIDS demand system to assess the effects of a reform on welfare.

Using expenditures and retail prices to model excise duties

Assume t is the (ad valorem) VAT rate (as a percentage of the producer price, p), a is the excise expressed as a monetary amount per unit of the good purchased, and τ is the ad valorem excise rate measured as a percentage of the consumer price, q (as, e.g., in the case of the Belgian special excise duty on tobacco). Consumer and producer prices are measured in the same unit as the excise a. Finally, let x be the quantity of the good bought (expressed in the same measurement unit as the one in which the excise duty is expressed). So we have:

p(1 + t) + a + τq = q.   (8.A.1)

If one knows q, t, a and τ, then the producer price, p, can be calculated as:

p = (q(1 − τ) − a) / (1 + t),   (8.A.2)

and this in turn allows one to calculate the total indirect tax rate as an ad valorem rate of the consumer price:

θ = (t + τ + a/q) / (1 + t),   (8.A.3)

from which the amount of taxes paid, T(x), can be calculated on the basis of expenditures η_x := q·x, as follows:

T(x) = θ · η_x.   (8.A.4)
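Eqs. (8.A.1)-(8.A.4) translate directly into code. The numbers below are purely illustrative, not actual Belgian rates:

```python
def implicit_rate(q, t, a, tau):
    """Total indirect tax as an ad valorem rate of the consumer price q,
    following Eqs. (8.A.1)-(8.A.3): VAT rate t on the producer price,
    per-unit excise a, and ad valorem excise tau on the consumer price."""
    p = (q * (1 - tau) - a) / (1 + t)      # producer price, Eq. (8.A.2)
    theta = (t + tau + a / q) / (1 + t)    # Eq. (8.A.3)
    # Sanity check: the per-unit tax wedge q - p equals theta * q.
    assert abs((q - p) - theta * q) < 1e-9
    return theta

# Illustrative numbers: consumer price 10, 21% VAT, per-unit excise 1,
# 5% ad valorem excise on the consumer price.
theta = implicit_rate(q=10.0, t=0.21, a=1.0, tau=0.05)
tax_paid = theta * 10.0 * 3.0              # T(x) = theta * q * x, Eq. (8.A.4)
```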
Approximations to welfare measures of indirect tax reforms

In general, the difference between pre- and post-reform expenditures necessary to obtain the pre-reform bundle, x_0, that is:

\sum_{j=1}^{n} x^h_{0,j} (q_{1,j} − q_{0,j}),   (8.A.5)

is a first-order approximation of the welfare cost for individual h of a tax reform causing a change in consumer prices from q_0 to q_1 (Bourguignon & Spadaro, 2006). Indeed, let the welfare from a given set of prices q and income y^h be measured by the indirect utility function V^h(q, y^h) (i.e. the maximal utility a person h can obtain with income y^h, at prices q). Accordingly, the difference between pre- and post-reform welfare equals:

V^h(q_1, y^h) − V^h(q_0, y^h).   (8.A.6)
Applying a first-order Taylor expansion around q_0 to V^h(q_1, y^h) gives:

V^h(q_1, y^h) − V^h(q_0, y^h) ≈ [∂V^h(q_0, y^h)/∂q_0] (q_1 − q_0).   (8.A.7)

Using Roy's identity (i.e. ∂V^h(q, y^h)/∂q = −(∂V^h(q, y^h)/∂y^h) · x^h), this amounts to:

V^h(q_1, y^h) − V^h(q_0, y^h) ≈ [∂V^h(q_0, y^h)/∂y^h] \sum_{j=1}^{n} x^h_{0,j} (q_{0,j} − q_{1,j}),   (8.A.8)
which is converted into a welfare cost expressed in monetary terms by multiplying the last expression by −1/(∂V^h(q_0, y^h)/∂y^h).^19 This gives indeed Eq. (8.A.5). In case producer prices are assumed to be fixed, Eq. (8.A.5) reduces to:

\sum_{j=1}^{n} x^h_{0,j} (t_{1,j} − t_{0,j}),   (8.A.9)

that is, the difference between post- and pre-reform indirect taxes paid by household h, assuming pre-reform quantities remain unaltered (cf. Eq. (8.6) of the 'fixed quantities' assumption in Section 8.3). An alternative measure is the difference between post- and pre-reform tax liabilities, using post-reform quantities:

\sum_{j=1}^{n} x^h_{1,j} (t_{1,j} − t_{0,j}).   (8.A.10)
We will now establish a relation between Eqs. (8.A.9) and (8.A.10) on the one hand, and two popular monetary measures of the welfare cost of price changes (e.g. as a consequence of an indirect tax reform), the compensating (CV) and equivalent variation (EV), on the other hand.
19 This division by the marginal utility of money is not an innocuous transformation when one intends to use this measure for making interpersonal welfare comparisons.
Bart Capéau, André Decoster and David Phillips
The CV is the monetary compensation a household or individual20 should receive in the post-reform situation, in order to be as well off as in the pre-reform situation. The EV is the amount of money the household would have to forego in the baseline, in order to be indifferent between the pre- and post-reform prices. Formally, the compensating and equivalent variation are implicitly defined as:
V^h(q_1, y^h + CV^h) = V^h(q_0, y^h)  and  V^h(q_0, y^h − EV^h) = V^h(q_1, y^h).   (8.A.11)
The expenditure function, E^h(q, u), is the minimal amount of money a household or individual needs in order to attain a welfare level u when commodity prices equal q. In this sense, it is a monetary measure of the welfare level u, measured at reference prices q, also known as a money metric utility. The expenditure function can be found by inverting the indirect utility function V^h(q, y^h) in y^h. Using this in Eq. (8.A.11), the following explicit definitions of the compensating and equivalent variation can be derived:

CV^h = E^h(q_1, V^h(q_0, y^h)) − y^h = E^h(q_1, V^h(q_0, y^h)) − E^h(q_1, V^h(q_1, y^h))  and
EV^h = y^h − E^h(q_0, V^h(q_1, y^h)) = E^h(q_0, V^h(q_0, y^h)) − E^h(q_0, V^h(q_1, y^h)),   (8.A.12)
where, for the last step in each of these two equations, use has been made of the fact that E^h(q, V^h(q, y)) = y for all q and y. That is, the minimal expenditure at prices q needed to obtain the maximal welfare level attainable with income y at those prices equals that income y. Thus CV^h and EV^h amount to the difference between the money metric measures of pre- and post-reform welfare, measured at reference prices q_1 and q_0, respectively. For a pure price reform, keeping disposable incomes y^h fixed, it holds that E^h(q_0, V^h(q_0, y^h)) = y^h = q_0' x^h_0 = q_1' x^h_1 = E^h(q_1, V^h(q_1, y^h)). Using this in Eq. (8.A.12) gives:

CV^h = E^h(q_1, V^h(q_0, y^h)) − q_0' x^h_0  and  EV^h = q_1' x^h_1 − E^h(q_0, V^h(q_1, y^h)).   (8.A.13)
20 To make welfare comparisons between individuals living in households and singles, monetary welfare measures such as the ones discussed here, which apply to households, are usually divided by an equivalence scale. This scale is intended to reflect how much more than a single person with income y a household should spend in order to guarantee all household members the same welfare as that individual can obtain with that income. Generally, this scale depends on the demographic composition of the household (number and age of household members).
In these equations, E^h(q_1, V^h(q_0, y^h)) and E^h(q_0, V^h(q_1, y^h)) are unknown. But since E^h(q, u) is the minimal expenditure level needed to reach the utility level u, and x^h_t guarantees a utility level V^h(q_t, y^h), though not necessarily at minimal expenditure when prices differ from q_t (t = 0, 1), it follows that:

q_1' x^h_0 ≥ E^h(q_1, V^h(q_0, y^h))  and  q_0' x^h_1 ≥ E^h(q_0, V^h(q_1, y^h)).   (8.A.14)
So, we can derive an upper (respectively lower) bound for CV^h (respectively EV^h) as follows:

CV^h = E^h(q_1, V^h(q_0, y^h)) − q_0' x^h_0 ≤ (q_1 − q_0)' x^h_0  and
EV^h = q_1' x^h_1 − E^h(q_0, V^h(q_1, y^h)) ≥ (q_1 − q_0)' x^h_1.   (8.A.15)
When producer prices, say p, are fixed, pre- and post-reform consumer prices are equal to p + t_0 and p + t_1, respectively. In that case, Eq. (8.A.9) serves as an upper bound for CV^h, and Eq. (8.A.10) as a lower bound for EV^h.
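With fixed producer prices, the tax-change measures in Eqs. (8.A.9) and (8.A.10) therefore bound the exact welfare measures. A minimal sketch (Python); the quantity and per-unit tax vectors below are invented for illustration:

```python
def first_order_cost(x, t0, t1):
    """Sum_j x_j * (t1_j - t0_j): Eq. (8.A.9) when x is the pre-reform bundle
    (an upper bound for CV^h), Eq. (8.A.10) when x is the post-reform bundle
    (a lower bound for EV^h)."""
    return sum(xj * (t1j - t0j) for xj, t0j, t1j in zip(x, t0, t1))

x0 = [4.0, 2.0, 1.0]     # pre-reform quantities of three goods
x1 = [3.8, 2.1, 1.0]     # post-reform quantities (demand responds to the reform)
t0 = [0.10, 0.05, 0.00]  # pre-reform per-unit taxes
t1 = [0.15, 0.05, 0.02]  # post-reform per-unit taxes

cv_upper = first_order_cost(x0, t0, t1)  # upper bound on CV^h
ev_lower = first_order_cost(x1, t0, t1)  # lower bound on EV^h
```

For a tax increase, demand substitutes away from the taxed goods, which is why the bound evaluated at post-reform quantities is the smaller of the two here.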
Using the Cobb–Douglas Indirect Utility Function and Expenditure Function to Calculate Welfare Effects

Fixed expenditure shares are consistent with consumers having a particular kind of preferences, namely preferences that can be represented by a Cobb–Douglas utility function. If households' preferences are actually of this form, the Cobb–Douglas expenditure and indirect utility functions can be used to calculate exact monetary measures of the welfare effects of indirect tax reforms. To see this, note that the Cobb–Douglas utility function is:

v^h(x^h) = \prod_{j=1}^{n} (x^h_j)^{s^h_j},   (8.A.16)

where s^h_j is the share of good j in total expenditure for household h. In log-form:

ln(v^h(x^h)) = \sum_{j=1}^{n} s^h_j ln(x^h_j).   (8.A.17)

Maximising this utility function subject to the budget constraint (q'x^h ≤ y^h, see Eq. (8.2) in the main text) yields:

x^h_j = s^h_j · y^h / q_j.   (8.A.18)
Plugging these optimal quantities into Eq. (8.A.17) gives the indirect utility associated with the direct utility function in Eq. (8.A.16):

V^h(q, y^h) = \sum_{j=1}^{n} s^h_j {ln(s^h_j) + ln(y^h) − ln(q_j)}.   (8.A.19)
Evaluating this equation at the initial set of tax rates t_0 (yielding the consumer price vector q_0), one can calculate the level of utility under the initial tax system, say v^0 = V^h(q_0, y^h). It is also possible to calculate utility under the new price vector q_1 when taxes change to t_1, to obtain an estimate of the effect of the reform on utility. To express this utility effect in monetary terms, one can use the expenditure function obtained from solving Eq. (8.A.19) for y^h, yielding:

E^h(q, u) = exp(u) \prod_{j=1}^{n} (q_j / s^h_j)^{s^h_j}.   (8.A.20)

For instance, using the new prices q_1 in this equation and combining it with the first line of Eq. (8.A.13), one obtains an exact expression for the compensating variation:

CV^h = E^h(q_1, v^0) − y^h = y^h [\prod_{j=1}^{n} (q_{1,j}/q_{0,j})^{s^h_j} − 1].   (8.A.21)
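Because Eq. (8.A.21) is in closed form, the exact CV under fixed expenditure shares is a one-liner. An illustrative sketch (Python); the income, shares and price vectors are made up:

```python
def cobb_douglas_cv(y, s, q0, q1):
    """Exact compensating variation under Cobb-Douglas preferences, Eq. (8.A.21):
    CV = y * (prod_j (q1_j / q0_j)**s_j - 1)."""
    ratio = 1.0
    for sj, q0j, q1j in zip(s, q0, q1):
        ratio *= (q1j / q0j) ** sj
    return y * (ratio - 1.0)

# A household spending an income of 100 in shares 0.5/0.3/0.2; the first good's
# consumer price rises by 10 per cent, the others are unchanged.
cv = cobb_douglas_cv(100.0, [0.5, 0.3, 0.2], [1.0, 1.0, 1.0], [1.1, 1.0, 1.0])
```

Here CV = 100·(1.1^0.5 − 1) ≈ 4.88, slightly below the first-order bound of Eq. (8.A.15), which would be 0.5·100·0.1 = 5, reflecting the substitution response built into the Cobb–Douglas demands.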
Engel Curves

Engel curves establish a relation between income and commodity demands. They are abbreviated demand functions, keeping commodity prices fixed. That is, they give the commodity bundles that maximise utility given the budget constraint for a fixed set of prices, but allowing household income to vary. Since they are abbreviated demand functions, it is worth keeping in mind that the income concept affecting demand is in fact a real income concept (income deflated by a price index). This fact usually does not surface in empirical specifications of Engel curves, since these are applied in cases where consumer prices are fixed, so that the difference between real and nominal income vanishes. But the real character of the income concept is exploited when using these Engel curves in indirect tax microsimulation models, and it is therefore made explicit in the following formal definition of these curves:

x^h = f^h(y^h / P^h(q), z^h, ε^h),   (8.A.22)
where z^h is a set of personal characteristics of h, ε^h is a set of n random disturbance terms, and P^h(q) is a price index function, that is, a function that is homogeneous of degree one (P^h(λq) = λP^h(q) for all λ ≥ 0) and satisfies P^h(q, …, q) = q for all q ≥ 0.
Normalising the baseline such that all consumer prices are equal to one, one can estimate Engel curves on a cross-section without price variation, which establishes a relation between (nominal) income at the period of observation and demand. The baseline quantities according to this estimated Engel curve, say x̂^h_0, are then equal to:

x̂^h_0 = f^h(y^h / P^h(q_0), z^h, e^h) = f^h(y^h, z^h, e^h),   (8.A.23)

where e^h is an n-vector of real numbers, serving as a particular realisation of the random terms ε^h. The last equality follows from the normalisation of the baseline consumer prices to one, and P^h(1, …, 1) = 1.
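In practice, f^h is estimated on a single cross-section with prices normalised to one. As a minimal illustration (Python, synthetic data), the sketch below fits a Working–Leser form, with the budget share linear in log income; that functional form is an assumption made here for the example, not the chapter's prescription:

```python
import math

def fit_simple_ols(xs, ys):
    """Closed-form OLS for y = alpha + beta * x with one regressor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return my - beta * mx, beta

# Synthetic cross-section: the budget share of food falls with log income.
incomes = [500.0, 1000.0, 2000.0, 4000.0, 8000.0]
food_shares = [0.52, 0.45, 0.39, 0.31, 0.25]
alpha, beta = fit_simple_ols([math.log(y) for y in incomes], food_shares)

def predicted_share(y):
    """Engel-curve prediction of the food budget share at (real) income y."""
    return alpha + beta * math.log(y)
```

After a reform, the nominal income y is replaced by real income y/P^h(q_1), which is exactly where the price index of Eq. (8.A.22) re-enters the calculation.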
To simulate the post-reform demand, an assumption has to be made about the price index function (since it could not be estimated from a cross-section with fixed prices). If we use a Divisia price index, P^h(q) = \prod_{j=1}^{n} q_j^{s^h_j}, where s^h_j is the budget share of expenditures on j by h, that is:

s^h_j = q_j x^h_j / y^h,

then the post-reform demand can be estimated by:

x̂^h_1 = f^h(y^h / P^h(q_1), z^h, e^h) = f^h(y^h / \prod_{j=1}^{n} q_{1,j}^{s^h_{1,j}}, z^h, e^h).   (8.A.24)

Post-reform shares are not known, however. A first-order Taylor expansion of y^h / \prod_{j=1}^{n} q_{1,j}^{s^h_{1,j}} around q_0 can be used instead:

y^h / \prod_{j=1}^{n} q_{1,j}^{s^h_{1,j}} ≈ y^h / \prod_{j=1}^{n} q_{0,j}^{s^h_{0,j}} − (y^h / \prod_{j=1}^{n} q_{0,j}^{s^h_{0,j}}) \sum_{j=1}^{n} s^h_{0,j} [(q_{1,j} − q_{0,j}) / q_{0,j}]
    = y^h (1 − \sum_{j=1}^{n} s^h_{0,j} d ln q_{0,j}),   (8.A.25)

where the last equality follows from the normalisation of the baseline prices q_0 to one, and hence \prod_{j=1}^{n} q_{0,j}^{s^h_{0,j}} = 1.

Welfare Effects with the QUAIDS Demand System

The Quadratic Almost Ideal Demand System (QUAIDS) is a generalisation of the Almost Ideal Demand System (AIDS) model that allows for
quadratic Engel curves. This demand system, developed in Banks et al. (1997), allows a good to be a luxury at one level of income and a necessity at another, a property these authors find to be of empirical relevance. The QUAIDS demand system is based on the following indirect utility function:^21

ln V(q, y) = {[(ln y − ln a(q)) / b(q)]^{−1} + λ(q)}^{−1},   (8.A.26)

where y is total expenditure, and a(q), b(q) and λ(q) are defined as:
ln a(q) = α_0 + \sum_{j=1}^{n} α_j ln(q_j) + (1/2) \sum_{j=1}^{n} \sum_{k=1}^{n} γ_{jk} ln(q_j) ln(q_k),   (8.A.27)

b(q) = \prod_{i=1}^{n} q_i^{β_i},   (8.A.28)

λ(q) = \sum_{j=1}^{n} λ_j ln(q_j),   (8.A.29)
where j = 1, …, n indexes the goods. Applying Roy's identity gives the following equation for s_i, the share of expenditure on good i in total expenditure, for each household:

s_i = α_i + \sum_{j=1}^{n} γ_{ij} ln(q_j) + β_i ln(y / a(q)) + (λ_i / b(q)) [ln(y / a(q))]^2.   (8.A.30)

Restrictions required for integrability can be imposed using linear restrictions on the parameters of the model:

(adding-up)  \sum_{j=1}^{n} α_j = 1,  \sum_{j=1}^{n} β_j = 0,  \sum_{j=1}^{n} γ_{jk} = 0 ∀k,  \sum_{j=1}^{n} λ_j = 0;

(homogeneity)  \sum_{k=1}^{n} γ_{jk} = 0 ∀j;

(symmetry)  γ_{jk} = γ_{kj} ∀j, k.
21 For simplicity of notation, we drop the superscripts h referring to the household in this section.
Consumption and Indirect Tax Models
The model allows for household demographics to affect demands in a fully theoretically consistent manner. Demographics enter as taste-shifters in the share equations and, to maintain integrability, they are considered to affect the α_j-terms in ln a(q):

ln a(q) = α_0 + \sum_{j=1}^{n} {α̃_j + \sum_{m=1}^{M} α_{jm} z_m} ln(q_j) + (1/2) \sum_{j=1}^{n} \sum_{k=1}^{n} γ_{jk} ln(q_j) ln(q_k),   (8.A.31)

which gives us the following new adding-up conditions that supersede \sum_{j=1}^{n} α_j = 1:

\sum_{j=1}^{n} α̃_j = 1,  \sum_{j=1}^{n} α_{jm} = 0 ∀m.   (8.A.32)
Once one has estimated a fully specified demand system, one can use it to estimate the impact of price and tax changes on consumer welfare using the associated expenditure function. Using the compensating or equivalent variation as a welfare measure, the effect is:

CV ≡ E(q_1, v^0) − E(q_0, v^0) = E(q_1, v^0) − y  and  EV ≡ E(q_1, v^1) − E(q_0, v^1) = y − E(q_0, v^1),   (8.A.33)

where v^0 (respectively v^1) is the value of the utility index in the baseline (respectively in the post-reform situation), q_0 is the initial price vector, q_1 is the new price vector, and E(q_t, u) (t = 0, 1) is:

E(q_t, u) = a(q_t) exp{b(q_t) [(ln u)^{−1} − λ(q_t)]^{−1}},   (8.A.34)

where ln u can be calculated using the indirect utility function, for u = v^0 and u = v^1, respectively.
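The pieces above can be assembled into a small welfare calculator. The sketch below (Python) hard-codes an invented two-good parameterisation chosen only so that the adding-up, homogeneity and symmetry restrictions hold; it is an illustration of Eqs. (8.A.26)–(8.A.34), not an estimated system:

```python
import math

# Invented two-good QUAIDS parameters satisfying the integrability restrictions.
ALPHA0 = 0.0
ALPHA = [0.6, 0.4]                      # sum to 1
BETA = [0.05, -0.05]                    # sum to 0
GAMMA = [[0.05, -0.05], [-0.05, 0.05]]  # symmetric, rows and columns sum to 0
LAM = [0.01, -0.01]                     # sum to 0

def ln_a(q):
    """ln a(q), Eq. (8.A.27)."""
    n = len(q)
    lq = [math.log(v) for v in q]
    return (ALPHA0 + sum(ALPHA[j] * lq[j] for j in range(n))
            + 0.5 * sum(GAMMA[j][k] * lq[j] * lq[k]
                        for j in range(n) for k in range(n)))

def b(q):
    """b(q), Eq. (8.A.28)."""
    return math.prod(q[i] ** BETA[i] for i in range(len(q)))

def lam(q):
    """lambda(q), Eq. (8.A.29)."""
    return sum(LAM[j] * math.log(q[j]) for j in range(len(q)))

def ln_v(q, y):
    """ln V(q, y), Eq. (8.A.26), rewritten as 1 / (b/(ln y - ln a) + lambda)."""
    return 1.0 / (b(q) / (math.log(y) - ln_a(q)) + lam(q))

def expenditure(q, ln_u):
    """E(q, u), Eq. (8.A.34), taking ln u as its second argument."""
    return math.exp(ln_a(q) + b(q) / (1.0 / ln_u - lam(q)))

def cv(q0, q1, y):
    """Compensating variation, Eq. (8.A.33): E(q1, v0) - y."""
    return expenditure(q1, ln_v(q0, y)) - y

# Duality check: spending E(q, V(q, y)) at prices q recovers income y.
assert abs(expenditure([1.0, 1.0], ln_v([1.0, 1.0], 10.0)) - 10.0) < 1e-9
```

For a 10 per cent rise in the first good's price, `cv([1.0, 1.0], [1.1, 1.0], 10.0)` returns a positive amount which, consistently with Eq. (8.A.15), does not exceed the first-order bound (q_1 − q_0)'x_0.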
CHAPTER 9
Macro-Micro Models
John Cockburn, Luc Savard and Luca Tiberti
9.1. Introduction

Since the late seventies, researchers and policy makers have sought to analyse and simulate the impacts of macro policy reforms on income distribution. Concerns such as the social implications of structural adjustment policies (see, e.g. Cornia, Jolly, & Stewart, 1989), the poverty/inequality effects of trade liberalization (see, e.g. Anderson, Cockburn, & Martin, 2010), pro-poor or inclusive growth (see, e.g. Annabi, Cissé, Cockburn, & Decaluwé, 2008) and the poverty impacts of the global food and, subsequently, financial and economic crises (see, e.g. Cockburn, Fofana, & Tiberti, 2012; Wodon & Zaman, 2009) have driven this research agenda. This type of analysis requires tools that combine both macro and micro frameworks. The integration of microsimulation techniques within a computable general equilibrium (CGE) model constitutes such a tool. While CGE models focus on the sectoral, macro and price effects of major policy reforms, they generally fail to adequately capture distributive impacts. Microsimulation techniques, on the other hand, focus on household- and individual-specific distributive effects, but are generally confined to micro reforms, as they are unable to model the general equilibrium effects of macro reforms, notably on the prices of factors and products, as well as on other macro variables. Combining these tools allows the analyst to track the impact of a major policy change or external shock on macroeconomic or sectoral variables down to the change in income or welfare at the household level. The flexibility of both tools has allowed, inter alia, for distributional impact analysis of various policies and programmes in the context of the Millennium Development Goals
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293008
(MDGs) and the Monterrey Consensus (see McGill, 2004; Ortega Diaz, 2011; Vos, Sánchez, & Kaldewei, 2008).
9.1.1. Context

The primary reason for introducing microsimulations into a CGE model is to analyse and exploit the individual heterogeneity of the sampled population, in terms of behaviours and factor endowments, when evaluating the impacts of a shock with significant general equilibrium effects, whether this be an external shock (e.g. a change in world commodity prices) or an internal policy change. In fact, individuals are affected by, and react to, such shocks differently, notably according to their sources of income and consumption patterns. If such heterogeneity is not fully taken into account, distributive and poverty results may be incorrect. It should also be stressed that micro analysis plays other roles in CGE microsimulation approaches. First, it may serve to calibrate some macro data (e.g. the labour force by production sector or by skill category). Second, it can be used to estimate the parameters of key behavioural functions in the CGE model. For example, price and income elasticities can be estimated using data available in a typical household budget survey.1 Most of the time, and this is a major critique of most CGE models, these parameters are 'borrowed' from the literature for nearby (or not so nearby) countries and rarely correspond exactly to the product categories, functional forms and year used in the model. Third, in the case of macro shocks that have micro foundations, the micro module can be used to precisely estimate the shock to introduce into the CGE model. This is, for example, the case of tax reforms involving changes in exemptions or deductions, in which case the microsimulation model is useful to estimate changes in effective tax rates, which can then be transmitted into the CGE model (Feltenstein, Lopes, Porras-Mendoza, & Wallace, 2013). Finally, the reconciliation of the databases for the micro and CGE analysis can help flag problems in the CGE database.

9.2. Methodological characteristics and choices

There are numerous ways that microsimulations have been used in conjunction with a CGE model in the literature.2 In this section, we classify and present the six principal approaches. In the subsequent section, we
1 According to Peichl (2009, p. 309), 'Typical variables and parameters used in this bottom-up linkage include labour supply elasticities, income components, average and marginal tax rates, consumption patterns, income levels and tax revenues.'
2 For previous reviews, see Davies (2009) and Colombo (2010).
present numerous examples of applications of the various approaches for illustrative purposes. In the first two approaches, microsimulations are integrated directly into the CGE modelling framework, whereas the CGE and microsimulation models are kept separate in the four other approaches. The integrated approaches are distinguished by their use of representative or actual households. The other four approaches are distinguished by the way the CGE and microsimulation models are linked (CGE results fed down to the microsimulations, or vice versa, or a looping technique) and by the nature of the microsimulation model itself (with or without behavioural responses).
9.2.1. The representative household approach

The oldest approach to conducting 'micro' analysis in a CGE framework is the representative household (RH) approach.3 This approach breaks households down into a number of categories based on social (e.g. education or sex of the household head), economic (e.g. income decile or activity/skill level of the household head) or geographic (i.e. rural/urban, regional) criteria. The number of categories can range from a few to many. Although it is not strictly a CGE microsimulation approach, given that there is no modelling of specific individuals/households, it was a step in the direction of integrating more micro considerations into CGE models and served as a starting point for other developments. This approach has been, and continues to be, used widely. When combined with a household-level income-expenditure database, this approach can be used to perform poverty analysis. The change in income of the RH simulated by the CGE model is applied to the incomes of all corresponding households in the database according to various hypotheses.4 Some have proposed applying the percentage variation in income of the RH to all corresponding households (e.g. Aka, 2004; Decaluwé, Dumont, & Savard, 1999). This is a strong assumption that eliminates all heterogeneity between households within a given category, leaving only inter-category changes.5 Other authors propose to model the distribution of income with a lognormal function and generate the change in income distribution with
3 Key early contributions include Adelman and Robinson (1978) for South Korea, Taylor and Lysy (1979) for Brazil and Dervis, de Melo, and Robinson (1982).
4 Among the first applications were Adelman and Robinson (1978) for South Korea, de Janvry, Sadoulet, and Fargeix (1991) for Ecuador and Chia, Wahba, and Whalley (1994) for Côte d'Ivoire.
5 Numerous studies have shown that these can be as large as or larger than between-group changes, for example, Huppi and Ravallion (1991), Savard (2005), Cockburn (2006) and Cogneau and Robilliard (2008).
the theoretical relationship between the mean and the variance of the lognormal function. This assumes that the change in income of the RH is equal to the change in the average income of the corresponding households.6 This is also a strong assumption. The RH in a CGE model is the sum of the incomes of the corresponding households, and thus the income and expenditure structure of the group disproportionately reflects that of the richest households. As a result, the change in aggregate income likely has little correlation with the change in average income of the group. In the late 1990s, Decaluwé, Patry, and Savard (1998) and Decaluwé, Patry, Savard, and Thorbecke (1999) reviewed CGE models with poverty analysis. They proposed the use of more flexible functional forms to model income distribution and an endogenous poverty line to capture heterogeneity in consumption. In the following years, a large number of authors followed this strand of research.7 In an application to the Philippines, Savard (2005) compares the RH approach with the TD/BU approach presented below and finds that the results of poverty and income distribution analysis can be completely reversed when within-group distributional effects are taken into account. According to Bourguignon, Robilliard, and Robinson (2005), the RH approach can lead to a significant underestimation of changes in income distribution. Another drawback of this approach is that the modeller is constrained to a single classification of households in the CGE model and is unable to explore impacts according to alternative classifications (e.g. female-headed households).
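The simplest variant, applying the RH's percentage income change uniformly within each group, can be sketched as follows (illustrative Python; the group labels, simulated changes and survey records are all invented, and real applications work from full survey files):

```python
# CGE-simulated percentage income changes for two representative households
# (hypothetical numbers): rural incomes fall 2%, urban incomes rise 3%.
rh_income_change = {"rural": -0.02, "urban": 0.03}

# Toy micro database: (household id, group, baseline income).
households = [(1, "rural", 800.0), (2, "rural", 2500.0), (3, "urban", 1200.0)]

def apply_rh_change(households, rh_income_change):
    """Scale every household's income by its group's simulated change.
    Within-group heterogeneity of impacts is lost by construction."""
    return [(hid, grp, y * (1.0 + rh_income_change[grp]))
            for hid, grp, y in households]

post = apply_rh_change(households, rh_income_change)
```

Every rural household loses exactly 2 per cent here, which is precisely the strong homogeneity assumption criticised in the text.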
9.2.2. The fully integrated approach

To solve this problem, the fully integrated (FI) approach consists of incorporating a large number, or all, of the individual households from a household survey directly into the CGE model. Thus, RH categories are replaced by actual households that can number in the tens of thousands. This approach, which has become possible with improvements in computer processing capacity, thus captures within-group distributional effects, as each household in the survey is affected according to its specific income structure and expenditure pattern. According to Bourguignon, Bussolo, and Pereira Da Silva (2008), it is the most theoretically sound approach to conducting microsimulations in a CGE framework. This
6 For example, Adelman and Robinson (1979) for Korea, de Janvry et al. (1991) for Ecuador, Azis, Azis, and Thorbecke (2001) for Indonesia, and Colatei and Round (2000) for Ghana.
7 See Table 9.1 in the following section for a list.
approach was first proposed by Decaluwé et al. (1999) and applied, inter alia, by Cogneau and Robilliard (2007), Gortz, Harrison, Neilsen, and Rutherford (2000), Cockburn (2006) and Boccanfuso, Estache, and Savard (2009). The CGE model structure is identical to that of the RH approach, but the number of household accounts is much larger. Integrating these households into the CGE model requires that the two databases be perfectly consistent, which is generally not the case without some adjustments. First, one must reclassify and aggregate the micro-data on income sources and consumption goods into the categories used in the CGE model. The next step involves adjusting each household's income to match its total expenditure and savings. Finally, the aggregated household incomes by source and consumption by product must be balanced against the corresponding income payments and final consumer demand in the CGE database.8 Compared to the RH approach, the FI approach makes it possible to perform theoretically sound distributional analysis for different household classifications, capturing intra-group distributional changes, as the individual household results can be organized as desired. Based on this, one might expect widespread use of the approach, which is not the case for a number of reasons. First, some cite the fact that the data reconciliation procedure can be quite time intensive (e.g. Rutherford & Tarr, 2008). However, it could be argued that this reconciliation process is salutary in bringing out inconsistencies and errors in both the micro and CGE databases and in disciplining the analyst to make judgments about the necessary adjustments. Second, FI models can quickly become very large, as the number of equations increases with the number of households. Indeed, in the early stages, models with over 5,000 households and more than ten sectors were difficult and slow to solve numerically (see Chen & Ravallion, 2004).
However, as computing processors have improved, this is less of a problem. A third drawback, raised by Savard (2003) and Bourguignon and Savard (2008), concerns the limitations imposed by the CGE model on the behavioural functional forms that can be used. Many analysts are interested in investigating the impact of policies that involve discrete choice or regime switching by individuals or households. For example, following a macro shock an individual may move in or out of employment, or between the formal and informal sectors. These discrete types of behaviour cannot be captured in standard CGE models.
8 For social accounting matrix (SAM)-based CGE models, Lemelin, Fofana, and Cockburn (2013) provide a complete review of balancing techniques such as RAS or cross-entropy approaches.
It is this concern that has made sequential approaches to integrating microsimulations into a CGE model the most popular, as recommended by authors such as Cogneau and Robilliard (2007), Savard (2003) and Bourguignon et al. (2005). In the following sub-sections, we explore four such methods. In the first two methods, results from the CGE model are fed down to a separate microsimulation model; these two methods differ simply in whether the microsimulation model integrates behavioural responses or not. In the third method, it is instead the results from the microsimulation model that are fed up to the CGE model, whereas the final method establishes a loop to ensure consistency between the two models.
9.2.3. The top-down micro-accounting approach

The most common method of conducting microsimulations in a CGE framework is the top-down micro-accounting (TD-MA) approach.9 The approach is formally presented by Chen and Ravallion (2004) and has been extensively applied in recent years.10 The general idea is to feed the product and factor price changes simulated by a CGE model into a microsimulation household model. In order to do this, categories of product and factor prices in the CGE model must be mapped to prices in the microsimulation model. Changes in the CGE factor prices are transferred to the microsimulation model, leading to household-specific income changes that vary according to factor endowments (labour, capital and other assets). This income change is combined with changes in consumer prices from the CGE model to compute welfare changes that take into account household-specific consumption patterns. This welfare change can then be used to analyse distributional impacts. This approach allows for rich distributional and poverty analysis based on full household survey data. It is an accounting approach in that, in the microsimulation model, there are no behavioural responses to these price changes. As such, this approach is particularly useful for analysing the immediate or short-term impacts of a shock, before agents are able to adjust their behaviour.
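The accounting step for one household can be sketched as below (Python). This is a simplified money-metric calculation, income change minus the consumer-price effect evaluated at baseline budget shares, in the spirit of, but not identical to, the exact accounting in Chen and Ravallion (2004); all endowment shares and simulated price changes are invented:

```python
def td_ma_welfare_change(income, factor_income_shares, factor_price_changes,
                         budget_shares, consumer_price_changes):
    """Money-metric welfare change for one household:
    income * (sum_f s_f * dlnw_f - sum_j b_j * dlnp_j),
    with no behavioural response (quantities and endowments held fixed)."""
    d_income = sum(s * dw for s, dw in zip(factor_income_shares,
                                           factor_price_changes))
    d_prices = sum(b * dp for b, dp in zip(budget_shares,
                                           consumer_price_changes))
    return income * (d_income - d_prices)

# Hypothetical household: income mostly from unskilled labour; the CGE model
# simulates a 4% unskilled wage rise, a 1% fall in capital returns and a 2%
# food price rise.
dw = td_ma_welfare_change(
    income=1000.0,
    factor_income_shares=[0.8, 0.2],   # unskilled labour, capital
    factor_price_changes=[0.04, -0.01],
    budget_shares=[0.6, 0.4],          # food, non-food
    consumer_price_changes=[0.02, 0.0],
)
```

Run over every household in the survey, such a calculation yields the household-specific welfare changes used for the distributional and poverty analysis described above.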
9 Some refer to this approach as the sequential (Boccanfuso & Savard, 2007; Lay, 2010), macro-micro layered (Peichl, 2009), non-parametric (Vos & Sanchez, 2010) or arithmetic (Clauss & Schubert, 2009) approach.
10 Among the early applications of this approach are Vos and De Jong (2003) and King and Handa (2003). More recent examples are Boccanfuso and Savard (2007) for Mali, Abdelkhalek, Boccanfuso, and Savard (2009) for Morocco, and Ahmed and O'Donoghue (2010) for Pakistan.
Besides the lack of a behavioural response, this approach is also criticized for the absence of a micro-feedback effect to the CGE model.11 This criticism is discussed in more detail in the final paragraph of the following sub-section. The fact that the approach does not require any reconciliation between the CGE and micro databases can be seen as either an advantage or a drawback. Where adequate data are not available for full national-level data reconciliation (for example, in the case of a non-nationally representative household survey), this approach can still be applied and give information on household-specific impacts. Where such data are available, some effort to reconcile the data in order to detect any problems is desirable, even if it is not required by the approach.
9.2.4. The top-down with behaviour approach

The top-down with behaviour (TD-WB) approach integrates, at the individual/household level, behavioural responses in the microsimulation model to the price changes fed down from the CGE model.12 This approach was first proposed by Bourguignon et al. (2005). Its main contribution is to allow for more heterogeneity between households and hence richer income distribution analysis. Behavioural parameters are typically obtained through reduced-form econometric estimates using the household survey data. Behaviour typically encompasses only consumer choices and labour supply, but could include other decisions involving savings, human capital investments, crop production, etc. This focus can be justified by the fact that labour activities constitute the main source of household income and that, especially in developing countries, most income goes to consumption (the marginal saving rate being relatively low). Also, in the immediate and short term, these components are likely to react fastest and most strongly to policy changes or exogenous shocks. As with the FI and TD-MA approaches, the TD-WB approach captures within-group distributional changes, but it is richer and more flexible in terms of household behaviour.
11 This issue has been raised in two literature reviews of macro-micro modelling for poverty analysis, namely Hertel and Reimer (2005) and Bourguignon and Spadaro (2006), and more recently in Bourguignon and Savard (2008), who provide an alternative that captures this feedback effect. In this last paper, the authors link the two models through consumption and labour supply; the two variables need to converge at the end of the resolution process. The marginal propensity to save is used to balance the budget constraint of the aggregate household in the CGE model. As a robustness check, other variables were used, such as the income tax rate, but this did not have an impact on the distributional analysis given the small adjustments to this variable.
12 See Hérault (2010) for a comparison between reweighting and behavioural approaches.
Downloaded by KAI NAN UNIVERSITY At 11:49 24 July 2016 (PT)
282
John Cockburn, Luc Savard and Luca Tiberti
One criticism of this approach is the potential inconsistency between the behaviour of the aggregate household(s) in the CGE model and that of the individual households in the microsimulation model. Boeters and Savard (2013) show how aggregate behaviour can be made to mimic micro household behaviour as a first approximation. This can be done by running a simulation in the microsimulation model with endogenous labour supply (e.g. a 1% increase in wage rates), computing the elasticity of the response based on this simulation, and introducing an aggregate labour supply function with this elasticity into the CGE model. As with the TD-MA approach, this approach is also open to criticism for the lack of feedback effects from the microsimulation model to the CGE model.13 The importance of the feedback effect will depend on the aggregation error.14 If the behavioural functions allow for perfect aggregation, the results of the two models will be consistent and there will be no feedback effect. Household functions do not aggregate perfectly if they contain fixed (or exogenous) shares of consumption, savings or taxation that differ between households. Since these shares are generally calibrated to reproduce the average shares of the reference period in micro household models, the aggregation of the micro behavioural functions will not scale up to that of the corresponding aggregate household in the CGE model. In addition, when the microsimulation model includes a discrete regime switching function, such as entry into or exit from the labour market, consistency with the CGE model, which cannot include such behaviour, is lost.15

9.2.5. The bottom-up approach

In contrast to the two preceding top-down approaches, the link goes from the microsimulation model to the CGE model in the bottom-up (BU) approach. Hence, the impacts of a given shock or policy reform are first modelled in the microsimulation model.
These changes are then aggregated and fed into the CGE model to analyse the macro/sectoral impacts. In general, authors apply this approach to analyse policies targeting individual labour supply decisions. For example, a policy could be designed to move individuals off social assistance and into the labour force. As the first-round effects are on labour supply, the econometrically estimated microsimulation model is best suited to capture the direct effect on labour supply. The changes in labour supply are then fed into the CGE model as an exogenous shock. However, the approach can also be applied to shocks directly affecting other types of household/individual behaviour, such as consumption and savings. Once again, this approach lacks any feedback effect, in this case from the CGE model back to the microsimulation model. Moreover, to make the link between the two models, it must be possible to aggregate the micro household behaviour. For example, a (nested) multinomial logit specification of the individual direct utility function used to model discrete labour choices can be aggregated up to a CES utility function (Peichl, 2009). Savard (2010) also discusses this aggregation issue where an almost ideal demand system (AIDS) is used in a CGE microsimulation modelling context.

13. This critique of the top-down approaches is highlighted in two literature reviews of macro-micro modelling for poverty analysis: Hertel and Reimer (2005) and Bourguignon and Spadaro (2006).
14. For a detailed discussion of the aggregation of micro household behaviour to a representative household, consult Deaton and Muellbauer (1980). Bourguignon and Savard (2008) also discuss this issue in a CGE framework.
15. For further details, see Bourguignon et al. (2005) and the Appendix of this chapter.

9.2.6. The iterative approach

To address the lack of feedback effects in the two TD approaches and the BU approach, Savard (2003) proposed an iterative approach (IA).16 In his application, the TD-WB approach is extended by adding a bidirectional link between the CGE model and the microsimulations to obtain a top-down/bottom-up (TD/BU) approach. As in the two TD approaches, the downward link from the CGE model to the microsimulations is performed with good and factor prices. The feedback to the CGE model can be performed with one or more variables, typically consumption and labour supply.17 New results from the CGE model are then fed down to the microsimulations, which are updated in turn, and so on. The iteration process ends when the results from the two models are fully consistent. A formal presentation of this approach can be found in Bourguignon and Savard (2008). Whereas the concern in Savard (2003) is to feed microsimulation results back to the CGE model, Tiberti et al. (2013) do just the opposite in what could be termed a bottom-up/top-down (BU/TD) version of the IA.
Indeed, where policies are predominantly targeted at modifying micro-economic behaviour (for example, labour supply or consumption) and where these adjustments are sufficient in scale to generate general equilibrium spill-over effects, a BU-TD approach is recommended. Tiberti et al. simulate the economy-wide effects of different possible reforms of the existing Child Support Grant for South African children. The microsimulation module is first used to estimate impacts on adult labour supply and consumption, which are fed (as exogenous shocks, together with the total cost of the reforms estimated with the micro-data) into the CGE model to study the general equilibrium effects of the cash grant and the resulting increase in public spending, as in a standard BU approach. However, the CGE results (changes in consumer prices, wages, profits and employment) are then fed back to the microsimulation model to estimate new real consumption (including the cash grant) and to run poverty and distributive analyses. Thus, household real income is affected not only by the direct change in the transfer but also by the general equilibrium effects generated by the social protection reform. In the same vein, Debowicz and Golan (2014) study the direct and indirect (or spill-over) effects on child labour supply induced by the Mexican Oportunidades programme. Through the BU-TD IA, they capture two main transmission channels: occupation and wage effects. The first effect is captured by the microsimulations, whereas the second is captured by the CGE model after the employment effect is transferred (as an exogenous shock) to the macro model. The authors find that the distributive effects of the programme are substantially greater than under partial equilibrium analysis. Like all of the non-RH approaches (FI, TD-MA, TD-WB and BU), the IA captures within-group income distributional changes. It also includes feedback effects, a characteristic that is shared only with the FI approach. However, the IA, like the other sequential approaches, offers much more flexibility in the micro behaviour than the FI approach.

16. This approach was subsequently applied by Aaberge, Colombino, Holmøy, Strøm, and Wennemo (2007), de Souza Ferreira Filho and Horridge (2004), Muller (2004), Savard (2005, 2010), Rutherford and Tarr (2008), Arntz, Boeters, Gürtzgen, and Schubert (2008), and Mussard and Savard (2010, 2012).
17. In Savard (2003), the upward link is performed with two types of labour supply and with consumption. In Savard (2005), the link is performed with only one variable, namely consumption.
For example, one can include discrete choice behavioural functions in the microsimulation model to enrich the distributional analysis and integrate more heterogeneity at the household or individual level. In this regard, the IA requires more consistency between household behaviours in the CGE and microsimulation models than the other sequential approaches, given the constraint of obtaining a converging solution. The main shortcoming of this technique is that convergence is not guaranteed and must be verified for each simulation. An illustration of the difficulties in obtaining a converging solution can be found in Savard (2010), where use of an AIDS function with identical parameters in the CGE and microsimulation models led to an infeasible solution.

9.2.7. Choice of approach

Faced with this multitude of approaches, it can be a challenge to select the appropriate approach for conducting microsimulations in a CGE framework. While the RH approach is the least attractive due to its failure to capture intra-group distributional effects, the choice between the other approaches depends on a variety of factors. A first factor to consider is data availability. The sequential approaches require a rich micro database in order to econometrically estimate the microsimulation model. The FI approach requires internal consistency in the micro-data (i.e. income equal to expenditure plus savings), as well as consistency between the CGE and micro databases. While some adjustments are inevitable, if these become too significant, the credibility of the resulting model is put into question. The second important criterion is the research question at hand. If one is to analyse a policy in which regime switching behaviour, such as entry/exit from employment, figures prominently, a sequential approach is required. Where micro behaviour is less complex, the FI approach may be preferable. A third factor is the time frame available. The TD-MA approach is less time-consuming to implement and more appropriate if there is a tight time constraint. Another factor is the human resources available for the project. The TD-WB and IA approaches require expertise in both CGE modelling and econometric analysis. Generally, a full research team including experts in both these areas is required for these approaches. While computer or software limitations are increasingly irrelevant, they may apply in very sophisticated models or in cases where powerful computers are not available. To the best of our knowledge, the only example of validation of CGE microsimulation models is Ferreira, Leite, Pereira da Silva, and Picchetti (2008), who compare predicted changes in occupations, earnings and incomes due to macroeconomic shocks with actual changes between 1998 and 1999 in Brazil.18 They find that the CGE microsimulation model (a TD-WB) correctly predicts the observed direction and broad pattern of incidence, whereas a CGE with RH sometimes even fails to predict the 'true' direction of changes.
The difference between simulated and actual changes can be attributed solely to sampling errors, which is not the case for the CGE-RH. This said, CGE microsimulation models are generally used not to provide forecasts, but rather to produce 'what if' scenarios. This is achieved by comparing simulation results with a baseline scenario. Naturally, we would expect differences from actual outcomes to increase with the length of the simulation period and with unexpected shocks that are not included in the baseline and simulation scenarios.
18. Dixon and Rimmer (2013) provide an overview of validation issues in the wider CGE (without microsimulation) literature.
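The TD/BU iteration of Section 9.2.6 can be illustrated with a deliberately stylised sketch. Everything below is hypothetical: the two functions stand in for a CGE model (wages fall as aggregate labour supply rises) and a microsimulation model (each individual's labour supply responds to the wage), and the loop stops when the two models' aggregates are mutually consistent; none of it comes from an actual model in the literature.

```python
# Toy illustration of the iterative top-down/bottom-up (TD/BU) approach.
# Both "models" below are hypothetical stand-ins, not real CGE/micro models.
import numpy as np

rng = np.random.default_rng(0)
wage_base = rng.lognormal(mean=2.0, sigma=0.5, size=1000)  # micro sample

def cge_price(agg_labour, shock=0.10):
    # Stand-in CGE model: the equilibrium wage falls as aggregate
    # labour supply rises (downward-sloping labour demand), after a shock.
    return (1.0 + shock) * agg_labour ** -0.3

def micro_labour(price):
    # Stand-in microsimulation: each individual's labour supply responds
    # to the wage with an assumed elasticity of 0.2.
    return (price * wage_base) ** 0.2

labour = micro_labour(1.0)
for it in range(100):
    price = cge_price(labour.mean())     # top-down: CGE prices to micro
    new_labour = micro_labour(price)     # micro response to new prices
    if abs(new_labour.mean() - labour.mean()) < 1e-8:
        break                            # the two models are consistent
    labour = new_labour                  # bottom-up: micro aggregate to CGE
print(f"converged after {it} iterations at price {price:.4f}")
```

In this toy case the mapping is a contraction, so the loop converges in a handful of iterations; as the chapter notes, in real applications convergence is not guaranteed and must be verified for each simulation.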
9.2.8. Data considerations

For all approaches, the normal databases for constructing and calibrating a CGE model are required, typically an up-to-date social accounting matrix (SAM). In addition, estimates of key external parameters (for example, price and income elasticities) must be available for the country, borrowed from a similar country (based on product categories and functional forms similar to those of the CGE model) or estimated directly from household survey data. A household survey with detailed income/expenditure data is required to carry out distributive analysis in the RH approach and to implement all other approaches. Additional specialised micro-data may also be required for specific research issues, such as the impact of labour market policies. For all approaches, the CGE and micro databases must be mapped. For example, all the individual products identified in the household expenditure data must be linked to a specific product category in the CGE model. The same is true of labour and capital categories and other income sources (e.g. types of transfers). In this way, simulated changes in variables from the CGE model can be applied to their counterparts in the microsimulations, or vice versa in the case of the BU approach. In the TD linkage, changes in consumer prices, wages, returns to capital and per worker revenues from self-employment activities are the most common variations fed into the microsimulation model. When labour supply behaviour is introduced (as in the TD-WB approach), changes in the employment levels of the different types of workers are also transmitted. For consistency reasons, these changes should be fed into the microsimulation model in such a way that the average variations predicted by the CGE model are also reproduced in the microsimulation model. As discussed particularly for the FI approach, the integration of microsimulations into a CGE framework requires, or at least incites, reflection on the consistency of the CGE and micro databases.
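The consistency requirement just described (micro-level changes reproducing the CGE-predicted average variation by worker type) can be sketched as a simple rescaling. The worker types, wage changes and weights below are all invented for illustration; the only point is the adjustment step.

```python
# Hypothetical sketch of feeding CGE wage changes into micro-data so that
# the weighted average change per worker type matches the CGE prediction.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
worker_type = rng.choice(["skilled", "unskilled"], size=n)
wage = rng.lognormal(2.0, 0.4, size=n)
weight = rng.uniform(50, 150, size=n)          # survey weights

# Illustrative CGE-simulated proportional average wage changes by type
cge_change = {"skilled": 0.05, "unskilled": -0.02}

new_wage = wage.copy()
for t, g in cge_change.items():
    m = worker_type == t
    # heterogeneous micro-level changes scattered around the CGE average...
    micro = wage[m] * (1.0 + g + rng.normal(0.0, 0.01, m.sum()))
    # ...rescaled so the weighted average change equals the CGE result
    target = (1.0 + g) * np.average(wage[m], weights=weight[m])
    new_wage[m] = micro * target / np.average(micro, weights=weight[m])
```

After the rescaling, the weighted mean wage of each worker type has changed by exactly the CGE-predicted proportion, while individual heterogeneity around that mean is preserved.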
For example, the CGE database may be developed from various sources, including national accounts, labour market surveys, producer surveys, government financial accounts and household surveys, with a fair amount of adjustment and judgement. Nothing guarantees that total final consumption, for example, will match its value estimated from the household survey, but in general at least the socio-economic structure of the economy (e.g. household consumption shares and the employment shares of different types of workers) should be fairly consistent across the two data sets. The same is true for total factor incomes, transfers, savings, etc. The confrontation of these two databases can be demanding but, if conducted systematically, should lead to an improvement in the quality of both. The case of recursive dynamic CGE models poses additional challenges. The growth in factor endowments (capital/assets and labour) in the CGE model needs to be transmitted to the microsimulation model to perform credible distributional analysis. This can be done in various ways, such as introducing capital accumulation and labour endowment functions,19 and reweighting the household sample to replicate the demographic evolution of the population, as done in Robichaud, Tiberti, and Maisonnave (2013).
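The reweighting step mentioned above can be sketched with a crude one-dimensional post-stratification: weights are rescaled within each age group so that the weighted population matches an external projection. This is a simplified stand-in for proper calibration methods such as Deville and Särndal (1992); the age groups and projection figures are invented for illustration.

```python
# Hypothetical static-ageing sketch: rescale survey weights so that the
# weighted population by age group reproduces an external projection.
import numpy as np

rng = np.random.default_rng(2)
age_group = rng.choice(["0-14", "15-64", "65+"], size=2000, p=[0.3, 0.6, 0.1])
weight = np.full(2000, 50.0)                    # base survey weights

# Illustrative projected population totals for the simulation year
projection = {"0-14": 27000.0, "15-64": 64000.0, "65+": 12000.0}

new_weight = weight.copy()
for g, target in projection.items():
    m = age_group == g
    new_weight[m] *= target / weight[m].sum()   # post-stratification ratio
```

With several calibration margins (age, region, education), the single ratio adjustment would be replaced by an iterative raking or a Deville-Särndal calibration estimator, but the idea of matching projected totals is the same.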
9.3. Uses and applications

CGE microsimulations are used to examine a wide variety of research issues involving both macro and micro components. Typically, they are used to explore the distributive and other micro impacts of major shocks with economy-wide (macro) scope. However, in some cases (especially the BU approach) they can be used to explore the macro/sectoral effects of major shocks or policy reforms that act primarily at the micro-level (e.g. labour market and social protection policies/programmes). The choice of CGE microsimulation technique has, to some degree, been linked to the research issues addressed. Table 9.1 provides a broad overview of examples of the types of issues addressed in CGE microsimulation models. Let us now look in more detail at the key types of analyses conducted using CGE microsimulations.

9.3.1. Structural adjustment programmes

One of the earliest studies on the distributive impacts of structural adjustment programmes was a RH model developed by Bourguignon, Michel, and Miqueu (1983) for Venezuela. Later, in the early 1990s, the OECD sponsored further RH work by Thorbecke (1991), de Janvry et al. (1991), Bourguignon, de Melo, and Suwa (1991) and Morrisson (1991) to analyse the impact of structural adjustment programmes on income distribution in a variety of contexts. The first study to extend this analysis to poverty impacts was de Janvry et al. (1991), with an RH application to Ecuador using Foster-Greer-Thorbecke (FGT, 1984) indices.

9.3.2. Trade liberalization

One of the first applications of the FI approach to an actual country examined the distributive impacts of trade liberalization in Nepal (Cockburn, 2006). Cockburn finds that urban poverty falls and rural poverty increases, as initial tariffs were highest for agricultural imports. Impacts increase with income level, resulting in rising income inequality.
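The Foster-Greer-Thorbecke (FGT, 1984) indices cited above form a single family: the headcount ratio (alpha = 0), the poverty gap index (alpha = 1) and the squared-gap severity index (alpha = 2). A minimal implementation, with invented incomes and poverty line for illustration:

```python
# FGT poverty indices: weighted mean of ((z - y) / z)^alpha over the poor.
import numpy as np

def fgt(income, z, alpha, weights=None):
    """FGT(alpha) index for incomes `income` and poverty line `z`."""
    y = np.asarray(income, dtype=float)
    w = np.ones_like(y) if weights is None else np.asarray(weights, float)
    poor = y < z
    gap = np.where(poor, (z - y) / z, 0.0)       # normalised poverty gap
    # the poor indicator avoids 0**0 = 1 for the non-poor when alpha = 0
    return np.average(np.where(poor, gap ** alpha, 0.0), weights=w)

y = [40.0, 80.0, 120.0, 200.0]                   # illustrative incomes
print(fgt(y, z=100.0, alpha=0))                  # headcount ratio: 0.5
print(fgt(y, z=100.0, alpha=1))                  # poverty gap index: 0.2
```

Higher values of alpha weight the poorest more heavily, which is why CGE microsimulation studies typically report the incidence, gap and severity of poverty together.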
19. See Boeters and Savard (2013) for a discussion of the challenges of introducing dynamics with endogenous labour supply in a microsimulation model linked to a CGE model.
Table 9.1. Applications of CGE microsimulation techniques

Authors | Title | Approach

Agriculture
  Chitiga and Mabugu (2008) | Evaluating the impact of land redistribution: A CGE microsimulation application to Zimbabwe | TD-MA
  Boccanfuso and Savard (2008) | Groundnut sector liberalization in Senegal: A multi-household CGE analysis | FI
  Boccanfuso and Savard (2007) | Impacts analysis of cotton subsidies on poverty: A CGE macro-accounting approach | TD-MA
  Arndt, Benfica, Tarp, Thurlow, and Uaiene (2010) | Biofuels, poverty, and growth: A computable general equilibrium analysis of Mozambique | TD-MA

Fiscal policy
  Llambí, Laens, Perera, and Ferrando (2010) | Assessing the impact of the global financial and economic crisis in developing countries: The case of Uruguay | TD-MA
  Cury et al. (2010) | The impacts of income transfer programs on income distribution and poverty in Brazil: An integrated microsimulation and computable general equilibrium analysis | IA (BU-TD)
  de Souza Ferreira Filho, dos Santos, and do Prado Lima (2010) | Tax reform, income distribution and poverty in Brazil: An applied general equilibrium analysis | IA (TD/BU)

Trade liberalization
  Chitiga and Mabugu (2005) | The impact of tariff reduction on poverty in Zimbabwe: A CGE top-down approach | TD-MA
  Hérault (2007) | Trade liberalization, poverty and inequality in South Africa: A computable general equilibrium-microsimulation analysis | TD-MA
  Vos and De Jong (2003) | Trade liberalization and poverty in Ecuador: A CGE macro-microsimulation analysis | TD-MA
  Chitiga et al. (2007) | Trade policy and poverty: Results from a CGE micro-simulation analysis | FI
  Cororaton and Cockburn (2007) | Trade reform and poverty: Lessons from the Philippines: A CGE microsimulation analysis | FI
  Cockburn, Corong, Decaluwé, Fofana, and Robichaud (2010) | Impacts of trade liberalization in Senegal | FI

Environment
  Boccanfuso, Savard, and Estache (2013) | The distributional impact of developed countries' climate change policies on Senegal: A macro-micro CGE application | FI
  Buddelmeyer, Hérault, Kalb, and van Zijll de Jong (2012) | Linking a microsimulation model to a dynamic CGE model: Climate change mitigation policies and income distribution in Australia | TD-MA
  Araar, Dissou, and Duclos (2011) | Household incidence of pollution control policies: A robust welfare analysis using general equilibrium effects | TD-MA
  Vandyck (2013) | Efficiency and equity aspects of energy taxation | TD-MA

Labour market
  Boeters and Feil (2009) | Heterogeneous labour markets in a microsimulation AGE model: Application to welfare reform in Germany | IA (TD/BU)
  Cury et al. (2010) | The impacts of income transfer programs on income distribution and poverty in Brazil: An integrated microsimulation and computable general equilibrium analysis | IA (BU-TD)
  Peichl (2009) | The benefits and problems of linking micro and macro models: Evidence from a flat tax analysis | BU
  Boeters, Feil, and Gürtzgen (2005) | Discrete working time choice in an applied general equilibrium model | IA (TD/BU)
Chitiga, Mabugu, and Kandiero (2007) and Cororaton and Cockburn (2007) used the same approach to analyse trade liberalization in South Africa and the Philippines, respectively. Chitiga et al. (2007) find that the complete removal of tariffs reduces poverty while inequality hardly changes, though results differ between rural and urban areas. As for Cororaton and Cockburn (2007), their results indicate that the tariff cuts implemented between 1994 and 2000 were generally poverty-reducing but increased inequality. We can also cite applications of the TD-MA approach to the analysis of trade reforms. In South Africa and Zimbabwe, Hérault (2007) and Chitiga and Mabugu (2005), respectively, conclude that trade liberalization reduces poverty. Chitiga and Mabugu also find that it increases inequality. In another early application of the TD-MA approach to trade liberalization, Vos and De Jong (2003) find mild aggregate welfare gains, but rising income inequality and virtually no poverty-reducing effect in Ecuador.

9.3.3. Poverty-reduction policies

Rising inequality and impatience with the promised trickle-down effects of growth have driven increased policy concern with the poverty impacts of macro policies and shocks since the early 1990s. Policy manifestations included the adoption of poverty reduction as the first Millennium Development Goal and the adoption by the Bretton Woods institutions of
conditional debt relief (the HIPC programme) linked to country poverty reduction strategy programmes (PRSPs). These major changes in the development policy debate pushed researchers to find more appropriate tools to link macro- and micro-economic policy reforms to changes in income distribution. Chia et al. (1994) analyse poverty targeting programmes using a RH model for Côte d'Ivoire. Cury, Coelho, and Pedrozo (2010) adopt a BU model to analyse the distributive impacts of cash transfer programmes in Brazil. Tiberti et al. (2013) use an IA model to study child cash grants in South Africa. They conclude that the poverty and distributive effects do not differ substantially from a partial equilibrium analysis, as the cash grants outweigh all general equilibrium effects in determining the income of the poor.
9.3.4. Fiscal reform

The BU approach is used by de Souza Ferreira Filho, dos Santos, and do Prado Lima (2010) to study cuts in food taxes and a reduction in taxes on agricultural inputs in Brazil. They show that the former is more poverty-reducing, whereas the latter is more inequality-reducing. Using a TD-MA approach, Llambí et al. (2010) conclude that the 2007 tax reform in Uruguay reduces the incidence, gap and severity of poverty, as well as inequality. With the same approach, Cury, Coelho, and Pedrozo (2011) study the PIS-COFINS tax reform in Brazil and find a substantial deterioration in poverty indicators following the implementation of this reform.
9.3.5. Agricultural policies

Boccanfuso and Savard (2007) study the impacts of cotton subsidies in Mali with a TD-MA approach and find that removing cotton subsidies produces a reduction in poverty and contributes to easing inequality in Mali. Arndt et al. (2010) analyse the development of biofuels in Mozambique with the same approach. Overall, they find that the biofuel investment trajectory analysed increases economic growth by 0.6 percentage points and reduces the incidence of poverty by about 6 percentage points over a 12-year phase-in period. Boccanfuso and Savard (2008) evaluate the distributional impact of liberalizing the groundnut sector in Senegal with the FI approach. They find that reducing the special import tax on edible oils reduces poverty and that the reduction in world groundnut prices has relatively strong negative effects on poor households if farmers are not protected via a fixed price scheme.
9.3.6. Labour market policies

The BU approach is most commonly adopted for the analysis of labour market policies. Three examples applied to Germany include Boeters et al. (2005) (labour market stimulation through social assistance cuts); Fuest, Peichl, and Schaefer (2008) (a flat tax rate programme); and Peichl (2009) (a wage tax). Boeters et al. report a broad range of macro and labour supply effects for different subsets of households. Fuest et al. find that a low flat rate tax with a low basic allowance yields positive static welfare effects but increases income inequality. Peichl also analyses a flat tax system and finds weak efficiency gains with increasing inequality. Boeters and Feil (2009) use an IA (TD/BU) approach to simulate a reform of the German transfer system that stimulates labour supply at the lower end of the wage distribution. This programme produces an increase in GDP but also tax revenue losses because of the shift in the functional income distribution.

9.3.7. Environmental policies

CGE microsimulation approaches have gained popularity for analysing the distributional impact of environmental and climate change (CC) policies. Among these, we find applications of the TD-MA approach by Araar et al. (2011), Buddelmeyer et al. (2012) and Vandyck (2013). Araar et al. analyse three pollution control policies in Canada and conclude that they have a positive impact on poverty, with a small increase in inequality. Buddelmeyer et al. (2012) use the same approach to assess the effects of climate change mitigation policies in Australia and find that these policies are likely to have positive distributional effects despite a slightly negative effect on average real income. Finally, Vandyck (2013) analyses the distributional effects of increased oil excise taxes in Belgium. His results suggest that the distributional effects of the environmental tax reform depend strongly on changes in factor prices and welfare payments. Boccanfuso et al. (2013) adopt the FI approach and find that international CC policies produce a slight increase in poverty in Senegal, jointly with a loss of land productivity.

9.4. Summary and future directions

9.4.1. Behavioural content

A first area where there is considerable scope to extend and improve CGE microsimulations is in enriching the behavioural content of the microsimulation component. As discussed earlier, most of the TD-WB and IA studies reviewed in this chapter focus their efforts on capturing individual/household labour supply and, less often, consumption
behaviour. However, as attention is increasingly directed to inclusive growth policies and the medium-to-longer term effects of other macro shocks and policies, dynamic behaviour, notably linked to human capital accumulation, marriage and demography, becomes increasingly important. As the other chapters of this book illustrate, there have been huge strides in microsimulation techniques, many of which could be gainfully integrated into a CGE framework. One example is the modelling of agricultural household behaviour, as outlined in Chapter 17 (Farms: Richardson, Hennessy, O'Donoghue). Kimhi (1996) proposes an on- and off-farm labour choice model with a regime switching mechanism that could be integrated into a BU or IA (TD/BU) approach. Newman and Gertler (1994) propose a model to estimate family labour supply and consumption decisions for developing countries, again with regime switching behaviour, that could also integrate well into an IA (TD/BU) application.

9.4.2. Social dimensions of well-being

Linked to this enrichment of the behavioural content of CGE microsimulations is the need to broaden the dimensions of well-being analysed beyond monetary poverty and inequality measures. Indeed, human capital accumulation and demographic effects, along with health and nutritional effects, are of interest in themselves. One important attempt to widen the scope of CGE microsimulation analyses is the Maquette for MDG Simulations (MAMS),20 developed to explore the fiscal and financing requirements of attaining the MDGs and to provide guidance on related decisions concerning public spending allocation, fiscal and aid policies. A sequential dynamic TD-WB approach is adopted. Education, health and environmental indicators and their interactions are captured in the CGE component of MAMS, whereas the microsimulation component is relatively standard and focuses on monetary poverty impacts.
This exposes the approach to the same critiques as the RH approach, in that the individual heterogeneity and transmission channels linked to these variables are not captured.

20. For more details, see Löfgren, Cicowiez, and Diaz-Bonilla (2013).

9.4.3. Education

Robichaud et al. (2013) adapt and extend the MAMS education module in a sequential dynamic TD-WB approach to study the growth and distributive impacts of different scenarios of public spending on primary and secondary education in Uganda. Like MAMS, the CGE module models school entry, dropout/repetition/promotion for each grade, and graduation for each cycle (primary and secondary/tertiary), corresponding to the categories of workers based on their skills. Unlike MAMS, the authors also include microsimulations of all the above-mentioned education behaviours, based on econometric (binary) models with error terms calibrated to reflect initial observed choices.21 Individuals are followed year by year from the beginning of schooling through to their entry into the labour market. Individual schooling statuses are updated to match CGE results using individual probabilities and a queuing approach. Drop-outs and graduates are fed into the supply of skilled/unskilled labour. In this way, the heterogeneous long-term effects of education spending reforms on individual human capital accumulation and, consequently, labour productivity are captured. Of course, all this can also have important effects on monetary poverty and inequality which, in turn, affect education in the following year.22 A drawback of this and the MAMS models is that education behaviour lacks an explicit optimization framework. Instead, individuals decide based on ad hoc demand functions with estimated elasticities of demand. Cloutier, Cockburn, and Decaluwé (2008) develop a static RH model where individuals maximize welfare by equalizing the total (direct and opportunity) costs of education with the expected benefits from the skilled wage premium. The inclusion of a microsimulation component and a dynamic modelling structure would allow a much more realistic analysis of the micro impacts on the distribution of human capital and the resulting monetary poverty/inequality effects. In a more comprehensive approach, in addition to the effects on the labour market, human capital accumulation could affect, for example, fertility, child/adult nutrition and mortality, which could, in turn, feed back into education decisions.

21. For the estimation of error terms in binary models, see Gourieroux, Monfort, Renault, and Trognon (1987).
22. The Global Income Distribution Dynamics (GIDD) framework proposes a wider framework than Robichaud et al. (2013) but uses a simpler, non-parametric (i.e. non-behavioural) approach to introduce changes in the skills composition and the demographic structure (Bourguignon & Bussolo, 2013). This is done by reweighting the original sampling weights in such a way that the projected aggregate population and education figures are replicated.

9.4.4. Health

As for education, an interesting extension to CGE microsimulation techniques would be to allow individual health status to have long-term effects on human capital (see Chapter 14: Health, Schofield, Carter, Edwards). MAMS includes some links between the health and education modules: children's health status (proxied by the under-five mortality rate) affects educational behaviour and performance, but the lack of a microsimulation component means that it fails to capture important individual effects. Brown et al.'s (2007) evaluation of a diabetes prevention campaign in Australia represents an interesting example of how individual health behaviour can be modelled in the microsimulations and then passed on to the CGE model to estimate general equilibrium effects. The authors estimate the effect of diabetes on individual labour supply and the likely impact of a prevention campaign on aggregate labour supply. The change in aggregate labour supply is then fed into the CGE model as an exogenous shock and used to estimate different economy-wide effects of the campaign through the change in labour supply.23
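The bottom-up step in an exercise of this kind can be sketched as follows. Everything here is invented for illustration (prevalence, hours, campaign effectiveness); the only point is how heterogeneous micro-level health effects are aggregated into a single labour supply shock for the CGE model, in the spirit of Brown et al. (2007).

```python
# Hypothetical bottom-up sketch: micro-level health effects on labour
# supply are aggregated into one exogenous shock for a CGE model.
# All parameters are illustrative, not estimates from Brown et al. (2007).
import numpy as np

rng = np.random.default_rng(3)
n = 5000
weight = np.full(n, 100.0)                       # survey weights
diabetic = rng.random(n) < 0.08                  # assumed 8% prevalence
hours = np.where(diabetic, 28.0, 38.0) + rng.normal(0.0, 4.0, n)

# assume the campaign prevents diabetes for 25% of would-be diabetics,
# restoring an assumed 10 weekly hours of labour supply for each of them
prevented = diabetic & (rng.random(n) < 0.25)
hours_new = hours + np.where(prevented, 10.0, 0.0)

# aggregate labour supply shock (proportional change) fed to the CGE model
shock = (np.average(hours_new, weights=weight) /
         np.average(hours, weights=weight) - 1.0)
print(f"aggregate labour supply shock: {shock:+.2%}")
```

The scalar (or per-worker-type vector) shock is then imposed on the CGE model's labour endowment, exactly as in the standard BU approach described in Section 9.2.5.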
9.4.5. Demographics

Robichaud et al. (2013) adopted a simple static ageing approach in which sample weights are calibrated so that the total population in the micro-data corresponds to population projections in each year.24 If fertility and mortality modules are introduced, a so-called dynamic ageing procedure can be applied. The next step should be the development of a CGE microsimulation model of all key individual life events (fertility, marriage, separation, migration, mortality) along the lines of Chapter 11 (Demographics: Paul Mason). However, in all cases, as stated by Bourguignon and Bussolo (2013, p. 1432), '[e]mpirical models cannot do better than theory, and a full dynamic theory that would permit us to include a full representation of lifetime individual behaviour and its heterogeneity in the population within a dynamic and stochastic general equilibrium framework is simply not available at this stage'.
9.4.6. Rehabilitating the FI approach Despite its theoretical superiority and the automatic accounting for feedback effects, the FI approach has been rarely used. The key reason for this is the rigidity of behavioural specifications currently possible within a CGE framework, notably with regards to discrete choices. With recent improvements in software used for CGE modelling (GAMS, GEMPACK, etc.) and the development of new models, possibly outside the CGE framework, with this software, it would certainly be worthwhile to
23. As noted in Bourguignon and Bussolo (2013), the approach is restrictive as the authors do not take into account the feedback effects of the macro model on micro-economic behaviour.
24. To do that, one can use the algorithm developed by Deville and Särndal (1992).
systematically explore the extent to which it is possible to repatriate some of the behavioural specifications used in the sequential behavioural approaches into the FI framework.

Acknowledgments
This work was carried out with financial and scientific support from the Partnership for Economic Policy (PEP), with funding from the Department for International Development (DFID) of the United Kingdom (or UK Aid) and the Government of Canada through the International Development Research Centre (IDRC).

References

Aaberge, R., Colombino, U., Holmøy, E., Strøm, B., & Wennemo, T. (2007). Population aging and fiscal sustainability: Integrating detailed labour supply models with CGE models. In A. Harding & A. Gupta (Eds.), Modelling our future: Population ageing, social security and taxation (Vol. 15, pp. 259–290). Amsterdam: Elsevier.
Abdelkhalek, T., Boccanfuso, D., & Savard, L. (2009). Politiques économiques, pauvreté et inégalités au Maroc: Analyse en équilibre général micro simulé. Mondes en Développement, 37(4), 99–118.
Adelman, I., & Robinson, S. (1978). Income distribution policy in developing countries: A case study of Korea (p. 346). Stanford, CA: Stanford University Press.
Adelman, I., & Robinson, S. (1979). Income distribution policy: A computable general equilibrium model of South Korea. In I. Adelman (Ed.), The selected essays of Irma Adelman (Vol. 1, Dynamics and income distribution, pp. 256–289). Aldershot, UK: Economists of the Twentieth Century Series.
Ahmed, V., & O'Donoghue, C. (2010). Global economic crisis and poverty in Pakistan. International Journal of Microsimulation, 3(1), 127–129.
Aka, F. B. (2004). Poverty, inequality and welfare effects of trade liberalisation: A CGE model for Côte d'Ivoire. Research Paper No. RP-160. AERC, Nairobi, October 2006.
Anderson, K., Cockburn, J., & Martin, W. (Eds.). (2010). Agricultural price distortions, inequality, and poverty. Washington, DC: World Bank Publications.
Annabi, N., Cissé, F., Cockburn, J., & Decaluwé, B. (2008). Trade liberalisation, growth and poverty in Senegal: A dynamic microsimulation CGE model analysis. Économie et Prévision, 186(5), 117–131.
Araar, A., Dissou, Y., & Duclos, J.-Y. (2011). Household incidence of pollution control policies: A robust welfare analysis using general equilibrium effects. Journal of Environmental Economics and Management, 61(2), 227–243.
Arndt, C., Benfica, R., Tarp, F., Thurlow, J., & Uaiene, R. (2010). Biofuels, poverty, and growth: A computable general equilibrium analysis of Mozambique. Environment and Development Economics, 15(1), 81–105.
Arntz, M., Boeters, S., Gürtzgen, N., & Schubert, S. (2008). Analysing welfare reform in a microsimulation-AGE model: The value of disaggregation. Economic Modelling, 25(3), 422–439.
Azis, I., Azis, E., & Thorbecke, E. (2001). Modelling the socio-economic impact of the financial crisis: The case of Indonesia. Final report prepared for IFPRI and the World Bank. Washington, DC.
Boccanfuso, D., Estache, A., & Savard, L. (2009). Impact analysis of electricity reforms in Senegal: A macro-micro analysis. Journal of Development Studies, 45(3), 351–375.
Boccanfuso, D., & Savard, L. (2007). Impacts analysis of cotton subsidies on poverty: A CGE macro-accounting approach. Journal of African Economies, 16(4), 629–659.
Boccanfuso, D., & Savard, L. (2008). Groundnut sector liberalization in Senegal: A multi-household CGE analysis. Oxford Development Studies, 36(2), 159–186.
Boccanfuso, D., Savard, L., & Estache, A. (2013). The distributional impact of developed countries' climate change policies on Senegal: A macro-micro CGE application. Sustainability, 5(6), 2727–2750.
Boeters, S., & Feil, M. (2009). Heterogeneous labour markets in a microsimulation-AGE model: Application to welfare reform in Germany. Computational Economics, 33(4), 305–335.
Boeters, S., Feil, M., & Gürtzgen, N. (2005). Discrete working time choice in an applied general equilibrium model. Computational Economics, 26(3–4), 1–29.
Boeters, S., & Savard, L. (2013). The labour market in computable general equilibrium models. In P. B. Dixon & D. W. Jorgenson (Eds.), Handbook of computable general equilibrium modelling (pp. 1645–1718). North Holland: Elsevier B.V.
Bourguignon, F., & Bussolo, M. (2013).
Income distribution in computable general equilibrium modelling. In P. B. Dixon & D. W. Jorgenson (Eds.), Handbook of computable general equilibrium modelling (pp. 1383–1437). North Holland: Elsevier B.V.
Bourguignon, F., Bussolo, M., & Pereira Da Silva, L. A. (2008). Introduction: Evaluating the impact of macroeconomic policies on poverty and income distribution. In F. Bourguignon, M. Bussolo, & L. A. Pereira Da Silva (Eds.), The impact of macroeconomic policies on poverty and income distribution: Macro-micro evaluation techniques and tools. Houndmills, England: Palgrave-Macmillan.
Macro-Micro Models
Bourguignon, F., de Melo, J., & Suwa, A. (1991). Modelling the effects of adjustment programs on income distribution. World Development, 19(11), 1527–1544.
Bourguignon, F., Michel, G., & Miqueu, D. (1983). Short-run rigidities and long-run adjustments in a computable general equilibrium model of income distribution and development. Journal of Development Economics, 13(1–2), 21–43.
Bourguignon, F., Robilliard, A. S., & Robinson, S. (2005). Representative versus real households in the macroeconomic modelling of inequality. In T. J. Kehoe, T. N. Srinivasan, & J. Whalley (Eds.), Frontiers in applied general equilibrium modelling. Cambridge: Cambridge University Press.
Bourguignon, F., & Savard, L. (2008). A CGE integrated multi-household model with segmented labour markets and unemployment. In F. Bourguignon, L. A. Pereira Da Silva, & M. Bussolo (Eds.), The impact of macroeconomic policies on poverty and income distribution: Macro-micro evaluation techniques and tools. Houndmills, England: Palgrave-Macmillan.
Bourguignon, F., & Spadaro, A. (2006). Microsimulation as a tool for evaluating redistribution policies. Journal of Economic Inequality, 4(1), 77–106.
Brown, L., Harris, A., Picton, M., Thurecht, L., Yap, M., Harding, A., … Richardson, J. (2007). Linking microsimulation and macroeconomic models to estimate the economic impact of chronic disease prevention. In A. Zaidi, A. Harding, & P. Williamson (Eds.), New frontiers in microsimulation modelling. Vienna: Ashgate.
Buddelmeyer, H., Hérault, N., Kalb, G., & van Zijll de Jong, M. (2012). Linking a microsimulation model to a dynamic CGE model: Climate change mitigation policies and income distribution in Australia. International Journal of Microsimulation, 5(2), 40–58.
Chen, S., & Ravallion, M. (2004). Welfare impacts of China's accession to the World Trade Organization. The World Bank Economic Review, 18(1), 29–57.
Chia, N.-C., Wahba, S., & Whalley, J. (1994).
Poverty-reduction targeting programmes: A general equilibrium approach. Journal of African Economies, 3(2), 309–338.
Chitiga, M., & Mabugu, R. (2005). The impact of tariff reduction on poverty in Zimbabwe: A CGE top-down approach. South African Journal of Economics and Management Sciences, 8(1), 102–116.
Chitiga, M., & Mabugu, R. (2008). Evaluating the impact of land redistribution: A CGE microsimulation application to Zimbabwe. Journal of African Economies, 17(4), 527–549.
John Cockburn, Luc Savard and Luca Tiberti
Chitiga, M., Mabugu, R., & Kandiero, T. (2007). Trade policy and poverty: Results from a CGE micro-simulation analysis. Journal of Development Studies, 43(6), 1105–1125.
Clauss, M., & Schubert, S. (2009). The ZEW combined microsimulation-CGE model: Innovative tool for applied policy analysis. ZEW Discussion Paper No. 09-062. ZEW – Zentrum für Europäische Wirtschaftsforschung [Center for European Economic Research].
Cloutier, M. H., Cockburn, J., & Decaluwé, B. (2008). Education and poverty in Vietnam: A computable general equilibrium analysis. CIRPÉE Working Paper No. 08-04. Université Laval, Québec, Canada.
Cockburn, J. (2006). Trade liberalisation and poverty in Nepal: A computable general equilibrium micro simulation analysis. In M. Bussolo & J. Round (Eds.), Globalization and poverty: Channels and policies. London: Routledge.
Cockburn, J., Corong, E., Decaluwé, B., Fofana, I., & Robichaud, V. (2010). The growth and poverty impacts of trade liberalization in Senegal. International Journal of Microsimulation, 3(1), 109–113.
Cockburn, J., Fofana, I., & Tiberti, L. (2012). Simulating the impact of the global economic crisis and policy responses on children in West and Central Africa. In C. Harper, N. Jones, R. U. Mendoza, D. Stewart, & E. Strand (Eds.), Children in crisis: Seeking child-sensitive policy responses. New York, NY: Palgrave Macmillan.
Cogneau, D., & Robilliard, A. S. (2007). Growth, distribution and poverty in Madagascar: Learning from a microsimulation model in a general equilibrium framework. In A. Spadaro (Ed.), Microsimulation as a tool for the evaluation of public policies: Methods and applications (pp. 73–111). Madrid: Fundación BBVA.
Cogneau, D., & Robilliard, A.-S. (2008). Simulating targeted policies with macro impacts: Poverty alleviation policies in Madagascar. In F. Bourguignon, L. A. Pereira da Silva, & M. Bussolo (Eds.), The impact of macroeconomic policies on poverty and income distribution: Macro-micro evaluation techniques and tools.
Washington, DC: World Bank.
Colatei, D., & Round, J. I. (2000). Poverty and policy: Experiments with a SAM-based CGE model for Ghana. Paper presented at the XIII International Conference on Input-Output Techniques, 21–25 August, Macerata, Italy.
Colombo, G. (2010). Linking CGE and microsimulation models: A comparison of different approaches. International Journal of Microsimulation, 3(1), 72–91.
Cornia, G. A., Jolly, R., & Stewart, F. (1989). Adjustment with a human face: Protecting the vulnerable and promoting growth: A study by UNICEF. Oxford: Oxford University Press.
Cororaton, C., & Cockburn, J. (2007). Trade reform and poverty – Lessons from the Philippines: A CGE microsimulation analysis. Journal of Policy Modelling, 20(1), 141–163.
Cury, S., Coelho, A. M., & Pedrozo, E. (2010). The impacts of income transfer programs on income distribution and poverty in Brazil: An integrated microsimulation and computable general equilibrium analysis. MPIA Working Paper No. 2010-20. Partnership for Economic Policy, Dakar, Senegal.
Cury, S., Coelho, A. M., & Pedrozo, E. (2011). Economic analysis of PIS-COFINS tax reform: Integrating a microsimulation model with a computable general equilibrium model. Paper presented at the Segundo Encuentro Regional sobre Modelos de Equilibrio General Computable, San José, Costa Rica, 24–25 November 2008. Retrieved from http://www.cepal.org/comercio/noticias/paginas/4/34614/Economic_Analysis_of_PIS-CONFINS_Tax_Reform.pdf
Davies, J. B. (2009). Combining microsimulation with CGE and macro modelling for distributional analysis in developing and transition countries. International Journal of Microsimulation, 2(1), 49–65.
Deaton, A., & Muellbauer, J. (1980). Economics and consumer behaviour (p. 450). Cambridge: Cambridge University Press.
Debowicz, D., & Golan, J. (2014). The impact of Oportunidades on human capital and income distribution: A top-down/bottom-up approach. Journal of Policy Modelling, 36(1), 24–42.
Decaluwé, B., Dumont, J.-C., & Savard, L. (1999). How to measure poverty and inequality in general equilibrium framework. Cahier de recherche, CREFA Working Paper No. 99-20. Université Laval, Québec.
Decaluwé, B., Patry, A., & Savard, L. (1998). Income distribution, poverty measures and trade shocks: A computable general equilibrium model of an archetype developing country. Working Paper No. 98-14. CREFA, Université Laval.
Decaluwé, B., Patry, A., Savard, L., & Thorbecke, E. (1999). Poverty analysis within a general equilibrium framework. Working Paper No. 99-09.
African Economic Research Consortium, Nairobi.
de Janvry, A., Sadoulet, E., & Fargeix, A. (1991). Adjustment and equity in Ecuador. Paris: Centre de Développement de l'OCDE.
Dervis, K., de Melo, J., & Robinson, S. (1982). General equilibrium models for development policy (p. 526). London: Cambridge University Press.
de Souza Ferreira Filho, J. B., dos Santos, C. V., & do Prado, S. M. (2010). Tax reform, income distribution and poverty in Brazil: An applied general equilibrium analysis. International Journal of Microsimulation, 3(1), 114–117.
de Souza Ferreira Filho, J. B., & Horridge, M. (2004). Economic integration, poverty and regional inequality in Brazil. COPS/IMPACT Working Paper No. G-149. Monash University.
Deville, J. C., & Särndal, C. E. (1992). Calibration estimators in survey sampling. Journal of the American Statistical Association, 87(418), 376–382.
Dixon, P. B., & Rimmer, M. T. (2013). Validation in computable general equilibrium modelling. In P. Dixon & D. Jorgenson (Eds.), Handbook of computable general equilibrium modelling (Vol. 1, pp. 1271–1330). North-Holland, Oxford, UK: Elsevier.
Feltenstein, A., Lopes, L., Porras-Mendoza, J., & Wallace, S. (2013). The impact of micro-simulation and CGE modelling on tax reform and tax advice in developing countries: A survey of alternative approaches and an application to Pakistan. International Center for Public Policy Working Paper No. 13-09. Georgia State University, Andrew Young School of Policy Studies, Atlanta.
Ferreira, F. H. G., Leite, P., Pereira da Silva, L. A., & Picchetti, P. (2008). Can the distributional impacts of macroeconomic shocks be predicted? A comparison of top-down macro-micro models with historical data for Brazil. In F. Bourguignon, L. A. Pereira Da Silva, & M. Bussolo (Eds.), The impact of macroeconomic policies on poverty and income distribution: Macro-micro evaluation techniques and tools. Houndmills, England: Palgrave-Macmillan.
Foster, J., Greer, J., & Thorbecke, E. (1984). A class of decomposable poverty measures. Econometrica, 52(3), 761–766.
Fuest, C., Peichl, A., & Schaefer, T. (2008). Is a flat tax reform feasible in a grown-up democracy of Western Europe? A simulation study for Germany. International Tax and Public Finance, 15(5), 620–636.
Gorman, W. (1953). Community preference fields. Econometrica, 21(1), 63–80.
Gortz, M., Harrison, G., Neilsen, C., & Rutherford, T. (2000). Welfare gains of extending opening hours in Denmark. Economic Working Paper No. B-00-03.
University of South Carolina, Darla Moore School of Business, Columbia, SC.
Gourieroux, C., Monfort, A., Renault, E., & Trognon, A. (1987). Generalised residuals. Journal of Econometrics, 34(1–2), 5–32.
Heckman, J., Lochner, L., & Taber, C. (1998). Explaining rising wage inequality: Explorations with a dynamic general equilibrium model of labour earnings with heterogeneous agents. Review of Economic Dynamics, 1, 1–58.
Hérault, N. (2007). Trade liberalization, poverty and inequality in South Africa: A computable general equilibrium-microsimulation analysis. Economic Record, 83(262), 317–328.
Hérault, N. (2010). Sequential linking of computable general equilibrium and microsimulation models: A comparison of behavioural and
reweighting techniques. International Journal of Microsimulation, 3(1), 35–42.
Hertel, T., & Reimer, J. (2005). Predicting the poverty impacts of trade reform. Journal of International Trade & Economic Development, 14(4), 377–405.
Huppi, M., & Ravallion, M. (1991). The sectoral structure of poverty during an adjustment period: Evidence for Indonesia in the mid-1980s. World Development, 19(12), 1653–1678.
Kimhi, A. (1996). Off-farm work participation of Israeli farm couples: The importance of farm work participation status. Canadian Journal of Agricultural Economics, 44(4), 481–490.
King, D., & Handa, S. (2003). The welfare effects of balance of payments reforms: A macro-micro simulation of the cost of rent-seeking. The Journal of Development Studies, 39(3), 101–128.
Lay, J. (2010). Sequential macro-micro modelling with behavioural microsimulations. International Journal of Microsimulation, 3(1), 24–34.
Lemelin, A., Fofana, I., & Cockburn, J. (2013). Balancing a social accounting matrix: Theory and application. Mimeo, Partnership for Economic Policy. Retrieved from www.pep-net.org/programs/mpia-development-policy-modelling/pep-standard-cge-models/sambalgpcema/
Llambí, C., Laens, S., Perera, M., & Ferrando, M. (2010). Assessing the impact of the global financial and economic crisis in developing countries: The case of Uruguay. MPIA Working Paper No. 2011-16. Partnership for Economic Policy (PEP).
Löfgren, H., Cicowiez, M., & Diaz-Bonilla, C. (2013). MAMS – A computable general equilibrium model for developing country strategy analysis. In P. B. Dixon & D. W. Jorgenson (Eds.), Handbook of computable general equilibrium modelling (pp. 159–276). North Holland: Elsevier B.V.
McGill, E. (2004). Poverty and social analysis of trade agreements: A more coherent approach. Boston College International and Comparative Law Review, 27(2), 371–427.
Morrisson, C. (1991). Adjustment, incomes and poverty in Morocco. World Development, 19(11), 1633–1651.
Muller, T. (2004).
Evaluating the economic effects of income security reforms in Switzerland: An integrated microsimulation – computable general equilibrium approach. Mimeo, University of Geneva.
Mussard, S., & Savard, L. (2010). Macro/micro modelling and Gini multi-decomposition: An application to the Philippines. Journal of Income Distribution, 19(2), 51–78.
Mussard, S., & Savard, L. (2012). The Gini multi-decomposition and the role of Gini's transvariation: Application to partial trade liberalization in the Philippines. Applied Economics, 44(10), 1235–1249.
Newman, J. L., & Gertler, P. J. (1994). Farm productivity, labour supply, and welfare in a low income country. The Journal of Human Resources, 29(4), 989–1026.
Ortega Diaz, A. (2011). Microsimulations for poverty and inequality in Mexico using parameters from a CGE model. Social Science Computer Review, 29(1), 37–51.
Peichl, A. (2009). The benefits and problems of linking micro and macro models: Evidence from a flat tax analysis. Journal of Applied Economics, 12(2), 301–329.
Preston, M. H. (1959). A view of the aggregation problem. The Review of Economic Studies, 27(1), 58–64.
Robichaud, V., Tiberti, L., & Maisonnave, H. (2013). Impact of increased public education spending on growth and poverty in Uganda: A combined micro-macro approach. PEP-MPIA Working Paper No. 2014-01. Partnership for Economic Policy.
Rutherford, T., & Tarr, D. (2008). Poverty effects of Russia's WTO accession: Modelling 'real' household and endogenous productivity effects. Journal of International Economics, 75(1), 131–150.
Savard, L. (2003). Poverty and income distribution in a CGE-household micro-simulation model: Top-down/bottom-up approach. CIRPÉE Working Paper No. 03-43. Université Laval.
Savard, L. (2005). Poverty and inequality analysis within a CGE framework: A comparative analysis of the representative agent and microsimulation approaches. Development Policy Review, 23(3), 313–332.
Savard, L. (2010). Scaling up infrastructure spending in the Philippines: A top-down bottom-up micro-simulation approach. International Journal of Microsimulation, 3(1), 43–59.
Taylor, L., & Lysy, F. (1979). Vanishing income redistributions: Keynesian clues about model surprises in the short-run. Journal of Development Economics, 6(1), 11–29.
Thorbecke, E. (1991). Adjustment, growth and income distribution and equity in Indonesia. World Development, 19(11), 1595–1614.
Tiberti, L., Maisonnave, H., Chitiga, M., Mabugu, R., Robichaud, V., & Ngandu, S. (2013).
The economy-wide impact of the South African child support grant: A micro-simulation-computable general equilibrium analysis. Cahiers de recherche CIRPÉE No. 13-03. Université Laval, Québec, Canada.
Vandyck, T. (2013). Efficiency and equity aspects of energy taxation. EUROMOD Working Paper No. EM 12/13. Institute for Social & Economic Research, University of Essex, Essex, UK.
Vos, R., & De Jong, N. (2003). Trade liberalization and poverty in Ecuador: A CGE macro-microsimulation analysis. Economic Systems Research, 15(2), 211–232.
Vos, R., & Sánchez, M. V. (2010). A non-parametric microsimulation approach to assess changes in inequality and poverty. DESA Working Paper No. 94. UNDP.
Vos, R., Sánchez, M. V., & Kaldewei, C. (2008). Latin America and the Caribbean's challenge to reach the MDGs: Financing options and trade-offs. DESA Working Paper No. 68. United Nations.
Wodon, Q., & Zaman, H. (2009). Rising food prices in sub-Saharan Africa: Poverty impact and policy responses. World Bank Research Observer, 25(1), 157–176.
Appendix: The Aggregation Problem

According to Deaton and Muellbauer (1980), the aggregation problem is the passage from the micro-economic behaviour of consumers (or workers) to the analysis of aggregate demand (labour supply). Or, as Preston (1959) states, the aggregation problem is tied to the link between micro and macro theory, and therefore to the differences that can occur between large models (microsimulation models) and smaller models (macro models) relying on aggregated variables and parameters. This is exactly the problem at hand when linking CGE models with microsimulation models. In response to this problem, Gorman (1953) demonstrated some decades ago that using or assuming the same marginal consumption and saving propensities was sufficient to solve it and obtain perfect linear aggregation. According to Deaton and Muellbauer (1980), this solution is extremely restrictive since it imposes linear and identical Engel curves on all households in a microsimulation model. Moreover, this assumption is incompatible with empirical analysis of household consumption behaviour.

The second problem is linked to household-specific labour supply. Deaton and Muellbauer (1980) present the conditions for aggregation of labour supply with the following cost function:

c(u; w, p) = wT + μ = Y

where u is the utility level, w the wage, p the price level of goods, T the time endowment for work, μ the non-work income or transfers from other agents, and Y the income of the worker. In this context, leisure is treated as a good with price w. Perfect linear aggregation is possible if the cost function has the following form:

c_h(u_h; w, p) = α_h(w, p) + u_h b(w, p)

Average leisure must then be a function of average income (Y), the wage (w) and prices (p). We can see that the problem is tied to the demand for goods.
Indeed, it is plausible that prices are the same for all consumers, yet w varies between households given specific characteristics, so that the function b(w, p) will be specific to each household. Therefore, the marginal consumption share for good i, ∂log b/∂log p_i, will be household specific, and perfect aggregation is impossible. To obtain perfect aggregation, the derivative of labour income with respect to non-labour income, μ, and the derivative of labour income with respect to the time endowment must be identical for all workers. According to Heckman, Lochner, and Taber (1998), worker-specific labour supply is one of the most important factors in explaining the differential distributional impact of policy reform.
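The restriction can be seen with a toy numerical example. Assuming two hypothetical households whose Engel curves are not identical and linear (all functions and numbers below are invented for illustration, not taken from any cited study), aggregate demand changes when income is redistributed at a fixed mean, so no representative consumer defined on mean income alone can reproduce the micro outcome:

```python
# Toy illustration of the aggregation problem: household A has a concave
# Engel curve, household B a linear one with a different slope, so
# Gorman's condition of identical linear Engel curves fails.

def demand_a(y):
    """Demand of household A as a function of its income y (concave)."""
    return 10 + 0.5 * y ** 0.8

def demand_b(y):
    """Demand of household B (linear, different slope)."""
    return 2 + 0.3 * y

def aggregate_demand(y_a, y_b):
    return demand_a(y_a) + demand_b(y_b)

# Two income distributions with the same average income of 100:
d_equal = aggregate_demand(100, 100)   # equal incomes
d_skewed = aggregate_demand(150, 50)   # same mean, unequal incomes

# Aggregate demand differs even though mean income is unchanged.
assert d_equal != d_skewed
```

Under Gorman's conditions (identical linear Engel curves) the two aggregates would coincide, which is exactly why that case permits a representative consumer.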
CHAPTER 10
Dynamic Models Jinjing Li, Cathal O’Donoghue and Gijs Dekkers
Handbook of Microsimulation Modelling
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293009
© 2014 BY EMERALD GROUP PUBLISHING LIMITED ALL RIGHTS RESERVED

10.1. Introduction

A dynamic microsimulation model is a model that simulates the behaviour of micro-units over time. Orcutt, Greenberger, Korbel, and Rivlin (1961) described the first dynamic microsimulation model, following the inspiration of Orcutt's (1957) article. Most dynamic microsimulation models developed in the following decades trace a direct or indirect link back to this model. Urban planners use microsimulation techniques to estimate traffic flows, while in economics and the social sciences microsimulation is often used to analyse socio-economic policies. In this chapter, we review how dynamic microsimulation in social science has developed, with a main focus on the economic models developed in the past decade.

Micro-level data, such as data obtained from a household survey, are often chosen as the basis for socio-economic research. In order to evaluate certain impacts of public policies, for example the redistributive impact over the course of a lifetime, it is necessary to utilise a long panel dataset. In general, such datasets are not available, either because the analysis relates to the future, as in the case of pension forecasts, or because collected datasets do not cover sufficiently long time periods; therefore, analysts use dynamic microsimulation models to assist in their analysis, a concept first suggested by Orcutt in 1957. Essentially, microsimulation is a tool to generate synthetic micro-unit based data, which can then be used to answer many 'what-if' questions that otherwise cannot be answered.

Microsimulation models in the field of policy modelling are usually categorised as 'static' or 'dynamic'. Static models, for example EUROMOD (Mantovani, Papadopoulos, Sutherland, & Tsakloglou, 2006), are often used to evaluate the immediate distributional impact of policy changes upon individuals and households, without reference to the time dimension or extensive behavioural adjustment. Some more recent static models, for example IZAΨMOD (Peichl, Schneider, & Siegloch, 2010), improve on the traditional model by incorporating certain behavioural responses, assuming the market adjusts to the new steady state overnight. Dynamic models, for example DESTINIE, PENSIM and SESIM (Bardaji, Sédillot, & Walraet, 2003; Curry, 1996; Flood, 2007), extend the static model by allowing individuals to change their characteristics due to endogenous factors within the model (O'Donoghue, 2001) and let individual units progress over time. Because of their integrated long-term projections and time-dependent behaviour simulations, dynamic microsimulation models can in theory offer further insights.

More than ten years ago, O'Donoghue (2001) surveyed the dynamic microsimulation models that had been developed up to that point. The 2000s, however, saw many of the barriers to model development that had existed until then overcome. Data collection projects such as the European Community Household Panel (ECHP) and the increased availability of longitudinal administrative data, such as the Lifetime Labour Market Database in the United Kingdom or the GSOEP in Germany, have to some degree eliminated data constraints. A number of new models were developed in the past decade, for instance PENSIM2 (Emmerson, Reed, & Shephard, 2004), the IFS model (Brewer et al., 2007) and SAGE (Zaidi & Rake, 2001) in the United Kingdom, APPSIM in Australia (Harding, 2007a), DESTINIE2 (Blanchet, Crenner, & Minez, 2009) in France, MIDAS (Dekkers et al., 2010) in Belgium, etc.
Meanwhile, a few generic microsimulation programmes have emerged, such as ModGen (Wolfson & Rowe, 1998), UMDBS (Sauerbier, 2002), Genesis (Edwards, 2004) and LIAM (O'Donoghue, Lennon, & Hynes, 2009), eliminating the need to create a model from scratch. This has allowed an internationalisation of the models, with developments in Belgium (Dekkers & Belloni, 2009), Italy (Dekkers et al., 2010), Canada (Spielauer, 2009) and the United Kingdom (Emmerson et al., 2004). Nevertheless, the decade has also seen the demise of several models, such as DYNACAN in Canada, CORSIM in the United States, NEDYMAS (Dekkers, Nelissen, & Verbon, 1993) in the Netherlands, the Belgian model (Joyeux, Plasman, & Scholtus, 1996) and MIDAS in New Zealand. The micro-econometric and micro-economic understanding of the processes that make up a dynamic microsimulation model has also greatly improved over this period. It is therefore worth considering the progress made by the discipline over the past decade. In this chapter, we describe the models developed, irrespective of whether they are still in use, their uses and data issues, and some of the methodological choices faced. We then review the progress made by
the discipline since the earliest models and suggest some directions for future development. The chapter draws on earlier work by two of the authors (Li & O'Donoghue, 2013); it is not intended for specialists in the field but aims to give an introductory bird's-eye view of dynamic microsimulation in the social sciences.
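As a bird's-eye illustration of what the models discussed in this chapter do, the following sketch (not drawn from any particular model; all transition probabilities are invented) shows the basic dynamic ageing loop: each simulated year, every individual faces a mortality draw, ages by one year, and passes through a stochastic labour market module.

```python
# Minimal dynamic ageing loop: project a list of person records forward
# year by year through mortality and labour market transition modules.

import random

def simulate(population, years, p_death, p_job_change, seed=0):
    """Return a list of yearly population snapshots."""
    rng = random.Random(seed)
    history = []
    for _ in range(years):
        survivors = []
        for person in population:
            if rng.random() < p_death(person["age"]):
                continue  # mortality module: person drops out
            person = dict(person, age=person["age"] + 1)  # ageing
            if rng.random() < p_job_change:  # labour market module
                person["employed"] = not person["employed"]
            survivors.append(person)
        population = survivors
        history.append(population)
    return history

pop = [{"age": a, "employed": True} for a in (25, 40, 60)]
out = simulate(pop, years=5,
               p_death=lambda age: min(1.0, 0.001 * 1.1 ** (age - 20)),
               p_job_change=0.1)
# out[-1] holds the simulated population after five years
```

Real models replace these toy probabilities with estimated transition equations and add many more modules (fertility, marriage, education, earnings, pensions), but the year-by-year stochastic structure is the same.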
10.2. Uses and applications

Dynamic microsimulation models can have many uses, and this section provides an overview of the principal ones. Table 10.1 summarises many of the existing dynamic microsimulation models in terms of their main purpose, which covers projection, evaluating or designing public policies, inter-temporal behaviour studies, etc. Given that the micro datasets most accessible to social scientists contain household- or individual-level information, most models do not incorporate information on business establishments, with a few exceptions such as MOSES in Sweden (Eliasson, 1977) and NEDYMAS in the Netherlands, where business behaviour is incorporated through market equilibria. There are only a few firm-level microsimulation models, for example DIECOFIS (Parisi, 2003), and they are mostly static.

Following the introduction of the time dimension into dynamic microsimulation, these models can provide useful projections of the trend of socio-economic development under current policies. DYNASIM2/3 (Favreault & Smith, 2004; Wertheimer, Zedlewski, Anderson, & Moore, 1986), APPSIM (Harding, 2007a), the SfB3 population model (Galler & Wagner, 1986), DYNAMITE (Ando et al., 2000), SADNAP (Van Sonsbeek, 2009) and DESTINIE1/2 (Blanchet et al., 2009; Bonnet & Mahieu, 2000) have all been used for these purposes. In some cases, dynamic microsimulation models have been used as an input for macro models, as in the case of the MOSART (Andreassen & Solli, 2000), DYNASIM2 and DARMSTADT models.

Table 10.1. Uses of dynamic microsimulation models

Pensions: 34
Inequality and redistribution: 13
Intergenerational: 6
General ageing: 4
Demographic: 10
Health and LT care: 3
Education: 4
Spatial: 5
Labour market: 1
Benefit forecasting: 1
Savings, wealth and macro: 5

Source: Li and O'Donoghue (2013).
Dynamic microsimulation models can also be used to evaluate the future performance of various long-term programmes, such as pensions, educational financing, and health and long-term care, by analysing simulated future cross-sectional data. Governmental models such as DYNACAN (Morrison, 2000), POLISIM (McKay, 2003), PENSIM2 (Emmerson et al., 2004), the SfB3 models (Galler & Wagner, 1986), MOSART (Andreassen, Fredriksen, & Ljones, 1996), PENMOD (Shiraishi, 2008) and SESIM (Ericson & Hussenius, 1999; Klevmarken & Lindgren, 2008) have been used extensively for this purpose.

The existence of baseline projections allows the design of new public policy by simulating the effect of potential reforms. Models such as LIAM (O'Donoghue et al., 2009), PRISM (Kennell & Sheils, 1990), the Belgian dynamic model (Joyeux et al., 1996), the SfB3 population model (Galler & Wagner, 1986), LIFEMOD (Falkingham & Johnson, 1995), SESIM (Klevmarken et al., 2007) and the Belgian MIDAS (Dekkers, 2010; Dekkers, Desmet, Fasquelle, & Weemaes, 2013) have all been used to look at pension reform. A number of models, such as DYNAMOD (Antcliff, 1993), the SfB3 cohort model (Hain & Helberger, 1986), LIFEMOD (Harding, 1993), SAGE (Zaidi & Scott, 2001) and GAMEO (Courtioux, Gregoir, & Houeto, 2009), have been used to examine changes to education finance, whereby education costs are to be paid for over an individual's lifetime. Fölster (2001) used a microsimulation model to examine reforms to social insurance utilising personal savings accounts.

By using longitudinal information created from dynamic microsimulation models, researchers can study inter-temporal processes and behaviours at both the aggregate and individual levels. For example, CORSIM (Keister, 2000), DYNAMOD (Baekgaard, 1998), the New Zealand MIDAS model (Stroombergen, Rose, & Miller, 1995) and, more recently, CAPP_DYN (Tedeschi, Pisano, Mazzaferro, & Morciano, 2013) have all been used to look at wealth accumulation.
Creedy and Van de Ven (2001), Nelissen (1996) and others have used dynamic microsimulation models to explore lifetime earning redistributions. Models such as DESTINIE1/2, LIAM, LifePaths and IFSIM have been used to examine intergenerational transfers (Baroni, Žamac, & Öberg, 2009; Blanchet et al., 2009; Bonnet & Mahieu, 2000; O'Donoghue et al., 2009; Rowe & Wolfson, 2000), whilst FAMSIM (Lutz, 1997) has been used to study the demographic behaviour of women, and MICROHUS (Klevmarken & Olovsson, 1996) examined the impact of a tax-benefit system on labour market mobility. Légaré and Décarie (2011) looked at disability status amongst the elderly. Models that simulate these processes can be used to design policies to combat these problems; for example, DYNASIM was used to study the effect of teenage childbearing, while CORSIM has been used to look at dental health within the US population (Brown, Caldwell, & Eklund, 1992). The models FEM and POHEM were designed to evaluate the evolution of the population's health status and its budget
implications for the United States and Canada (Will, Berthelot, Nobrega, Flanagan, & Evans, 2001; Zucchelli, Jones, & Rice, 2012), whilst the LifePaths modelling framework has been used in Canada to examine time use issues (Wolfson & Rowe, 1998). By combining spatial information with dynamic microsimulation models, a model can be used to predict the geographical trend of certain socio-economic activities. This type of model is usually referred to as a dynamic spatial microsimulation model, for example MOSES (Wu, Birkin, & Rees, 2008). There are a number of models that attempt to analyse policy changes at the national or regional level. For instance, the SVERIGE model simulates a number of demographic processes for policy analysis in Sweden (Holm, Holme, Mäkilä, Mattsson-Kauppi, & Mörtvik, 2006; Vencatasawmy et al., 1999), whilst the SMILE model (Ballas, Clarke, & Wiemers, 2005; O'Donoghue, Loughrey, & Morrissey, 2011) analyses the impact of policy change and economic development on rural areas in Ireland. Besides modelling economic policy, SimBritain (Ballas, Clarke, Dorling, et al., 2005) looks at the evolution of health, while models such as HouseMod (Phillips & Kelly, 2006) and SustainCity (Morand, Toulemon, Pennec, Baggio, & Billari, 2010) focus on housing issues with a time dimension. Dynamic microsimulation models typically project samples of the population over time. If a full cross-section of the population is projected, then one can, for example, examine future income distributions under different economic and demographic scenarios. DYNASIM2/3 (Favreault & Smith, 2004; Wertheimer et al., 1986), APPSIM (Harding, 2007a), the Sfb3 population model (Galler & Wagner, 1986), DYNAMITE (Ando et al., 2000), SESIM (Klevmarken & Lindgren, 2008), SADNAP (Van Sonsbeek, 2009), DESTINIE1/2 (Blanchet et al., 2009; Bonnet & Mahieu, 2000) and MIDAS (Dekkers et al., 2010) have been used for these purposes.
These models typically utilise macro-models or forecasts to align their own projections. Occasionally, however, the opposite has occurred, where dynamic microsimulation models have been used as input into macro-models, as in the case of MOSART (Andreassen & Solli, 2000), DYNASIM2 and the DARMSTADT models. A full list of models and their broad characteristics is presented by Li and O'Donoghue (2013); this chapter uses some data derived from their paper to sketch the field in broad outline. Although Table 10.1 tries to cover many known models irrespective of whether they are in use today, it is nearly impossible to list all models, as new ones are developed every year. In addition, the list focuses on the dynamic microsimulation models that are mainly used for socio-economic analyses; certain regional dynamic spatial models and transportation models are not included. One can also track the development of models through a number of lineages. The original Orcutt Socio-economic System (Orcutt et al., 1961) led to DYNASIM described above, which in turn led to CORSIM, which
led to the POLISIM, DYNACAN and SVERIGE models. In parallel, large modelling developments took place in the 1970s in Sweden and Germany, with current descendants, while the LSE welfare state programme of the 1980s spawned the LIFEMOD, PENSIM, PENSIM2 and SAGEMOD models in the United Kingdom, as well as the HARDING model in Australia and the LIAM model in Ireland. Subsequently, the HARDING model led, with the creation of NATSEM, to a range of models in Australia, while the LIAM model has influenced a number of European models, including the LIAM2 modelling framework. Separately from these largely related developments, Statistics Canada has developed a series of LifePaths/Modgen-based models based upon the original DEMOGEN. All these powerful dynamic microsimulation models come at the cost of high complexity. Compared with static microsimulation, dynamic microsimulation is much more costly to develop and poses more methodological challenges. This chapter discusses some of the methodological issues related to the construction of a dynamic microsimulation model, surveying current practice in the field around the world.
10.3. Methodological characteristics and choices

This section continues the discussion of methodological issues faced in constructing dynamic microsimulation models, but focuses on the technical implementation and the choices made in a model. Dekkers and Belloni (2009, p. 5) distinguish the simulation characteristics1 of a model from its technical characteristics. The present discussion focuses on the latter, although the consequences of technical choices for simulation characteristics are discussed as well. Table 10.2 provides an overview of the technical choices covered in this section.
10.3.1. Base dataset selection

Base dataset selection is important in a microsimulation model, as the quality of the input data determines the quality of the output. However,
1. 'Simulation characteristics' are defined as those characteristics of a model that have consequences for the actual or potential research problems that can be covered by the model, as well as the implicit or explicit assumptions that the model makes when handling a specific research problem. For example, if a model is to be used to assess the distributional aspects of a potential policy measure, then the choice of model should consider whether the model can simulate distributional effects; this feature is a simulation characteristic. Whether the model is developed in C++ or Fortran is a technical characteristic that is, in this case, of less relevance.
Table 10.2. An overview of the technical choices made by dynamic microsimulation models

Characteristic                    Category      Share of models (%)
Base population                   Cross         75.9
                                  Cohort        24.1
Type of time modelling            Discrete      88.9
                                  Continuous    16.7
Open or closed model              Open          14.8
                                  Closed        75.9
Use of alignment algorithms       —             68.5
Use of behavioural equations      —             29.6

Source: Li and O'Donoghue (2013).
Table 10.3. Base dataset selection of dynamic microsimulation models

Data source    Share (%)
Survey         43.8
Census         23.4
Synthetic      15.6
Admin          17.2

Source: Li and O'Donoghue (2013).
selection of a base dataset is not an easy task, as hardly any micro dataset contains all the information required by a dynamic population microsimulation model. The difficulties of picking a base dataset have been discussed by Zaidi and Scott (2001), Cassells, Harding, and Kelly (2006), Klevmarken and Lindgren (2008), among others. Table 10.3 describes the types of base dataset used by different dynamic microsimulation models, detailing the data source and sample size. Typically, a dynamic microsimulation model starts with one or several of the following types of dataset, according to their source:
• Administrative data
• Census data
• Household survey data
• Synthetic datasets
Administrative data often contain extensive information on taxable earnings and basic (tax-related) socio-economic variables, and the data are often collected for most of the population, giving a much larger sample size than survey data. Because the data are typically collected for taxation or law-enforcement purposes, they can be accurate for core variables, for example employee earnings, but limited or less specific
for peripheral variables such as education levels. In addition, some specific information that is relevant to socio-economic research may be missing or available only for certain sub-groups; this is often the case for individuals' educational attainment. Self-reported information, for example on household wealth, may also be misleading. For these reasons, models using administrative data often supplement it with information from external sources: both SESIM in Sweden and MIDAS in Belgium imputed a few variables from survey datasets. Legal and privacy restrictions may also prevent administrative data from being accessible. Models such as CORSIM, DYNACAN and DYNAMOD use census data; while census data typically have better coverage than household surveys, they often contain less information and have to be supplemented with imputed information from other sources. Household survey data, for example the LII survey utilised in the LIAM model, are also frequently used as the base dataset because they are rich in variables of interest and offer information on the dynamics of behaviours. However, household survey datasets may suffer from smaller sample sizes and the need for weight adjustment. The use of weights in a dynamic model adds complexity in many areas and can result in individuals being given different weightings at different points in their lives. As microsimulation aims at inference to a finite population, one potential solution, as implemented in MIDAS, DYNAMITE and ANAC, is to replicate households according to their frequency weights, so that each household has the same basic weight. Another type of base dataset is synthetic data. These are selected either when a longitudinal model is used, as in the case of DEMOGEN, HARDING, LIFEMOD, LIAM and BALDINI (O'Donoghue, 2001), or where no data exist, as in the case of the NEDYMAS model, where a synthetic initial sample representative of the Dutch population in 1947 was generated.
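The frequency-weight replication approach mentioned above (as used in MIDAS, DYNAMITE and ANAC) can be sketched as follows. This is a minimal illustration, not any model's actual code; the data layout and function name are hypothetical:

```python
def replicate_households(households):
    """Expand a weighted sample into an unweighted one by cloning each
    household according to its (integer-rounded) frequency weight."""
    expanded = []
    for hh in households:
        copies = int(round(hh["weight"]))
        for _ in range(copies):
            clone = dict(hh)
            clone["weight"] = 1.0  # every replicate carries the same basic weight
            expanded.append(clone)
    return expanded

sample = [{"id": 1, "weight": 2.0}, {"id": 2, "weight": 3.0}]
expanded = replicate_households(sample)
print(len(expanded))  # 5 replicates, each with weight 1.0
```

In practice, non-integer weights require a stochastic rounding rule so the replicated total still matches the population count.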
Synthetic datasets are artificially created, with all required variables populated on the basis of known macro statistics and distributional assumptions. They are often used to understand the theoretical implications of a single policy in depth. However, significant adjustments and justifications are required before inferring policy effects in the actual population. For microsimulation models analysing the dynamics of elderly earnings or pensions, the dataset requirement is usually higher than for the average microsimulation model, as it requires historical variables that can affect the evolution of the socio-economic status of the elderly. This necessity implies that either retrospective information or a long panel dataset containing rich demographic, employment and pension data is essential. Most researchers unfortunately do not have access to datasets that fulfil this requirement. Instead, hybrid sources are often used, combining datasets from various sources through statistical matching and simulation techniques. For example, DYNASIM3 (Favreault &
Smith, 2004) matches two survey datasets, namely the Survey of Income and Program Participation (SIPP) and the Panel Study of Income Dynamics (PSID), to construct its base dataset. CBOLT (Oharra, Sabelhaus, & Simpson, 2004) uses a similar approach to complement its main dataset with the SIPP, the PSID and data from the Current Population Survey (CPS). A recent model, T-DYMM (Tedeschi, 2011), intends to match administrative records with the European Union Statistics on Income and Living Conditions (EU-SILC) dataset. For researchers without access to the required data, simulation is used to impute the longitudinal history. The CORSIM model simulates part of the longitudinal profile based on a historical cross-sectional dataset and matches the model output to historical aggregate information such as fertility and mortality rates (Caldwell, 1996). LIAM also simulates historical profiles by exploiting retrospective variables, previous census information and other survey data (Li & O'Donoghue, 2012). Data matching and imputation methods each have their pros and cons, and the method chosen is often tailor-made to the specific datasets and projects. Statistical matching can be used when there are sufficient matching variables in a comparable dataset. This method has the desirable feature of using 'real-world' values, although the quality of matching may vary substantially depending on the quality and quantity of the matching variables. Synthetic simulation has the advantage of being flexible, but longitudinal consistency may be an issue due to limited benchmark information. Additionally, it is common to estimate behavioural relations from a dataset other than the one used in the simulation; in this case, however, one should consider comparability and consistency issues carefully, owing to differences in survey design and variable definitions. Another issue in base dataset selection is the sample size: the larger the sample size, the more sub-groups of the population can be considered.
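The nearest-neighbour statistical matching described above might be sketched as follows. This is a simplified illustration; the variable names and the distance metric are assumptions, and real implementations typically standardise the matching variables and use richer distance measures:

```python
def nearest_neighbour_match(recipients, donors, match_vars):
    """For each recipient record, find the donor closest on the shared
    matching variables (squared Euclidean distance) and borrow the
    donor's extra fields."""
    matched = []
    for rec in recipients:
        best = min(
            donors,
            key=lambda d: sum((rec[v] - d[v]) ** 2 for v in match_vars),
        )
        fused_rec = dict(rec)
        for key, value in best.items():
            fused_rec.setdefault(key, value)  # only fill fields the recipient lacks
        matched.append(fused_rec)
    return matched

# the recipient survey records age and income; the donor survey also records pension wealth
recipients = [{"age": 40, "income": 30000}]
donors = [
    {"age": 39, "income": 29000, "pension_wealth": 50000},
    {"age": 70, "income": 12000, "pension_wealth": 200000},
]
fused = nearest_neighbour_match(recipients, donors, ["age", "income"])
print(fused[0]["pension_wealth"])  # 50000
```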
Many dynamic models have a baseline dataset with more than 100,000 observations (Table 10.4). Sample size is particularly important for inter-temporal analysis because similar individuals in a cross-sectional sample may have taken different paths to reach the same state. Regardless of the source of the dataset, panel data is usually preferred as it records changes over time. Sample size also affects model run time, as larger datasets take longer to simulate, although this is less of an issue with faster computers.

Table 10.4. Sample size distribution

Base data size      Share (%)
≤15,000             37.3
15,000–100,000      18.6
100,000+            44.1

Source: Li and O'Donoghue (2013).
10.3.2. Cohort model or population model

One issue that is closely related to base dataset selection is the type of data structure a model uses. Harding (1993) and others have categorised inter-temporal dynamic models into two types: cohort models, which simulate a single cohort over a relatively long time period (usually a lifetime), and population models, which simulate a population cross-section over a defined period of time. In addition, some models focus only on adults (i.e. ignore children) and thus do not represent the entire age spectrum, although they may contain a cross-section of the population. From a model design perspective, the distinction between cohort and population models may be a simulation property rather than a technical characteristic. The distinction made in the literature has, historically, more to do with computational capacities and data constraints than with any major methodological differences. Cohort models were typically used because the computing costs of simulating whole lifetimes for cross-sections with sufficient sample sizes for cohort examination were too high. The method typically features fewer interactions between micro-units than a full-fledged population model. Both types of model can be simulated in the same modelling environment: a cohort model is simply a model that ages a sample of individuals in a particular age group, while a population model ages a sample of individuals of different ages. Both samples are passed through ageing procedures to produce life event histories over the modelled period, and it is possible to model both types on the same computing platform. The potentially larger size of the cohort modelled in dynamic cohort models allows lifetime income patterns for smaller population groups, such as recipients of disability benefits or lone parents, to be studied.
Some cross-section models such as MOSART combine both modelling techniques, as they may use a very large dataset. With increasing computational and modelling capacities, newer models tend to be population models, as one may obtain more information and draw inferences about the population directly. Furthermore, cohort models are by definition less useful for the simulation of households and their income. This means that many indicators that use household-level information, including the at-risk-of-poverty rate, the low-work-intensity rate and the Gini coefficient, cannot readily be simulated. For this reason, population models can be more useful for applied research.
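The point that both model types can share one ageing loop can be illustrated with a toy sketch; the flat mortality hazard and the data layout are purely illustrative:

```python
import random

def age_one_period(person, rng):
    """A toy ageing procedure: increment age and apply a flat mortality hazard."""
    person = dict(person)
    person["age"] += 1
    person["alive"] = person["alive"] and rng.random() > 0.01
    return person

def simulate(population, periods, rng):
    """The same loop serves both model types: feed it a single-age sample
    for a cohort model, or a mixed-age sample for a population model."""
    history = [population]
    for _ in range(periods):
        population = [age_one_period(p, rng) for p in population if p["alive"]]
        history.append(population)
    return history

rng = random.Random(0)
cohort = [{"age": 0, "alive": True} for _ in range(100)]          # cohort model input
cross_section = [{"age": a, "alive": True} for a in range(100)]   # population model input
hist = simulate(cohort, 10, rng)
print(len(hist))  # 11 snapshots: the initial sample plus ten periods
```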
10.3.3. Ageing method in dynamic microsimulation

Ageing within a microsimulation context may be defined as the process of updating a database to represent current conditions, or of projecting a database forward one or more years to represent expected future conditions. There are two types of ageing processes: static ageing and dynamic ageing.
Static ageing involves adjusting the weights of the observations so that the simulated population distribution matches macro-aggregates. For example, in order to simulate an ageing society, the weight of young people gradually decreases over time while the weight of elderly people increases; there is, however, no change to the attributes of these individuals. Dynamic ageing, in contrast, changes the attributes of the individuals instead of altering their weights. In the same example of simulating an ageing society, models with dynamic ageing will update the age and other related attributes of individuals over time instead of changing their weights. The method can be referred to as cross-sectional dynamic ageing if all individuals are updated before the model moves on to the next time period, or longitudinal dynamic ageing if the model simulates all time periods for one individual before repeating the process for the next one in the population. The difference is far from trivial, because cross-sectional ageing allows for matching and interactions between individuals in the dataset, whereas longitudinal ageing needs to resort to creating artificial individuals for the sole purpose of forming a partnership. Thus, modellers who want to simulate household characteristics in population models will resort to cross-sectional ageing in almost all cases. Generally speaking, dynamic ageing is more popular and is sometimes used as the criterion for judging whether a model belongs to the camp of dynamic microsimulation models.2 While static ageing can ideally produce the same representative population cross-sections as models with dynamic ageing (Dekkers & van Camp, 2011), it works in a different way, as it does not update socio-economic variables. In many cases, the only variable that needs to be changed over time is the weight of the observations.
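A minimal sketch of static ageing as reweighting might look like the following; the group definitions and target shares are invented for illustration:

```python
def static_ageing(people, target_shares):
    """Rescale observation weights so the weighted age-group shares match
    external targets, leaving individual attributes untouched."""
    total = sum(p["weight"] for p in people)
    group_w = {}  # current weighted total of each age group
    for p in people:
        group_w[p["group"]] = group_w.get(p["group"], 0.0) + p["weight"]
    reweighted = []
    for p in people:
        factor = target_shares[p["group"]] * total / group_w[p["group"]]
        q = dict(p)
        q["weight"] = p["weight"] * factor
        reweighted.append(q)
    return reweighted

people = [
    {"group": "young", "weight": 60.0},
    {"group": "old", "weight": 40.0},
]
# an ageing society: the elderly share rises from 40% to 55%
aged = static_ageing(people, {"young": 0.45, "old": 0.55})
print(aged[1]["weight"])  # 55.0
```

With several simultaneous targets (age by sex by region, say), a multidimensional calibration routine replaces this single-margin rescaling.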
This might be attractive for modellers who already have a static microsimulation model. However, static ageing also has a number of disadvantages. Klevmarken (1997) highlighted that whereas static ageing may avoid the drift in the projected cross-section that dynamic ageing can suffer from when its dynamic equations are misspecified, it cannot account for mobility between states. In addition, he pointed out that it is inefficient not to use all available historical information to project into the future. A consequence of not modelling the mobility of individuals between points in time is that it reduces the types of analyses that can be undertaken by a microsimulation model; for example, it is not possible to conduct analyses that require life event histories, such as the simulation of pensions.
2. This chapter uses a broad definition of dynamic microsimulation, where the main difference between static and dynamic models is the inclusion of a time horizon. Some of the methodological discussions, such as those on estimation and behavioural responses, only apply to dynamically aged microsimulation models.
Furthermore, future weights are needed to age a dataset. Although macro-models or other forecasting devices can be used, they may not forecast weights at the level of detail required. Besides, the weight calculation may be further complicated when the target is multidimensional.3 Buddelmeyer, Hérault, Kalb, and van Zijll de Jong (2009) and De Blander, Schockaert, Decoster, and Deboosere (2013) have discussed some recent applications of static ageing. Generally speaking, static ageing cannot be used when there is no individual in the sample in a particular state (Dekkers & Van Camp, 2011), and if there are only a small number of cases in a particular household category, a very high weight may have to be applied, resulting in unstable predictions. As a result, static ageing procedures are mostly used in short- to medium-term forecasts, where it can be expected that large changes have not occurred in the underlying population; it is more difficult to use static ageing over longer periods due to the changing characteristics of the population. Dynamic ageing aims to reflect the ageing process in real life, though it can make a model very complicated and computationally expensive. Cross-sectional dynamic ageing is the most common method, while longitudinal ageing is sometimes used in cohort models. Dynamic ageing can consistently estimate the characteristics of future income distributions under ideal circumstances in which all transition probabilities and state-specific expectations can themselves be estimated consistently. This may be possible in a simple model with a small number of processes, but in a fully dynamic model of work and life histories, many more processes need to be jointly estimated, a formidable requirement given the available data. Therefore, it is necessary to make assumptions to render estimation feasible, for example that education choice happens before labour participation choice.
In addition, one may sometimes need to assume independent error terms and make other simplifying assumptions in order to make estimation tractable. Although these assumptions are common in practice, they may lead to theoretical pitfalls and biased results when used excessively without proper testing. Projections over time at the micro-level are also particularly susceptible to assumptions about the stability of relationships estimated from panel data, to structural breaks and to misspecification error, as modelling at this level involves more detail than in macro-models. Additionally, current knowledge of micro-behaviour is not good enough to specify a fully dynamic model. As a result, dynamic ageing is often combined with an alignment (calibration) mechanism to keep aggregate outputs in line with predictions from macro-models. The method allows individual transitions to be simulated while ensuring that
3. For some examples of multidimensional reweighting algorithms, see Deville, Särndal, and Sautory (1993) and Tanton, Vidyattama, Nepal, and McNamara (2011).
aggregate outputs track macro forecasts (see, for example, Chénard, 2000a, 2000b).
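One common alignment device, loosely following the sort-based approach associated with Chénard, can be sketched as follows. This is a simplified illustration; production implementations refine the ranking variable and handle ties and subgroup targets:

```python
import random

def align_transitions(probabilities, target_count, rng):
    """Sort-based alignment: rank individuals by predicted probability
    minus a uniform random draw, and let exactly `target_count` of them
    make the transition, so the aggregate tracks the external forecast."""
    ranked = sorted(
        range(len(probabilities)),
        key=lambda i: probabilities[i] - rng.random(),
        reverse=True,
    )
    movers = set(ranked[:target_count])
    return [i in movers for i in range(len(probabilities))]

rng = random.Random(42)
probs = [0.10, 0.80, 0.30, 0.90, 0.05]    # model-predicted transition probabilities
flags = align_transitions(probs, 2, rng)  # the macro forecast calls for 2 transitions
print(sum(flags))  # exactly 2 individuals transition
```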
10.3.4. Discrete or continuous time modelling

Another choice in the development of dynamic microsimulation models is the treatment of time. Discrete time models simulate which individuals experience particular events in given time intervals, while continuous time models treat time as a continuous variable and determine the exact time at which an event occurs (Willekens, 2006). Discrete time microsimulation models change individual attributes once per time period. In demography, for example, modules in dynamic models are often constructed using annual transition probability matrices: individuals are passed through a collection of transition matrices in each time period of the simulation (usually a year) to determine their simulated life paths, for example death. This method often assumes a sequential order of life events, even if they are interdependent in real life, so the order in which the transition matrices are applied is very important. If the marriage pattern is determined first, the potential fertility rate will change; similarly, a premarital pregnancy will increase the probability of getting married. Galler (1997) discussed a number of options in this situation, including the random ordering procedure used by the DARMSTADT (Heike, Hellwig, & Kaufmann, 1987) and Hungarian models (Csicsman & Pappne, 1987). There are a number of problems with this type of approach. Firstly, transitions are assumed to take place at a single point in each time period, and the duration of the event must last at least one time period (typically a year, but it may be shorter). If the time period is a year, for example, this approach rules out transitions into and out of unemployment over the course of a year. This is unrealistic, as many people, such as seasonal workers, experience unemployment spells of less than one year.
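The annual transition-matrix mechanism can be sketched as follows; the states and probabilities are invented for illustration:

```python
import random

# An illustrative annual transition matrix over labour-market states
# (rows: current state; values: probability of each state next year).
TRANSITIONS = {
    "employed":   {"employed": 0.90, "unemployed": 0.08, "retired": 0.02},
    "unemployed": {"employed": 0.40, "unemployed": 0.55, "retired": 0.05},
    "retired":    {"retired": 1.00},                      # absorbing state
}

def next_state(state, rng):
    """Draw next year's state by inverse transform over the row probabilities."""
    u = rng.random()
    cumulative = 0.0
    for candidate, p in TRANSITIONS[state].items():
        cumulative += p
        if u < cumulative:
            return candidate
    return state  # guard against floating-point shortfall

rng = random.Random(1)
path = ["employed"]
for _ in range(10):  # ten annual time periods
    path.append(next_state(path[-1], rng))
print(len(path))  # 11 states: the initial one plus ten simulated years
```

Any within-year spell, such as a three-month unemployment episode, is invisible at this resolution, which is precisely the limitation discussed in the text.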
Therefore, the discrete time transitions simulate net transitions (see Galler, 1997) at discrete points in time, ignoring the transition path taken to reach the end state. Some models, for example MICROHUS and SESIM, therefore developed a workaround in which the end state is simulated together with an extra variable describing the transition. Taking unemployment as an example, the method simulates both the employment status (end state) and the length of unemployment, which can be used to partially describe the transition in greater detail. Continuous time microsimulation models, on the other hand, usually use survival models to simulate the timing of events. Rather than simulating annual transition probabilities, survival functions model the length of time an individual will remain in his or her current state, as in DYNAMOD and SOCSIM (Hammel, 1990). The method was discussed extensively by Willekens (2006). Once a trigger event such as marriage
has occurred, an individual is passed through each survival function for which they are eligible, conditional on the current state. For example, once an individual is married, they become eligible for divorce. This process is repeated until the end state, for example death, of the simulated individual is reached. While the continuous time model has some theoretical advantages, as it pinpoints the time of events, it also has considerable practical limitations. The estimation of competing risks and survival functions places very high demands on the data that are rarely met by the data actually available (Zaidi & Rake, 2001). Given that most base datasets are collected yearly and many taxation procedures are reviewed annually, it is easier to incorporate a discrete time framework. Although a continuous time model can simulate the sequence of event occurrences, it still faces the estimation problems of interdependent processes and correlated error terms. In addition, the potential interdependence of transitions between members (e.g. of a family) further raises the complexity of implementation; see Zinn (2012) for a discussion of this issue in the case of the matching of individuals in partnerships. Alignment for continuous models is also more difficult, as cross-sectional adjustments would erode the advantages of duration models, and the potential computational cost of alignment is much higher in continuous time models.

10.3.5. Open versus closed model

A dynamic microsimulation model builder has to decide whether the model should be open, closed or a mixture of the two. A model is considered 'closed' if, except for new-borns and migrants, the model uses only a fixed set of individuals to create and maintain social links. Thus, if an individual is selected to be married, their spouse is selected from within the existing population of the model. Similarly, a baby is always attached to a family within the sample.
In contrast, an open model starts with a base population, and new individuals are generated exogenously when spouses are required. This has the advantage that simulations for individuals (and their immediate families) can be independent of other individuals, allowing the model to run in parallel on different computer processors to reduce the run time. Open models, for instance PENSIM and LifePaths, have the advantage of simpler interaction modelling, for example a newly married partner can be created artificially to fit the socio-economic characteristics of an individual. However, an open model makes it more difficult to match external macro-aggregates, as the sample may not stay representative of the population as new individuals are created. Although possible, aligning a varying population with macro-aggregates is a non-trivial task, as the weights would require constant dynamic reweighting, and in the case of heavy alignment, the benefits of running the model in parallel might
be lost. Furthermore, the individuals created in an open model may not replicate the variation among actual individuals. Thus, a closed model approach is preferable when the model is used to simulate household-level variables, including equivalent household income and its derived indicators such as the Gini coefficient or the at-risk-of-poverty rate. As a result, most dynamic population models in use are closed and operate in a cross-sectional ageing framework, whereas cohort models are often open and use the longitudinal ageing method.
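The closed-model spouse selection described above can be sketched with a toy age-based matching rule. Real models use much richer matching criteria (education, region, labour-market status); all names and fields here are illustrative:

```python
def match_partners(candidates):
    """Closed-model matching: pair each single with the closest-in-age
    single of the opposite sex within the existing sample, rather than
    generating a spouse exogenously as an open model would."""
    singles = {"F": [], "M": []}
    for person in candidates:
        singles[person["sex"]].append(person)
    couples = []
    for f in sorted(singles["F"], key=lambda p: p["age"]):
        if not singles["M"]:
            break  # no eligible partner left in the closed population
        m = min(singles["M"], key=lambda p: abs(p["age"] - f["age"]))
        singles["M"].remove(m)
        couples.append((f["id"], m["id"]))
    return couples

people = [
    {"id": 1, "sex": "F", "age": 30},
    {"id": 2, "sex": "M", "age": 31},
    {"id": 3, "sex": "F", "age": 55},
    {"id": 4, "sex": "M", "age": 54},
]
couples = match_partners(people)
print(couples)  # [(1, 2), (3, 4)]
```

An open model would instead construct a synthetic spouse record for each selected individual, with no search over the existing sample.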
10.3.6. Link between micro- and macro-models

Microsimulation models increasingly interact with the macro-economy, through either an alignment process or computable general equilibrium (CGE) feedback. Alignment, as discussed earlier, offers a simple but limited way to enforce aggregate statistics within a simulation; it is usually limited to very specific variables and does not change based on feedback from the simulated micro-data. Besides alignment, it is also possible to use CGE models to link macro, meso and micro models (see Ahmed & O'Donoghue, 2007; Davies, 2004). CGE models offer a potential opportunity to allow macro-models to interact with micro-models via prices in different markets, which is particularly useful for analysing large-scale macroeconomic shocks. For instance, IFSIM links a microsimulation model with a simple single-sector CGE model. There are a few papers discussing potential methods of linking a microsimulation model and a CGE model. Cockburn (2001) used an integrated approach to link a survey dataset within a CGE framework, where the main idea was to replace the traditional unit of analysis in CGE, the representative household, with real households. Another approach is to separate the macro and micro components while allowing the results of the micro- or macro-model to be fed into the other. Depending on the direction of the output feeding and the number of iterations, this approach has been further subcategorised into 'Top-Down', 'Bottom-Up', 'Top-Down Bottom-Up' and 'Iterated Top-Down Bottom-Up' approaches (Baekgaard, 1995; Galler, 1990; Savard, 2003). Colombo (2010) compared several CGE-microsimulation linkage methods and suggested the 'Iterated Top-Down Bottom-Up' approach as currently the most complete. However, with only a few exceptions, such as NEDYMAS (Dekkers et al., 1993), which used the iterated approach, most macro-micro linking attempts in dynamic microsimulation models are limited to one direction only.
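A minimal 'Top-Down' linkage can be sketched as follows: a macro outcome (here, an assumed wage-growth figure) is imposed on the micro-units before micro-level distributional statistics are recomputed. The numbers and field names are invented for illustration:

```python
def top_down_link(macro_wage_growth, households):
    """'Top-Down' linkage: impose a macro-model's wage-growth outcome on
    every micro-unit, then recompute distributional statistics below."""
    linked = []
    for hh in households:
        hh = dict(hh)  # leave the original micro-data untouched
        hh["wage"] *= 1.0 + macro_wage_growth
        linked.append(hh)
    return linked

households = [{"id": 1, "wage": 20000.0}, {"id": 2, "wage": 50000.0}]
shocked = top_down_link(0.03, households)  # an assumed 3% macro wage shock
print(round(shocked[0]["wage"], 2))  # 20600.0
```

A 'Bottom-Up' or iterated variant would additionally aggregate the shocked micro-data and pass the totals back to the macro-model until convergence.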
The integration of CGE with microsimulation is still limited at the current stage (Ahmed & O’Donoghue, 2007). This might be the result of several factors, including modelling complexity, data issues, model stability and computational costs. Robilliard and Robinson (2003) indicated
Jinjing Li, Cathal O’Donoghue and Gijs Dekkers
that current approaches to micro-macro linking might still need to be refined before addressing distributional issues. In addition, linking with CGE requires good-quality household income and expenditure data, which are not widely available. Furthermore, the integration of CGE and dynamic microsimulation could compound the uncertainty in the results, owing to the complex interactions between different socio-economic variables, and greatly increases computation time. Given the complexity of incorporating a complete CGE model in microsimulation, it might be more feasible to incorporate a partial equilibrium model instead. A static microsimulation model, IZAΨMOD, allows feedback from a computed labour market equilibrium to refine labour supply behavioural responses. In a spatial microsimulation model, one may consider modelling feedback from the housing market. This type of single-market equilibrium implementation can sometimes avoid the complexities introduced by the social accounting matrix and inaccurate expenditure data.

10.3.7. Links and integrations with agent-based models

Although this chapter mostly focuses on the development of dynamic microsimulation models, it is worth noting that microsimulation is closely related to two other individual-level modelling approaches, cellular automata and agent-based models (ABMs) (Williamson, 2007). In particular, ABMs are also used in social science to analyse macro-level phenomena that emerge from micro-units. An ABM typically consists of a set of autonomous decision-making entities (agents), a set of agent relationships and methods of interaction, and the agents’ environment (Macal & North, 2010). It is often used to show how macro-level properties such as spatial patterns and levels of co-operation emerge from the adaptive behaviours of individuals.
Traditionally, ABMs have been highly abstract and theoretical, with few direct empirical applications (Boero & Squazzoni, 2005; Janssen & Ostrom, 2006). In recent years, however, there has been growing interest in the ABM literature in injecting empirical data in order to simulate real-world phenomena (Hassan, Pavon, & Gilbert, 2008; Parker, Manson, Janssen, Hoffmann, & Deadman, 2003). From a practical point of view, as ABMs add more socio-economic attributes to their agents, and as microsimulation models add more behaviour rules and interactions, the two approaches are moving toward common ground (Williamson, 2007). Some papers, for example Eliasson (1991) and Baroni et al. (2009), use the terms interchangeably when behavioural models are included in microsimulation. ABMs cover an important aspect of socio-economic modelling. For example, network effects, which have long been discussed by sociologists and economists, hardly exist in microsimulation models beyond
Dynamic Models
spouse matching. Microsimulation modellers often implicitly assume that the effects of social pressures and peer effects are already embedded in the existing distribution and are likely to remain constant, that is, there is no need to update them as time passes. While this assumption might be acceptable for some research, such as tax reform analysis, it might be too strong for other types, for example evaluating alternative health intervention policies. ABMs, on the other hand, often explicitly model these interactions and allow certain social factors to change as the population evolves. With the growing availability of social network data, it is now possible to integrate empirically tested adaptive behaviours from ABMs into microsimulation models to produce more realistic models. The introduction of network effects could benefit a range of microsimulation models, for example health simulation models, in which social factors may play a role. In addition, peer effects may also help to model the evolution of marriage and fertility patterns and the formation and dissolution of neighbourhoods, which have been extensively discussed by Richiardi (2014). It should also be noted that this integration may bring some disadvantages. Implementing micro interactions would greatly increase computational cost and complexity, thus making the model more difficult to understand and validate. Moreover, the current base datasets of microsimulation models are often standard surveys or census data that do not cover extensive network attributes. At the current stage, the implementation of extensive ABM-style interactions in microsimulation models is still in its infancy, and existing attempts are limited to the introduction of simple behaviour rules, for example copying consumption habits as in Lawson (2011).
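As a hedged illustration of the kind of simple behaviour rule just mentioned, the sketch below lets each agent’s consumption rate drift toward the average of its network neighbours in every period. The tiny network, the blending weight and the update rule are all invented for exposition; this is not Lawson’s (2011) implementation.

```python
def update_consumption(rates, network, weight=0.3):
    """One simulated period: blend each agent's habit with its neighbours' mean."""
    new_rates = {}
    for agent, neighbours in network.items():
        if neighbours:
            peer_mean = sum(rates[n] for n in neighbours) / len(neighbours)
            new_rates[agent] = (1 - weight) * rates[agent] + weight * peer_mean
        else:
            new_rates[agent] = rates[agent]  # isolated agents keep their habit
    return new_rates

rates = {"a": 0.9, "b": 0.5, "c": 0.3}               # initial consumption rates
network = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}  # a fixed, tiny friendship graph
for _ in range(50):                                  # habits drift towards consensus
    rates = update_consumption(rates, network)
```

On a connected graph this averaging rule converges towards a consensus value, which is exactly the kind of endogenous social-factor dynamic that a fixed micro-level equation cannot reproduce.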
10.3.8. Modelling transitions and behaviours

Microsimulation models can use structural behavioural models, reduced-form statistical models or simple transition matrices to simulate changes. Behavioural models are grounded in economic theory, in the sense that changes in institutional or market characteristics cause changes in individual behaviour through an optimisation process. In contrast, reduced-form statistical models aim to model transition probabilities using relevant variables, reproducing observed distributional characteristics in sample surveys without explicit consideration of policies. Reduced-form models usually do not respond to external market and institutional characteristics and implicitly assume a stable policy environment. A transition matrix is often a time-homogeneous Markov chain with a limited number of states (e.g. age group, gender). It is the easiest way to model potential changes, requiring the least theoretical input.
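A minimal sketch of the transition-matrix approach is given below, with illustrative (not estimated) labour market states and probabilities: each period, an individual’s next state is drawn from the row of the matrix corresponding to the current state.

```python
import random

STATES = ("employed", "unemployed", "inactive")

# Row = current state; columns = probability of each next state (rows sum to 1).
# These probabilities are invented for illustration, not estimates from any survey.
TRANSITIONS = {
    "employed":   (0.90, 0.05, 0.05),
    "unemployed": (0.40, 0.45, 0.15),
    "inactive":   (0.10, 0.05, 0.85),
}

def next_state(state, rng=random):
    """Draw next period's state from the current state's transition row."""
    return rng.choices(STATES, weights=TRANSITIONS[state])[0]

def simulate(state, periods, rng=random):
    """Simulate one individual's state history over the given number of periods."""
    history = [state]
    for _ in range(periods):
        state = next_state(state, rng)
        history.append(state)
    return history

path = simulate("employed", periods=10)
```

Because the matrix is fixed, the chain is time-homogeneous: the same probabilities apply in every period regardless of policy, which is precisely why this approach is restricted to status quo simulation.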
Reduced-form models and transition matrices are often used to simulate mortality, fertility, family formation, labour market transitions, etc. As these models usually do not depend on policy parameters, they are often restricted to simulating the status quo and are not suitable for reform analysis. The method is often used in static tax-benefit microsimulation models as well as in the demographic components of dynamic microsimulation models. In a structural behavioural model, policy parameters have a direct or indirect impact on individual behaviour. An example is a labour supply model that responds to changes in the tax-benefit system. Klevmarken (1997) outlined three criteria for choosing behavioural equations in a microsimulation model:
(1) They should be relevant for the objectives of the model.
(2) There should be major behavioural adjustments to the policy that the model is built to analyse.
(3) Behaviour that influences the fiscal balance should be included.

Examples of behavioural responses that fit these requirements include labour supply, retirement decisions, the effect of income and price changes on consumption, fertility and marital decisions, the take-up of social benefits and many others. In the case of labour supply, structural behavioural simulation models typically consist of three subcomponents: an arithmetic tax-benefit model to estimate budget constraints, a quantifiable behaviour model using variables that can be simulated and a mechanism to predict labour supply under a new policy environment (Creedy, Duncan, Harris, & Scutella, 2002). Compared with earlier microsimulation models, more models today incorporate behavioural responses into their design, although these responses are often limited to labour market simulations. Models such as MICROHUS, PRISM, NEDYMAS, SAGE and LIAM all incorporate labour supply behavioural responses to the tax-benefit system, while SESIM, DYNAMITE, ANAC and SADNAP model retirement decisions depending on the social security system. However, there is still only limited implementation of life-cycle models in microsimulation, and research on the impact of prediction errors on simulation results is scarce.

10.3.9. Alignment with projections

As statistical models are typically estimated using historical datasets with specific characteristics and period effects, projections of the future may contain errors or may not correspond to exogenous expectations of future events. In addition, the complexity of micro-behaviour modelling means that simulation models may over- or under-predict the occurrence of a certain event, even in a well-specified model (Duncan & Weeks, 2000). Furthermore, behavioural models are often estimated on
cross-sectional data or panel data covering very short time spans. The simulation results of these models over time may therefore become unrealistic, even when their results at any single point in time are reasonable. Because of these issues, methods of calibration known as alignment have been developed to correct for issues related to the adequacy of micro projections. As a fortunate side effect, alignment allows dynamic microsimulation models to be used for policy assessment together with (and making use of) the simulation results of macroeconomic models (Dekkers, 2013). Scott (2001) defines alignment as ‘a process of constraining model output to conform more closely to externally derived macro-data (“targets”)’. Clearly, in an ideal world, a system of equations would be estimated that could replicate reality and give effective future projections without the need for alignment. However, as Winder (2000) stated, ‘microsimulation models usually fail to simulate known time-series data. By aligning the model, goodness of fit to an observed time series can be guaranteed.’ Some modellers suggest that alignment is an effective pragmatic solution for highly complex models (O’Donoghue et al., 2009), as it offers a connection between micro and macro-data. Alignment also has its downsides, several of which are highlighted by Baekgaard (2002). Concerns include the consistency of the estimates and the level of disaggregation at which alignment should occur. The implementation of alignment may distort the relationships between key variables in undesired ways (Li & O’Donoghue, 2014). The existence of an alignment mechanism may constrain model outputs to always hit aggregate targets even if there has been an underlying behavioural or structural change. For example, if education levels rose, mortality rates would fall and female labour force participation might increase.
If the alignment mechanism for each process did not incorporate the impact of educational achievement, then an increase in the education level would have no effect on these aggregates. It has been suggested that equations should be reformulated rather than constrained ex post. Klevmarken (2002) demonstrated various potential methods of incorporating alignment information into estimation. Furthermore, changes in individual states induced by alignment may be at odds with individual entry conditions. For example, in situations where the entry conditions into a state are based on retrospective data, alignment tables may not contain all the variables required. It is for this reason that so-called ‘hard take and leave conditions’ have been developed to be used jointly with alignment. These conditions a priori force individuals with predetermined characteristics into, or prevent them from entering or leaving, a certain state, and adapt the alignment tables to account for the sizes of these groups. In most cases, alignment methods are only documented briefly as a minor technical part of the main model, and there are very few studies analysing how projections and distributions change as a result of using different alignment methods. Despite the potential pitfalls of
its statistical properties, aligning the output of a microsimulation model to exogenous assumptions has become standard over the past decade. As Anderson (2001) noted, almost all existing dynamic microsimulation models are adjusted to align with external projections of aggregate or group variables when used for policy analysis. Continuous variables such as earnings are typically aligned with a fixed ratio in order to meet the projected average or distribution, whilst binary variables, such as working status, are aligned using various methods, including multiplicative scaling, sidewalk and sorting-based algorithms (see Morrison, 2006). Microsimulation models using historical datasets, for example CORSIM, align their output to historical data to create a more credible profile (SOA, 1997), while models that work prospectively, for example APPSIM, use the technique to align their simulations with external projections (Kelly & Percival, 2009).
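Two of the devices just mentioned can be sketched as follows: a fixed-ratio adjustment for a continuous variable and a simple sorting-based algorithm for a binary one. Both are deliberately simplified illustrations; production implementations (e.g. those surveyed by Morrison, 2006) typically add a random component before sorting so that selection is not purely deterministic.

```python
def align_continuous(values, target_mean):
    """Fixed-ratio alignment: scale all observations so the mean hits the target."""
    ratio = target_mean / (sum(values) / len(values))
    return [v * ratio for v in values]

def align_binary(probabilities, target_count):
    """Sorting-based alignment: put the target_count units with the highest
    predicted probabilities into the state, and everyone else out of it."""
    ranked = sorted(range(len(probabilities)),
                    key=lambda i: probabilities[i], reverse=True)
    selected = set(ranked[:target_count])
    return [i in selected for i in range(len(probabilities))]

earnings = [20_000, 30_000, 50_000]
aligned = align_continuous(earnings, target_mean=40_000)  # mean becomes 40,000

p_work = [0.9, 0.2, 0.6, 0.4]                   # predicted probabilities of working
working = align_binary(p_work, target_count=2)  # exactly two units end up working
```

Note how the binary variant guarantees the aggregate target by construction, which is precisely the property that makes alignment attractive and, at the same time, capable of masking underlying behavioural change.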
10.3.10. Model complexity

To clarify the discussion, this section distinguishes between microsimulation models and partial models, where the latter are included in the former. Dynamic microsimulation is mostly built on assumed parameters, estimated Markov chains and conditional probability distributions estimated by various econometric methods. It usually involves many equations and parameters, estimated or fixed by laws and regulations. Once the estimations and parameters are in place, most microsimulation models follow a straightforward execution process without invoking computationally complicated algorithms. The complexity of a microsimulation model therefore often comes from the construction of the partial models and is mostly guided by the policy questions that the model is required to answer. Microsimulation models focusing on pension issues usually simulate detailed labour market behaviour for decades ahead, as a change in the pension system can only mature when the youngest cohort in the labour market retires. In contrast, short-term tax policy models usually simulate 3-5 years ahead and are typically limited to tax-related variables only. If a model is to be used for a broad range of research questions, it usually needs to simulate more variables for a longer period of time, which involves greater complexity. An ideal microsimulation model should have the capacity to simulate all possibly related variables in detail. However, the costs of building large models, both in terms of model validity and management, need to be taken into consideration. Dynamic microsimulation models have a reputation for being complex and for the potential to run ‘out-of-spin’ in some respects. This might be a particular concern when simulating policy reforms. Partial models, especially reduced-form models, are often criticised for simulation purposes, as the stability of the model structure is questionable when policies change.
This argument is also known as
the Lucas critique. As a result, structural models are usually seen as a better choice. Since some parts of policy (e.g. taxes) can be explicitly included in a structural model, the estimated utility parameters are perceived to be more stable (Klevmarken & Lindgren, 2008), although utility parameters can sometimes be very sensitive to even a small change in the model specification. Overfitting may also become an issue as the list of explanatory variables grows. Given the number of partial models that one microsimulation model can invoke, and the budget and time constraints involved, many microsimulation models are primarily constructed from reduced-form partial models, with structural partial models in key components. Complex structural models are much more difficult to validate and may often contain implementation bugs due to the increased complexity. In addition, the complexity of the processes often means a long development time. Large general-purpose microsimulation models are usually built by large teams with access to large and complex datasets. These models usually simulate a wide variety of economic and demographic processes and can therefore be used for many different applications. These forecasting models usually incorporate alignment systems in order to maintain consistency with external forecasts or macro-models. Models of this type include DYNASIM in the United States, the Canada Pension Plan model DYNACAN, SESIM in Sweden, MOSART in Norway, APPSIM in Australia, MIDAS in Belgium, and others.

10.3.11. Model validation

Given the increasing complexity of models, it is important to validate a model in order to maintain its credibility. Unfortunately, only limited effort has been devoted to validation and there is no international consensus on validation procedures.
Klevmarken and Lindgren (2008) suggest that validation should be put in the same context as estimation and testing and should involve the identification of all sources of errors and their properties. Given the size and structure of a large microsimulation model, bootstrapping and Monte Carlo exercises are likely to be more practical than analytical deduction. In addition, sensitivity analysis should also be part of microsimulation validation (Klevmarken & Lindgren, 2008). Morrison (2008) documents the DYNACAN team’s validation method from a practical point of view, listing several important components that a validation process should cover: the context of validation, data/coefficient/parameter validation, programmer/algorithmic validation, module-specific validation, multi-module validation and policy impact validation. Ex post analyses of previous periods can also be used to assess the reliability of a model, which is why a number of
the major microsimulation projects have taken historical datasets as their starting population base for simulations. For example, the CORSIM and POLISIM models are based on sub-samples of the 1971 and 1960 US Censuses, respectively, and the DYNACAN model takes a sample of the 1970 Canadian Census as its base. By running the model forward to the present day, the model forecasts can be compared with what has actually happened (see for example Caldwell & Morrison, 2000; Morrison, 2000). However, these models invariably incorporate historical information such as macro-aggregates, which may produce better forecasts than would be obtained without that information. One method to overcome this is to directly compare generated forecasts with what happened in reality, for example comparing forecast labour participation rates with actual rates. Another method, described by Caldwell (1996), is an indirect approach known as the multiple-module approach. An example cited by Caldwell is validating the number of married persons with health insurance, when the directly simulated processes are marriage and medical insurance membership. Errors may result from either or both direct processes, or from mis-specified interactions. Some types of dynamic models, however, may have no comparable source of validation. For example, theoretical models that solely examine a single cohort living in a steady state have no external data source against which they can be validated, as they do not attempt to mimic real life. Such models, due to the lack of validation, are often restricted in their interpretation of policy impacts. Additionally, countries that have only recently developed their micro-data resources may not have alternative sources of data with which to validate, although this problem will become progressively less severe with time.
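The direct comparison just described amounts to computing errors between a simulated aggregate series and its observed counterpart over a historical window. The sketch below does this for a participation-rate series; the figures are invented purely for illustration.

```python
def validation_errors(simulated, observed):
    """Per-period errors and the mean absolute error, as a quick diagnostic."""
    errors = [s - o for s, o in zip(simulated, observed)]
    mae = sum(abs(e) for e in errors) / len(errors)
    return errors, mae

simulated = [0.62, 0.63, 0.65, 0.66]   # model output over four historical years
observed  = [0.61, 0.64, 0.64, 0.67]   # published rates for the same years
errors, mae = validation_errors(simulated, observed)
```

In practice one would inspect the sign pattern of the errors as well as their size, since a systematic drift points to a mis-specified process rather than sampling noise.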
Recent developments suggest an alternative validation method using a simplified model. Since no future data are available to validate a forecasting dynamic microsimulation model, Morrison (2008) suggests comparing a model’s results to those of a trustworthy model. Dekkers (2013) argues that the general trend of certain indicators estimated by a simple model can serve as a benchmark for a more complicated microsimulation model, as there is no black box in a simple model. The Belgian MIDAS model used this approach, validating against a ‘simple stylised’ model, essentially a representative-household model with only demographic and pension indexation components. This approach, however, raises another question about the criteria for a ‘trustworthy’ model. It is difficult to say which model is correct when the output of a stylised benchmark model differs significantly from the result of a comprehensive population model. Without further analysis, differences between model outputs may only be used as indicative validation tests rather than anything conclusive.
10.4. Summary and future directions

10.4.1. Progress of dynamic microsimulation modelling since the 1970s
In reviewing progress made by the field, it is useful to consider an early model development: the DYNASIM model, developed by Orcutt, Caldwell, and Wertheimer (1976) at the Urban Institute from the early 1960s to the mid-1970s. In terms of our classification above, DYNASIM was a longitudinal closed model running a 10,000-person dataset. It contained:

• A demographic module, modelling leaving home, births, deaths, partnership formation and dissolution, disability, education and broad location.
• A labour market module containing participation, hours, unemployment and labour income.
• A Tax-Transfer and Wealth module containing capital income and the main tax and transfer instruments.
• A marriage matching module.
• A simple macroeconomic model with feedback loops linked to the microsimulation model via alignment.

Thus, in terms of generic structure, this 1970s model incorporated much of what has been included in later dynamic microsimulation models, although each component has been substantially improved in newer models. Despite the progress made in the 1970s and 1980s, early microsimulation modellers faced a number of challenges, summarised by Hoschka (1986):

(a) Many of the behavioural hypotheses in microsimulation models have an insufficient theoretical and/or empirical basis.
(b) Dynamic changes in the behaviour of the population are mostly not considered by micro modellers.
(c) The problems of including more than the primary effects of a policy programme are still unresolved.
(d) The quality and accessibility of the data required by micro models are often severely restricted.
(e) The development of micro-models frequently takes too much time and its costs are accordingly high.
(f) Running micro models usually requires a lot of computer time.
(g) The prediction quality of micro-models has not yet been systematically evaluated and validated.
(h) Large microsimulation models are so complex that they are difficult to comprehend and control.
These challenges can be broadly categorised into five areas: behavioural response modelling (a-c), micro-data quality (d), development cost (e), limited computational capacity (f) and model validation (g-h). Compared with recent discussions of issues in microsimulation
(Harding, 2007a; Wolfson, 2009), it is clear that most of the issues mentioned remain relevant and high on the list several decades later. Comparing the DYNASIM model structure with today’s dynamic microsimulation models, and the challenges faced by modellers in the 1980s and today, what we see are gradual advancements in methodology rather than breakthroughs in model design and application. Improved computer hardware has allowed both greater speed and larger databases. Good-quality data and software packages with built-in micro-econometric techniques have improved the sophistication of individual models (see O’Donoghue, 2001). There has also been some improvement in the incorporation of behavioural responses, which allows assessment of the socio-economic impact of policy changes on individuals. In addition, today’s microsimulation modellers have proposed several methods to systematically validate simulation output (Morrison, 2008). Another major advancement in the past decades is the emergence of generic models or development packages, including Modgen (Wolfson & Rowe, 1998), UMDBS (Sauerbier, 2002), GENESIS (Edwards, 2004, 2010), LIAM (O’Donoghue et al., 2009) and LIAM2 (Bryon, Dekkers, & de Menten, 2011). These can greatly reduce the workload of new modellers by providing commonly used and thoroughly tested microsimulation routines.

10.4.2. Obstacles in the advancement of microsimulation, and some possible solutions

While the field of microsimulation has progressed greatly in many respects since the original paper of Orcutt (1957), the rate of progress in dynamic microsimulation is nonetheless arguably slow, given that we still share the same model design and face similar problems as the early DYNASIM modellers did nearly 40 years ago. A number of reasons can be ascribed to this lack of progress, including knowledge transfer, model ownership, unrealistic expectations and funding.
This section will discuss these issues and some interesting developments that potentially provide a silver lining. One criticism of the knowledge transfer mechanisms within the field is that most transfer has been via tacit rather than codified knowledge. Much important knowledge and many methodologies have mainly been codified as ‘documentation’, with the main aim of helping other team members use the models. In addition, microsimulation models are mostly developed in governmental or policy institutions, where developing a literature on which a wider group of scientists could build has been a secondary objective at best. Furthermore, such documentation is mainly disseminated through a limited number of books and conference presentations, which may not be easily available to researchers outside the network. Additionally, the framework of traditional academic publication may not be suitable for complex dynamic
microsimulation models. For one, it relies on papers of 5-10 thousand words, which may not be enough to describe a complex model in depth. Additionally, many journals do not allow (long) pieces of code, and it remains difficult to publish the results of work on and with microsimulation models in applied scientific journals. Thus, a significant proportion of the methods used in the field are not formally codified, meaning that to a large extent new models have had to reinvent the wheel and re-develop existing methods over and over again. There are, however, various reasons to hope that the transfer of knowledge will improve in the near future. First of all, researchers these days are much more aware of the actual and potential benefits of collaboration and the exchange of information between teams. Thus, there are many ad hoc meetings where teams compare their models, results and general experience. Furthermore, through the use and development of generic platforms such as ModGen and LIAM2, information that was previously tacit is now becoming codified and broadly available. This has also allowed for the dissemination of pieces of code or even entire models. For example, through a series of European projects, the Belgian model MIDAS (whose first version was based on the Irish model LIAM) has been ‘exported’ to teams in Hungary and Luxembourg. This reduced the development costs of the latter groups, while increasing the ‘return on investment’ of the Belgian development team and allowing them to invest further in the development of LIAM2. Finally, open access to the source code of a model allows findings to be replicated and could help to locate errors that the development team overlooked. Hence, moving from ‘black box’ to ‘glass box’ modelling could ease many potential users’ concerns and raise the method’s scientific status (Wolfson, 2009).
The availability of more open model frameworks such as GENESIS, LIAM and LIAM2 can facilitate the development of new models. Another reason for the lack of progress was the perceived ‘failure’ of the earlier models. However, this failure can to some extent be attributed to failing to meet unachievably high expectations. Orcutt et al. (1961) focused on the capacity to undertake prediction at a micro-level to facilitate planning. Human behaviour is of such complexity, and is so endogenous to economic analysis, that dynamic microsimulation models cannot hope to make highly accurate predictions. Even a well-specified econometric model over- or under-predicts outcomes (Duncan & Weeks, 1998), and the explanatory value of micro-level logistic models in particular remains modest even in the best of cases. For example, a microsimulation model studying poverty rates may exaggerate the impact of changes or developments when the policy environment is complex and many individuals in the dataset are close to the poverty threshold. As George Box once said, ‘all models are wrong, but some are useful’ (Box & Draper, 1987). This might be true for models that dive beneath the aggregate and aim to shed light on
the chaos of individual changes over time. We can hope that, by using good theory, data and statistical and computational methods, dynamic microsimulation models provide a consistent and reasonable framework with which to undertake policy analyses incorporating intertemporal events and the distribution of the population. Funding may also be a major issue facing many microsimulation modellers. Building and using large microsimulation models requires a team of researchers representing different disciplines and experiences. In addition, the scale of the model suggests the need for long-term funding. These two requirements are almost routinely underestimated by semi-public research institutions, and they do not always fit well into a university department with its normal rotation of people and three-year funding of research projects. As a consequence, most models are not actively maintained after the initial funding ends, which makes it difficult for people outside the original team to use the model for other research purposes.

10.4.3. Future directions

The applications of microsimulation are widespread, as suggested by the list of current and previous models outlined in Table 10.1. With the availability of better modelling tools and a greater number of researchers from different fields engaging in microsimulation, the method is now applied in many fields beyond traditional welfare policy research: for instance, to estimate the impact of climate change (Buddelmeyer et al., 2009; Hynes, Morrissey, O’Donoghue, & Clarke, 2009), to model disease spread (Will et al., 2001), to simulate time use (Anderson, Agostini, Laidoudi, Weston, & Zon, 2009) and even to assist personal financial planning (Avery & Morrison, 2011). The use of dynamic microsimulation models can be expanded even further as more micro-level data become available.
With better availability of longitudinal and administrative data, it is possible to better understand the consequences of ageing. In addition, the rise of network data could help to model disease spread and knowledge diffusion more realistically. While large dynamic models have the advantage of providing more comprehensive simulation outputs, their complexity also increases the difficulty of validation, model usage, management and funding. It might also be beneficial to develop complementary, specialised simple dynamic models. Smaller models could be better validated and make it easier to publish model details within the length limit of a journal article. These easy-to-validate smaller models could then be absorbed into a more complicated microsimulation model when the need for more complex interactions arises. A problem that remains is how to deal with sometimes unachievably high expectations, especially in semi-public research institutions where
Handbook of Microsimulation Modelling
Dynamic Models
the outcomes of microsimulation models are used to assess the consequences of actual or potential policy measures. These expectations are often fuelled by the requirement of stable funding for enough researchers. A possible way out of this deadlock might be for researchers to focus more on scenario analyses instead of implicitly promising an accurate long-run simulation. Assumptions are almost by definition more explicit in the former, and there is less pressure to be a fortune teller. Changes in the economic and political climate also mean that simulation results may become obsolete within a relatively short time. Focusing on scenario analyses could be more cost-effective and more relevant to debates on contemporary issues. Furthermore, academics might also use dynamic microsimulation to improve the understanding and modelling of inter-temporal behaviours. Traditionally, labour economists have not had access to longitudinal data covering the whole life-span of individuals. With the help of microsimulation, it is possible to generate budget constraints for use as input into life-cycle behavioural choice modelling, for example retirement choice as in Li and O’Donoghue (2011). The method would help us better understand many inter-temporal processes, for example fertility decisions, education choices, etc. Rising interest from academia would benefit the development of the field and ensure the sustainability of its knowledge base. In terms of methodological development, a primary need is to codify the various methodologies currently used in dynamic microsimulation models and to allow more countries than before to access these techniques and models. There are many methods being used in microsimulation, most without any published description or evaluation. As noted above, this can impede the progress of the field.
Formally documenting the methods used and publishing them in peer-reviewed journals could improve knowledge diffusion and increase the public-good returns generated by academics, providing an incentive to innovate. Additionally, publication could preserve knowledge that would otherwise be lost when project funding ends. The International Journal of Microsimulation provides an important outlet for citable peer-reviewed publications. Another potentially important way in which information can be codified and distributed is the dissemination of full models or modules and the further use and development of modelling frameworks like ModGen and LIAM2, both of which are freely available. This allows modellers to make use of previous work and methodologies developed by others at limited cost. There remains room for improvement in understanding the simulation properties of many algorithms in use, including alignment, error term manipulation, complex reweighting and random number handling. In most models, each equation is estimated separately without considering the potential correlations in error terms. This may lead to undesired bias due to inconsistent assumptions when simulating some reforms. In addition,
Jinjing Li, Cathal O’Donoghue and Gijs Dekkers
to improve model credibility, it is worthwhile paying attention to the testing and validation of a simulation model. Additionally, papers using microsimulation models typically report the results of only one run, although some papers, for example Pudney and Sutherland (1994, 1996), have found that microsimulation results can have wide confidence intervals. Dekkers (2000) argues that a Monte Carlo approach can be used to assess the statistical significance of simulation results. Given the growing computing capacity available to researchers these days, modellers could potentially provide more information about the simulation, for example the confidence intervals of the results, using Monte Carlo techniques. Despite the discussions and the general consensus on the need to improve the validation process in microsimulation, there are still few guidelines on how dynamic microsimulation models should be validated (Harding, 2007b). While DYNASIM documented many of the issues involved in model validation, there are still many areas that need to be explored, such as the validation of behavioural responses, longitudinal consistency and module interactions. Besides validation from the technical side, it is also worth validating the simulation against historical data, from which we can learn what has been done right, how the simulation performs under different assumptions, and so on.
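The Monte Carlo approach mentioned above amounts to re-running the same stochastic simulation under different random seeds and reporting the spread of the output statistic rather than a single number. A minimal sketch in Python (the poverty-rate model here is purely hypothetical and illustrative, not taken from any of the models discussed):

```python
import random
import statistics

def simulate_poverty_rate(seed, n=1000):
    """One run of a toy dynamic model: each individual receives a stochastic
    income shock every year; return the final relative poverty rate."""
    rng = random.Random(seed)
    incomes = [20000 + rng.gauss(0, 5000) for _ in range(n)]
    for _ in range(10):  # simulate 10 years of income growth with noise
        incomes = [max(0.0, y * (1 + rng.gauss(0.02, 0.05))) for y in incomes]
    threshold = 0.6 * statistics.median(incomes)  # relative poverty line
    return sum(1 for y in incomes if y < threshold) / n

# Repeat the whole simulation with different seeds and summarise the spread.
results = sorted(simulate_poverty_rate(seed) for seed in range(200))
mean = statistics.mean(results)
lo, hi = results[4], results[194]  # empirical 2.5th and 97.5th percentiles
print(f"poverty rate: {mean:.3f}, 95% Monte Carlo interval [{lo:.3f}, {hi:.3f}]")
```

Reporting the interval `[lo, hi]` alongside the point estimate makes the Monte Carlo variability of the model's output visible to the reader, in the spirit of Pudney and Sutherland (1994, 1996).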
10.5. Conclusion

This chapter has discussed some of the main issues involved in constructing a dynamic microsimulation model and described some of the choices made by different models in use worldwide. The main issues discussed have covered some of the general model development issues, such as base dataset selection, cohort- or population-based model structure, programming environment and model validation. The chapter has also discussed some of the technical choices made in model implementation, such as whether the model should be open or closed, whether alignment algorithms should be used, whether the model should incorporate behavioural responses to policy changes, and links to agent-based models (ABMs). Over the past decades, microsimulation models have been applied to many different policy areas, and the comparison of models given in Table 10.1 illustrates the scope of application of past and current dynamic microsimulation models. Most of the dynamic microsimulation models listed can be categorised as discrete models using a dynamic ageing approach. For newer models, alignment has become a standard component allowing interaction with macro-aggregates, and more recently, simulation packages dedicated solely to microsimulation have become a viable option in model development. These packages, together with increased co-operation through meetings and code sharing, could significantly accelerate the development process.
The increasing use of microsimulation models has raised many technical challenges in meeting the needs of more complex and accurate policy analyses. For instance, there is a growing interest in integrating CGE models into microsimulation, although actual implementations of CGE-microsimulation are at this stage restricted by data and technical limitations. Behavioural responses in microsimulation could also be further improved, and one should consider more life-cycle models when simulating inter-temporal choices. Microsimulation models could potentially implement some elements from ABMs to allow dynamic behavioural interactions and adaptations. In addition, considering different units of analysis, and budget and political constraints, may also broaden the field of microsimulation applications. Furthermore, certain practices within the simulations, such as alignment for complex models and error term simulation, should be more thoroughly studied. Besides technical challenges, there are also some general issues in the field. The lack of documentation often forces new modellers to reinvent the wheel, and closed-source models slow down knowledge transmission. Unrealistically high expectations of long-run simulation may undermine the credibility of models and make applying for funding more difficult in the long run. Future modellers may help to address these issues by publishing model details in academic journals and by being more open about their algorithm implementations. Newer modelling platforms attempt to be more open and transparent in their source code, which would potentially benefit the development of the field and knowledge transmission. In addition, the field can also explore topics other than taxation and standard government policies. Topics like the impact of climate change and the social consequences of ageing, for instance, could also benefit from microsimulation techniques.
References

Ahmed, V., & O’Donoghue, C. (2007). CGE-microsimulation modelling: A survey. MPRA Paper. Anderson, B., Agostini, P. D., Laidoudi, S., Weston, A., & Zon, P. (2009). Time and money in space: Estimating household expenditure and time use at the small area level in Great Britain. In A. Zaidi, A. Harding, & P. Williamson (Eds.), New frontiers in microsimulation modelling. Surrey, UK: Ashgate Publishing. Anderson, J. M. (2001). Models for retirement policy analysis. Report to the Society of Actuaries, USA. Ando, A., Brandolini, A., Bruno, G., Cannari, L., Cipollone, P., D’Alessio, G., … Nicoletti Altimari, S. (2000). The Bank of Italy’s DYNAMITE: Recent developments. Bank of Italy, Mimeo, Rome.
Andreassen, L., Fredriksen, D., & Ljones, O. (1996). The future burden of public pension benefits: A microsimulation study. In A. Harding (Ed.), Microsimulation and public policy. Amsterdam: Elsevier. Andreassen, L., & Solli, I. (2000, June). Incorporating overlapping-generations modelling in a dynamic microsimulation framework. Paper presented to the 6th Nordic Workshop on Microsimulation, Copenhagen. Antcliff, S. (1993). An introduction to DYNAMOD: A dynamic microsimulation model. Technical Paper No. 1. DYNAMOD, NATSEM, Canberra. Avery, M., & Morrison, R. (2011). Microsimulation as a financial literacy tool: Assessing the consequences of decisions to work, save, retire, and spend. IMA 2011 conference paper. Stockholm, Sweden. Baekgaard, H. (1995). Integrating micro and macro models: Mutual benefits. Australia: National Centre for Social and Economic Modelling (NATSEM), University of Canberra. Baekgaard, H. (1998, June). Simulating the distribution of household wealth in Australia: New estimates for 1986 and 1993. Technical Paper No. 14. National Centre for Social and Economic Modelling (NATSEM), University of Canberra, Australia. Baekgaard, H. (2002). Micro-macro linkage and the alignment of transition processes: Some issues, techniques and examples. Technical Paper No. 25. National Centre for Social and Economic Modelling. Ballas, D., Clarke, G., Dorling, D., Eyre, H., Thomas, B., & Rossiter, D. (2005). SimBritain: A spatial microsimulation approach to population dynamics. Population, Space and Place, 11, 13–34. Ballas, D., Clarke, G. P., & Wiemers, E. (2005, May). Building a dynamic spatial microsimulation model for Ireland. Population, Space and Place, 11(3), 157–172. doi:10.1002/psp.359 Bardaji, J., Sédillot, B., & Walraet, E. (2003). Un outil de prospective des retraites: le modèle de microsimulation Destinie. Économie et prévision, 160, 193–214. Baroni, E., Žamac, J., & Öberg, G. (2009). IFSIM handbook. Working Paper No. 7.
Institute for Future Studies, Stockholm. Blanchet, D., Crenner, E., & Minez, S. (2009). The Destinie 2 microsimulation model: Increased flexibility and adaptation to users’ needs. IMA conference paper. Ottawa, Canada. Boero, R., & Squazzoni, F. (2005). Does empirical embeddedness matter? Methodological issues on agent-based models for analytical social science. Journal of Artificial Societies and Social Simulation, 8, 6. Bonnet, C., & Mahieu, R. (2000). Public pensions in a dynamic microanalytic framework: The case of France. In L. Mitton, H. Sutherland, & M. Weeks (Eds.), Microsimulation in the new millennium. Cambridge: Cambridge University Press.
Box, G., & Draper, N. (1987). Empirical model-building and response surfaces. New York, USA: Wiley. Brewer, M., Browne, J., Emmerson, C., Goodman, A., Muriel, A., & Tetlow, G. (2007). Pensioner poverty over the next decade: What role for tax and benefit reform. London, UK: The Institute for Fiscal Studies. Brown, R. J., Caldwell, S. B., & Eklund, S. A. (1992). Microsimulation of dental conditions and dental service utilisation. In J. G. Anderson (Ed.), Proceedings of the simulation in health care and social services conference. San Diego, CA: Society for Computer Simulation. Bryon, G., Dekkers, G., & de Menten, G. (2011). LIAM2 user guide. MiDaL project document. Buddelmeyer, H., Hérault, N., Kalb, G., & van Zijll de Jong, M. (2009). Linking a dynamic CGE model and a microsimulation model: Climate change mitigation policies and income distribution in Australia. Melbourne Institute of Applied Economic and Social Research, University of Melbourne. Caldwell, S. B. (1996). Health, wealth, pensions and life paths: The CORSIM dynamic microsimulation model. In A. Harding (Ed.), Microsimulation and public policy. Amsterdam: Elsevier. Caldwell, S., & Morrison, R. (2000). Validation of longitudinal microsimulation models: Experience with CORSIM and DYNACAN. In L. Mitton, H. Sutherland, & M. Weeks (Eds.), Microsimulation in the new millennium. Cambridge: Cambridge University Press. Cassells, R., Harding, A., & Kelly, S. (2006). Problems and prospects for dynamic microsimulation: A review and lessons for APPSIM. NATSEM Discussion Paper No. 63. Canberra, Australia. Chénard, D. (2000a). Individual alignment and group processing: An application to migration processes in DYNACAN. In L. Mitton, H. Sutherland, & M. Weeks (Eds.), Microsimulation in the new millennium. Cambridge: Cambridge University Press. Chénard, D. (2000b, June). Earnings in DYNACAN: Distribution alignment methodology. Paper presented to the 6th Nordic Workshop on Microsimulation, Copenhagen. Cockburn, J.
(2001). Trade liberalization and poverty in Nepal: A computable general equilibrium micro-simulation approach. Working Paper No. 01-18. CRÉFA, Université Laval. Colombo, G. (2010). Linking CGE and microsimulation models: A comparison of different approaches. International Journal of Microsimulation, 3(1), 72–91. Courtioux, P., Gregoir, S., & Houeto, D. (2009, June). The simulation of the educational output over the life course: The GAMEO model. In 2nd general conference of the International Microsimulation Association, Ottawa, Canada.
Creedy, J., Duncan, A., Harris, M., & Scutella, R. (2002). Microsimulation modelling of taxation and the labour market: The Melbourne Institute tax and transfer simulator. Cheltenham: Edward Elgar Publishing. Creedy, J., & Van de Ven, J. (2001). Decomposing redistributive effects of taxes and transfers in Australia: Annual and lifetime measures. Australian Economic Papers, 40(2), 185–198. Csicsman, N., & Pappne, N. (1987, November). The software developed for the Hungarian micro-simulation system. Proceedings of the international workshop on Demographic Microsimulation, International Institute for Applied Systems Analysis, Budapest, Hungary. Curry, C. (1996). PENSIM: A dynamic simulation model of pensioners’ incomes. London: Department of Social Security. Davies, J. B. (2004). Microsimulation, CGE and macro modelling for transition and developing economies. UNU/WIDER Research Paper. De Blander, R., Schockaert, I., Decoster, A., & Deboosere, P. (2013, September). The impact of demographic change on policy indicators and reforms. FLEMOSI Discussion Paper No. 25. Leuven, Belgium. Dekkers, G. (2000). Intergenerational redistribution of income through capital funding pension schemes: Simulating the Dutch civil servants’ pension fund ABP. Maastricht: Shaker Publishing. (ISBN 90-4230133-3). Dekkers, G. (2010). On the impact of indexation and demographic ageing on inequality among pensioners. European workshop on dynamic microsimulation modelling. Brussels, Belgium. Dekkers, G. (2013). What are the driving forces behind trends in inequality among pensioners? Validating MIDAS Belgium using a stylized model. In G. Dekkers, M. Keegan, & C. O’Donoghue (Eds.), New pathways in microsimulation (pp. 287–304). Farnham: Ashgate Publishing Limited. Dekkers, G., & Belloni, M. (2009). Microsimulation, pension adequacy and the dynamic model MIDAS: An introduction. ENEPRI Research Report No. 65, AIM WP4. Centre for European Policy Studies, Brussels.
Dekkers, G., Buslei, H., Cozzolino, M., Desmet, R., Geyer, J., Hofmann, D., … Verschueren, F. (2010). The flip side of the coin: The consequences of the European budgetary projections on the adequacy of social security pensions. European Journal of Social Security, 12(2), 94–120. Dekkers, G., Desmet, R., Fasquelle, N., & Weemaes, S. (2013). The social and budgetary impacts of recent social security reform in Belgium. Paper presented at the IMPALLA-ESPANET international conference ‘Building blocks for an inclusive society: Empirical evidence from social policy research’, Luxembourg, April 18–19, 2013; and the colloquium ‘Microsimulation: Expériences et
perspectives’, l’Acoss, la Cnaf, l’Érudite and Insee, Paris, France, May 23, 2013. Dekkers, G., Nelissen, J., & Verbon, H. (1993). The macro model programme sector of the microsimulation model NEDYMAS. WORC Paper 93.08.016/2. Katholieke Universiteit Brabant, Tilburg. Dekkers, G., & Van Camp, G. (2011). The simulation properties of microsimulation models with static and dynamic ageing: A guide into choosing one type of model over the other. Brussels, Belgium: Federal Planning Bureau. Deville, J. C., Särndal, C. E., & Sautory, O. (1993). Generalized raking procedures in survey sampling. Journal of the American Statistical Association, 88, 1013–1020. Duncan, A., & Weeks, M. (2000). Simulating transitions using discrete choice models. In L. Mitton, H. Sutherland, & M. Weeks (Eds.), Microsimulation modelling for policy analysis: Challenges and innovations. Cambridge: Cambridge University Press. Edwards, S. (2004). GENESIS: SAS based computing environment for dynamic microsimulation models. Department for Work and Pensions, Mimeo, London. Edwards, S. (2010). Techniques for managing changes to existing simulation models. International Journal of Microsimulation, 3(2), 80–89. Eliasson, G. (1977, September 19–22). A micro-to-macro model of the Swedish economy: Papers on the Swedish model from the symposium on micro simulation methods. Stockholm: Almqvist & Wiksell. Eliasson, G. (1991). Modelling the experimentally organized economy: Complex dynamics in an empirical micro-macro model of endogenous economic growth. Journal of Economic Behaviour and Organization, 16(1–2), 153–182. Emmerson, C., Reed, H., & Shephard, A. (2004). An assessment of PenSim2. IFS Working Paper W04/21. London, UK. Ericson, P., & Hussenius, J. (1999, November). Distributional effects of public student grants in Sweden: A presentation and an application of the dynamic microsimulation model SESIM.
Paper presented to the APPAM seminar ‘Public Policy Analysis and Management: Global and Comparative Perspectives’, Washington, DC. Falkingham, J., & Johnson, P. (1995). A unified funded pension scheme (UFPS) for Britain. In J. Falkingham & J. Hills (Eds.), The dynamic of welfare: The welfare state and the life cycle. New York, NY: Prentice-Hall. Favreault, M., & Smith, K. (2004). A primer on the dynamic simulation of income model (DYNASIM3). The Urban Institute Discussion Paper. Washington, DC, USA. Flood, L. (2007). Can we afford the future? An evaluation of the new Swedish pension system. Modelling our future: Population ageing, social security and taxation, 33.
Fölster, S. (2001). An evaluation of social insurance savings accounts. Public Finance and Management, 1(4), 420–448. Galler, H. P. (1990). Microsimulation of tax-transfer systems. In J. K. Brunner & H. G. Petersen (Eds.), Simulation models in tax and transfer policy (pp. 279–300). Frankfurt/M.: Campus. Galler, H. P. (1997). Discrete-time and continuous-time approaches to dynamic microsimulation reconsidered. Discussion Paper. NATSEM, University of Canberra. Galler, H. P., & Wagner, G. (1986). The microsimulations model of the Sfb3 for the analysis of economic and social policies. In G. H. Orcutt, J. Merz, & H. Quinke (Eds.), Microanalytic simulation models to support social and financial policy. Amsterdam: North-Holland. Hain, W., & Helberger, C. (1986). Longitudinal simulation of lifetime income. In G. Orcutt, J. Merz, & H. Quinke (Eds.), Microanalytic simulation models to support social and financial policy. New York, NY: North-Holland. Hammel, E. A. (1990). SOCSIM II. Working Paper No. 29. Graduate Group in Demography, University of California at Berkeley, Berkeley. Harding, A. (1993). Lifetime income distribution and redistribution: Applications of a microsimulation model. Contributions to Economic Analysis (Vol. 221). Amsterdam: North-Holland. Harding, A. (2007a). APPSIM: The Australian dynamic population and policy microsimulation model. Harding, A. (2007b). Challenges and opportunities of dynamic microsimulation modelling. Hassan, S., Pavon, J., & Gilbert, N. (2008). Injecting data into simulation: Can agent-based modelling learn from microsimulation? Heike, H. D., Hellwig, O., & Kaufmann, A. (1987, November). Experiences with the Darmstadt microsimulation model (DPMS). Proceedings of the international workshop on Demographic Microsimulation, International Institute for Applied Systems Analysis, Budapest, Hungary. Holm, E., Holme, K., Mäkilä, K., Mattsson-Kauppi, M., & Mörtvik, G. (2006).
The SVERIGE spatial microsimulation model: Content, validation, and example applications. Sweden: Spatial Modelling Centre. Hoschka, P. (1986). Requisite research on methods and tools for microanalytic simulation models. In G. H. Orcutt, J. Merz, & H. Quinke (Eds.), Microanalytic simulation models to support social and financial policy. Amsterdam: North-Holland. Hynes, S., Morrissey, K., O’Donoghue, C., & Clarke, G. (2009). A spatial micro-simulation analysis of methane emissions from Irish agriculture. Ecological Complexity, 6, 135–146.
Janssen, M. A., & Ostrom, E. (2006). Empirically based, agent-based models. Ecology and Society, 11, 37. Joyeux, C., Plasman, R., & Scholtus, B. (1996, June). A model of the evolution of pensions expenditures in Belgium. Paper presented to a meeting of the European HCM Network on Socio-Economic Modelling, Cambridge. Keister, L. (2000). Wealth in America: Trends in wealth inequality. Cambridge: Cambridge University Press. Kelly, S., & Percival, R. (2009). Longitudinal benchmarking and alignment of a dynamic microsimulation model. IMA conference paper. Ottawa, Canada. Kennell, D. L., & Sheils, J. F. (1990). PRISM: Dynamic simulation of pension and retirement income. In G. H. Lewis & R. C. Michel (Eds.), Microsimulation techniques for tax and transfer analysis. Washington, DC: The Urban Institute Press. Klevmarken, N. A. (1997). Behavioural modelling in micro simulation models: A survey. Working Paper No. 31. Department of Economics, Uppsala University, Sweden. Klevmarken, N. A. (2002). Statistical inference in micro-simulation models: Incorporating external information. Mathematics and Computers in Simulation, 59(1), 255–256. Klevmarken, N. A., Bolin, K., Eklöf, M., Flood, L., Fransson, U., Hallberg, D., … Lagergren, M. (2007). Simulating the future of the Swedish baby-boom generations. Working Paper No. 26. Department of Economics, Uppsala University. Klevmarken, N. A., & Lindgren, B. (Eds.). (2008). Simulating an ageing population: A microsimulation approach applied to Sweden (Vol. 285). Contributions to Economic Analysis. Bingley, UK: Emerald Group Publishing Limited. Klevmarken, N. A., & Olovsson, P. (1996). Direct and behavioural effects of income tax changes: Simulations with the Swedish model MICROHUS. In A. Harding (Ed.), Microsimulation and public policy. Amsterdam: Elsevier Science Publishers. Lawson, T. (2011). An agent-based model of household spending using a random assignment scheme. IMA conference, Stockholm. Légaré, J., & Décarie, Y. (2011).
Using Statistics Canada’s LifePaths microsimulation model to project the disability status of Canadian elderly. International Journal of Microsimulation, 4(3), 48–56. Li, J., & O’Donoghue, C. (2011). Household retirement choice simulation with heterogeneous pension plans. IZA Discussion Paper No. 5866. Bonn. Li, J., & O’Donoghue, C. (2012). Simulating histories within dynamic microsimulation models. International Journal of Microsimulation, 5(1), 52–76.
Li, J., & O’Donoghue, C. (2013). A survey of dynamic microsimulation models: Uses, model structure and methodology. International Journal of Microsimulation, 6(2), 3–55. Li, J., & O’Donoghue, C. (2014). Evaluating binary alignment methods in dynamic microsimulation models. Journal of Artificial Societies and Social Simulation, 17(1), 15. Lutz, W. (1997). FAMSIM Austria: Feasibility study for a dynamic microsimulation model for projections and the evaluation of family policies based on the European family and fertility survey. Vienna: Austrian Institute for Family Studies. Macal, C. M., & North, M. J. (2010). Tutorial on agent-based modelling and simulation. Journal of Simulation, 4, 151–162. Mantovani, D., Papadopoulos, F., Sutherland, H., & Tsakloglou, P. (2006). Pension incomes in the European Union: Policy reform strategies in comparative perspective. In O. Bargain (Ed.), Microsimulation in action (Vol. 25, pp. 27–71). Research in Labor Economics. Bingley, UK: Emerald Group Publishing Limited. McKay, S. (2003). Dynamic microsimulation at the US Social Security Administration. International conference on population ageing and health: Modelling our future, Canberra (pp. 7–12). Morand, E., Toulemon, L., Pennec, S., Baggio, R., & Billari, F. (2010). Demographic modelling: The state of the art. SustainCity Working Paper No. 2.1a, Ined, Paris. Morrison, R. (2000, June). Assessing the quality of DYNACAN’s synthetically-generated earnings histories. Paper presented to the 6th Nordic Workshop on Microsimulation, Copenhagen. Morrison, R. (2006). Make it so: Event alignment in dynamic microsimulation. DYNACAN paper. Morrison, R. (2008). Validation of longitudinal microsimulation models: DYNACAN practices and plans. DYNACAN Team Working Paper No. 8. Ottawa, Canada. Nelissen, J. H. M. (1996). Social security and lifetime income redistribution: A microsimulation approach. In A. Harding (Ed.), Microsimulation and public policy (pp. 267–292). Amsterdam: North-Holland. O’Donoghue, C.
(2001). Dynamic microsimulation: A survey. Brazilian Electronic Journal of Economics, 4(2), 77pp. O’Donoghue, C., Lennon, J., & Hynes, S. (2009). The life-cycle income analysis model (LIAM): A study of a flexible dynamic microsimulation modelling computing framework. International Journal of Microsimulation, 2(1), 16–31. O’Donoghue, C., Loughrey, J., & Morrissey, K. (2011). Modelling the impact of the economic crisis on inequality in Ireland. IMA 2011 conference paper, Stockholm, Sweden.
Oharra, J., Sabelhaus, J., & Simpson, M. (2004). Overview of the Congressional Budget Office long-term (CBOLT) policy simulation model. Washington, DC: Congressional Budget Office. Orcutt, G. (1957). A new type of socio-economic system. The Review of Economics and Statistics, 39, 116–123. (Reprinted in the International Journal of Microsimulation, 2007, 1(1), 3–9). Orcutt, G. H., Caldwell, S., & Wertheimer, R. F. (1976). Policy exploration through microanalytic simulation. Washington, USA: The Urban Institute. Orcutt, G. H., Greenberger, M., Korbel, J., & Rivlin, A. M. (1961). Microanalysis of socioeconomic systems: A simulation study. New York, NY: Harper and Brothers. Parisi, V. (2003). A cross country simulation exercise using the DIECOFIS corporate tax model. European Commission IST Programme DIECOFIS, Work Package 7. Parker, D. C., Manson, S. M., Janssen, M. A., Hoffmann, M. J., & Deadman, P. (2003). Multi-agent systems for the simulation of land use and land cover change: A review. Annals of the Association of American Geographers, 93, 314–337. Peichl, A., Schneider, H., & Siegloch, S. (2010). Documentation IZAΨMOD: The IZA policy simulation model. IZA Working Paper 4865. Bonn, Germany. Phillips, B., & Kelly, S. (2006). HouseMod: A regional microsimulation projection model of housing in Australia. Australian Housing Research conference paper 2006. Adelaide, Australia. Pudney, S., & Sutherland, H. (1994). The statistical reliability of microsimulation estimates: Results for a UK tax-benefit model. Journal of Public Economics, 53, 327–365. Pudney, S., & Sutherland, H. (1996). Statistical reliability in microsimulation models with econometrically-estimated behavioural responses. In A. Harding (Ed.), Microsimulation and public policy (pp. 473–504). Amsterdam: Elsevier. Richiardi, M. (2014). The missing link: AB models and dynamic microsimulation. In S. Leitner & F. Wall (Eds.), Artificial economics and self organization.
Switzerland: Springer International Publishing. Robilliard, A.-S., & Robinson, S. (2003). Reconciling household surveys and national accounts data using a cross entropy estimation method. Review of Income and Wealth, 49, 395–406. Rowe, G., & Wolfson, M. (2000). Public pensions: Canadian analyses based on the LifePaths generational accounting framework. Paper prepared for the Nordic Microsimulation Workshop, Copenhagen. Sauerbier, T. (2002). UMDBS: A new tool for dynamic microsimulation. Journal of Artificial Societies and Social Simulation, 5(2), 5.
Savard, L. (2003). Poverty and income distribution in a CGE-household micro-simulation model: Top-down/bottom-up approach. CIRPÉE Working Paper 03-43. Montréal, Canada. Scott, A. (2001). A computing strategy for SAGE: 1. Model options and constraints. Technical Note 2. ESRC-Sage Research Group, London. Shiraishi, K. (2008). The use of microsimulation models for pension analysis in Japan. CIS Discussion Paper No. 409. Center for Intergenerational Studies, Institute of Economics, Hitotsubashi University. SOA. (1997). Paper 5 on CORSIM. Society of Actuaries. Retrieved from http://www.soa.org/files/pdf/Paper_5.pdf Spielauer, M. (2009). Ethno-cultural diversity and educational attainment: The modelling of education in the Canadian DemoSim population projection model. Paper presented at the 2nd General Conference of the International Microsimulation Association, Ottawa, Canada. Stroombergen, A., Rose, D., & Miller, J. (1995). Wealth accumulation and distribution: Analysis with a dynamic microsimulation model. Wellington: Business and Economic Research Ltd. Tanton, R., Vidyattama, Y., Nepal, B., & McNamara, J. (2011). Small area estimation using a reweighting algorithm. Journal of the Royal Statistical Society: Series A (Statistics in Society), 174(4), 931–951. Tedeschi, S. (2011). T-DYMM: Background and challenges. Intermediate conference. Rome, Italy. Tedeschi, S., Pisano, H., Mazzaferro, C., & Morciano, M. (2013). Modelling private wealth accumulation and spend-down in the Italian microsimulation model CAPP_DYN: A life-cycle approach. International Journal of Microsimulation, 6(2), 76–122. Van Sonsbeek, J. (2009). Micro simulations on the effects of ageing-related policy measures: The social affairs department of the Netherlands ageing and pensions model. Vencatasawmy, C. P., Holm, E., Rephann, T., Esko, J., Swan, N., Öhman, M., … Siikavaara, J. (1999). Building a spatial microsimulation model. SMC internal Discussion Paper. Spatial Modelling Centre, Kiruna.
Wertheimer, R., Zedlewski, S. R., Anderson, J., & Moore, K. (1986). DYNASIM in comparison with other microsimulation models. In G. Orcutt, J. Merz, & H. Quinke (Eds.), Microanalytic simulation models to support social and financial policy. Amsterdam: North-Holland. Will, B., Berthelot, J., Nobrega, K., Flanagan, W., & Evans, W. (2001). Canada’s population health model (POHEM): A tool for performing economic evaluations of cancer control interventions. European Journal of Cancer, 37, 1797–1804. Willekens, F. (2006). Description of the micro-simulation model (continuous-time micro-simulation). Deliverable D8 (first part),
MicMac: Bridging the micro-macro gap in population forecasting. NIDI, The Netherlands.
Williamson, P. (2007). The role of the International Journal of Microsimulation. International Journal of Microsimulation, 1(1), 1–2.
Winder, N. (2000). Modelling within a thermodynamic framework: A footnote to Sanders (1999). Cybergeo: European Journal of Geography, article 138.
Wolfson, M. (2009). Preface: Orcutt's vision 50 years on. In A. Zaidi, A. Harding, & P. Williamson (Eds.), New frontiers in microsimulation modelling. Surrey, UK: Ashgate Publishing.
Wolfson, M., & Rowe, G. (1998). LifePaths: Toward an integrated microanalytic framework for socio-economic statistics. Paper presented to the 26th General Conference of the International Association for Research in Income and Wealth, Cambridge, UK.
Wu, B., Birkin, M., & Rees, P. (2008). A spatial microsimulation model with student agents. Computers, Environment and Urban Systems, 32, 440–453.
Zaidi, A., & Rake, K. (2001). Dynamic microsimulation models: A review and some lessons for SAGE. Discussion Paper 2. The London School of Economics, Simulating Social Policy in an Ageing Society (SAGE).
Zaidi, A., & Scott, A. (2001). Base dataset for the SAGE model. SAGE Technical Note.
Zinn, S. (2012). A mate matching algorithm for continuous time microsimulation models. International Journal of Microsimulation, 5(1), 31–51.
Zucchelli, E., Jones, A. M., & Rice, N. (2012). The evaluation of health policies through dynamic microsimulation methods. International Journal of Microsimulation, 5, 2–20.
CHAPTER 11
Demographic Models

Carl Mason
Downloaded by RMIT University At 12:01 31 January 2016 (PT)
11.1. Introduction

Demographic microsimulation is at once compellingly simple and devilishly complex. It is simple because it seeks to simulate familiar human demographic processes at the individual level: people are born, they may marry, reproduce, migrate, and then they die. Its complexity, on the other hand, arises from the nature of those familiar behaviors. In order to simulate a demographic event, we must specify not only the age-, sex-, marital status-, and parity-specific rates at which the event occurs to members of each identified population subgroup, but also, for some events, a set of decision rules which may involve more than one individual and/or a stochastic mechanism for adjusting individual-level heterogeneity, possibly in a heritable manner.

In this chapter, I use the term demographic microsimulation to refer to dynamic stochastic microsimulation programs wherein the unit of observation is the individual, and the main purpose is to create simulated populations with kinship networks in order to answer questions of interest to demographers and other social scientists. Although many economic or population-related questions can be addressed using static models (in which time plays no role) or deterministic models (in which only one outcome of the simulation is possible), simulating kinship requires that the uncertainty of events and the passage of time be taken seriously.1 The methodological consequences of this are significant and are discussed in Section 11.3.
1
A notable exception is Goldstein (1999) which uses a hot-decking algorithm to simulate kinship networks at two distinct points in time.
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293010
© 2014 BY EMERALD GROUP PUBLISHING LIMITED ALL RIGHTS RESERVED
The demographic microsimulation models with which this chapter is concerned do contribute to policy discussion, but generally the contribution is indirect. It is surely useful, in designing a pension system, to understand the possible sources of support for the elderly many generations in the future, and demographic microsimulation has much to contribute here. But precise questions about the optimal level of taxation and expenditure on public goods are generally answered more efficiently with models in which people are accounting units with behaviors that follow from macroeconomic conditions. Demographic events and resulting kinship networks are at most a small part of these projects. For an excellent review of 27 microsimulation projects addressing health care, education, and pension systems, see Spielauer (2007). For a broad typology of population-related microsimulation and agent-based models with a focus on models oriented toward land use issues, see Morand, Toulemon, Pennec, Baggio, and Billari (2010). For an overview of developments during the last decade in microsimulation methods which focus on economics, see Li and O'Donoghue (2013).

11.1.1. Agent-based modeling?

Whether demographic microsimulation is a form of agent-based modeling depends on both the definition one chooses for the term "agent-based" and the research question being addressed. Agent-based modeling is not precisely defined. A survey of recent papers that use some form of agent-based modeling (Niazi & Hussain, 2011) includes a vast array of topics in disciplines that range from the physical and social sciences to engineering and medicine and which "… have, at times perhaps nothing other than the notion of an 'agent' in common with each other." Attempts to define "agent-based" even for a single discipline (Castle & Crooks, 2006) require several pages. Autonomy, rationality, heterogeneity, adaptability, and the goal orientation of the "agents" are all under contention.
A common notion among most attempts to define “agent-based” is of atomistic agents whose interactions and reactions within a shared environment determine the overall behavior of the system. Demographic microsimulation can fit that description, but there is more to agent-based modeling than simply the choice of tools. The question being asked is as important as the tools being used. In the social sciences, “agent-based computational models presuppose (realistic) rules of behaviour, and try to challenge the validity of the rules by showing whether they can or cannot explain macroscopic regularities” (Billari, Ongaro, & Prskawetz, 2003). In other words, agent-based social science is an alternative to “scientific reasoning by statistical models.” An interesting example (because it predates the use of the term) is Thomas Schelling’s segregation model (Schelling, 1971).
Demographic microsimulation also predates the term "agent-based" but nonetheless can be, and has been, harnessed for this type of scientific reasoning. If the goal of the research is to use a macroscopic regularity such as the observed Slavonian Census of 1698 (Hammel & Wachter, 1996) or the observed proportions of extended families in Victorian England (Ruggles, 1987) to determine whether ethnographically reasonable rules might have applied, that seems like agent-based reasoning. Using demographic microsimulation to test whether or not anthropologists' accounts of incest taboos are consistent with the survival of small preagricultural populations (Hammel, McDaniel, & Wachter, 1979) also seems to fit most definitions of "agent-based." Similarly, the use of Bayesian melding techniques (Zagheni, 2010, 2011) to find plausible demographic rates/rules that are consistent with macroscopic Bayesian priors also fits the definition of agent-based modeling. On the other hand, applications of demographic microsimulation wherein the goal is to explore the implications of assumed demographic rates and rules of behavior may be too "top down" to fit comfortably within many candidate definitions of the term "agent-based" (Zagheni, 2014). Most of the papers cited in Section 11.2 are in this category. Although the "agents" in these microsimulations act independently, the questions of interest are largely determined by demographic rates which are validated externally and set by the investigator.

11.1.2. Demographic macrosimulation

Macrosimulation, when applied to demography, generally means cohort component models.
In this type of simulation, the population at a point in time is represented as a table wherein each cell contains the number of people in the population who are of a particular age and sex and who belong to any number of other mutually exclusive categories: marital or health status, parity, race, or perhaps socioeconomic categories such as income, religion, location, or the duration of any of these characteristics. To project the population from one time period to the next, the population matrix is multiplied by a transition matrix, the cells of which contain the probability that a person in cell (i, j) will be represented in cell (i′, j′) of the next period's population table.

Although the transition matrix contains probabilities, macrosimulation models of this sort are in a sense deterministic. For a given population state in time t, only one state in time t + 1 is possible. Another way of saying this is that macrosimulations only project the central tendencies of the population. While it may be theoretically possible to deduce standard errors from macrosimulation models, the need for the complete distributions of population characteristics is a compelling reason to consider microsimulation. Since the output of microsimulation is an entire simulated population, calculating higher moments of even very complex population
characteristics is straightforward. Further, microsimulation naturally produces multiple stochastically equivalent realizations of entire populations; so even the distributions of higher moments of population characteristics are readily available. A second compelling reason for choosing microsimulation over macrosimulation is that as the number of population attributes increases, macrosimulation models quickly become unwieldy. Microsimulation is often computationally less demanding. Van Imhoff and Post (1998) describe a realistic macrosimulation model for France where the number of cells in the population matrix exceeds, by an order of magnitude, the total population of France.
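The cohort component projection step described above can be sketched as a simple matrix-vector multiplication. The rates and age groups below are hypothetical, and plain Python lists stand in for the population and transition matrices; the point is the determinism: one input state yields exactly one output state.

```python
# A one-step cohort-component projection with three age groups
# (hypothetical rates): births in the first row, survival on the
# subdiagonal. Deterministic: one input state, one output state.

def project(transition, population):
    """Multiply the transition matrix by the population vector."""
    return [sum(row[j] * population[j] for j in range(len(population)))
            for row in transition]

population = [100.0, 80.0, 60.0]      # persons per age group
transition = [
    [0.0, 0.5, 0.0],                  # births to the middle age group
    [0.9, 0.0, 0.0],                  # survival from group 0 to group 1
    [0.0, 0.8, 0.0],                  # survival from group 1 to group 2
]
next_population = project(transition, population)
```

Running the step repeatedly projects the central tendency forward, but no amount of repetition yields the distribution around it; that is precisely what microsimulation supplies.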
11.2. Demographic microsimulation: applications

The main use of demographic microsimulation programs is to test and quantify the implications of changes in kin availability as they relate to four overlapping areas:

• Long-term changes in fertility and mortality, such as the aging population resulting from contemporary and future low fertility rates, the demographic transition, or deeper historical processes.
• Rules, policies, and preferences such as incest taboos or pro- or anti-natalist policies.
• HIV/AIDS.
• Developing indirect methods of estimating demographic quantities, particularly when vital registration data are absent.

11.2.1. Long-term changes in fertility and mortality

Demographers possess excellent tools for finding the mean numbers of various types of kin under stable population assumptions. Unfortunately, stable population assumptions never hold in the long run, and many questions can only be answered if the entire distribution is known. These sorts of questions motivated the two earliest demographic microsimulation projects, SOCSIM and CAMSIM, both of which enjoyed initial guidance from Peter Laslett in the 1970s (Zhao, 2006), and both of which remain in use today. One of the earliest papers to use demographic microsimulation to estimate kinship resources (and the first to use non-stable demographic rates) is Hammel, McDaniel, and Wachter (1981), in which it was projected that the kinship resources available to the elderly in the United States in 2000 would be greater than those available to the elderly of 1980. With varying assumptions and time horizons, Hammel, Mason, Wang, and Yang (1991), Yang (1992), Lin (1994, 1995), Smith (1991), and Zhao (1994, 1998) all investigate the implications of fertility and mortality decline and population aging in China.
Murphy (2010, 2011) investigated the effects of the demographic transition and the "second demographic transition" (Lesthaegh, 1995) on contemporary England, with special attention to the effect on the "aging of generational relationships," such as the age at which different cohorts experience parental death. Investigations of historical demographic processes other than the demographic transition include Hammel (2005), who investigates the effect of mortality and fertility shocks on politically useful kinship resources in hypothetical populations of anthropological interest, and Smith and Oeppen (1993), who trace the availability of a very broad spectrum of kin for male English birth cohorts of 1550, 1650, and 1750.
11.2.2. Rules, preferences, and household formation

Rules or preferences toward behaviors which bear on demographic events make seductive targets for microsimulation. Often such rules are difficult to model analytically because they depend on complex orderings of large sets of kin which may or may not exist at particular ages. In many cases, rules and preferences regarding demographically interesting behaviors have implications for kinship networks which run in both directions.

Preferences and prohibitions toward marriage between certain types of kin are very common, and there is good reason to believe that they have persisted in many cultures over millennia (Bittles, 2008). But the degree to which taboos against consanguineous marriage can have been upheld in small preagricultural populations is not measurable directly. Hammel et al. (1979) use demographic microsimulation to quantify the relationship between population size and fertility loss due to delayed marriage resulting from incest taboos and discover that "flexibility" with regard to incest was necessary for the survival of small populations.

The role of individual and societal preferences for extended versus nuclear families during the 19th century is central to Ruggles (1987). This work provides a detailed analysis of kinship networks in rich historical context to show that changing demography was a necessary but not sufficient condition for the increasing prevalence of extended families in Victorian England and the United States in the 19th century. Just as in Victorian England, both demography and preferences play important roles in the formation and dissolution of complex Balkan households (known as zadruga). In Hammel (1990), a set of ethnographically reasonable rules for the formation and dissolution of zadruga households is integrated within a demographic microsimulation. The results quantify the relationship between population growth and the prevalence and duration of these households.
Divorce and (re)marriage is another rich area of investigation where preferences and behaviors affect kinship networks in profound ways. Murphy (2011), cited above, pays a great deal of attention to changing
patterns of marriage and their consequent effect on English kinship networks (although that is not the central focus of the work). Wachter (1997) (for the United States) and Bartlema (1988) (for the Netherlands) consider the effects of serial monogamy on the frequency of step kin. Both find that, numerically, increases in step kin will compensate for decreases in biological kin as fertility falls and remarriage rises. It is notable that Wachter (1997) microsimulates immigration as well as birth, death, marriage, and divorce.
11.2.3. HIV

Diseases such as HIV can have devastating effects not only on the person who contracts the disease but also on her entire kinship network. The effect on kinship networks is particularly strong in poor countries where insurance markets and social safety nets are not well developed. The financial, emotional, and opportunity costs of caring for family members with HIV/AIDS and their dependents are clearly enormous; certain aspects of them are also difficult to measure with aggregate demographic techniques. Wachter, Knodel, and VanLandingham (2002) measure the burden of care and sorrow that falls to parents of adult AIDS sufferers in northern Thailand. The main finding is that the proportion of elderly who lose two or more children to AIDS is surprisingly high given the infection rate. Zagheni (2010, 2011) concentrates on the other end of the age spectrum, measuring the prevalence of maternal and double orphanhood resulting from HIV/AIDS in Zimbabwe between 1980 and 2050. This project is of methodological as well as substantive interest, as it uses a Bayesian melding technique based on Sevcikova, Raftery, and Waddell (2007) to estimate HIV infection rates.

11.2.4. Indirect estimation of demographic quantities

Demographic microsimulation has proved useful both for estimating population-related quantities from imperfect information and for testing practical methods against violations of their assumptions. Hammel and Wachter (1996) use demographic microsimulation to correct the Slavonian Census of 1698. Using ethnographically informed behavior rules and demographic rates, Hammel and Wachter were able to estimate the numbers of people who were present but excluded either systematically or stochastically from the Census records. In settings where vital registration is poor, information about existing kinship networks is often easier to obtain and more reliable than fertility histories.
Building on the work of Goodman, Keyfitz, and Pullum (1974) and Goldman (1978) who developed theoretical methods of measuring population growth and other quantities based on the frequencies of older
and younger siblings of living persons, Wachter (1980) and McDaniel and Hammel (1984) use microsimulation to characterize the performance of these measures and to develop new measures which perform better under some realistic circumstances. Methods for estimating mortality rates in the absence of vital registration data are of great interest to demographers and others. Here again, information on existing kinship networks is generally the best (or only) information that can be brought to bear. The literature on various ways to use information on parents, spouses, siblings, and neighbors is quite large (Gakidou & King, 2006). Masquelier (2013) uses demographic microsimulation to investigate an important issue regarding selection bias in the use of sibling survival data to estimate adult mortality rates.

11.3. Methodological issues

11.3.1. SOCSIM

This section illustrates the power and the pitfalls of demographic microsimulation by highlighting several methodological issues and describing how they are dealt with in SOCSIM, a free, extensible, open-source demographic microsimulation program (University of California Berkeley, 2014). SOCSIM was originally developed in Cambridge and Berkeley in the 1970s by Kenneth Wachter and Eugene Hammel for the purpose of studying household formation and kinship in early modern England (Wachter, Hammel, & Laslett, 1978), but has been applied to many other problems over the past 40 years.

11.3.2. Closed and open populations

SOCSIM models populations as "closed" in the sense that, aside from the initial population, individuals only enter the simulation when an already existing female gives birth. The implications of this choice are significant. All simulations must begin with an initial population. In an open simulation, such as CAMSIM (Smith, 1987), the initial population generally consists of a set of unrelated individuals who can be thought of as a birth cohort, each of whose kin networks will be built up independently.
In a closed simulation, the initial population is simply a population, generally of unrelated individuals. If appropriate, the initial population can be constructed from survey data or fragments of historical censuses to represent a real or theoretically interesting population. In other cases it can be a set of individuals with an arbitrary age and sex distribution, with or without ancestry or marriages specified. In the latter case, simulating a few hundred years under constant demographic rates will produce a stable population with a deep genealogy. This is a good starting point for many investigations.
Closed simulations retain the genealogy of the entire population. Thus, the kinship relationship between any two individuals can be reckoned provided that both are sufficiently removed from the initial population. In other words, the entire genealogy of the population can be known with certainty and analyzed as an entire population of related and unrelated individuals. With "open" simulations, where individuals can enter the population by means other than simulated birth, the population must be treated more carefully. In an open simulation, the complete ancestry (meaning the line of descent to the individual from a member of the initial population) is known only for a subset of the population; thus, many demographically interesting quantities are not easily recovered.

An advantage of open population simulations over closed ones is that in an open simulation the investigator can specify multiple age schedules and even joint age distributions for events that involve more than one person, whereas in a closed simulation this is quite difficult. In an open simulation such as CAMSIM, it is possible to achieve precisely the desired distributions of both male and female ages at marriage as well as the distribution of the age difference at marriage. This is so because, in an open simulation, brides or grooms with the necessary characteristics are simply created as needed. In a closed simulation, where individuals only enter by birth, a suitable spouse must be selected from those who have already been born. As a consequence, it is generally only possible to achieve designated age-specific marriage rates for both sexes and/or some designated pattern of homogamy if the population is large and marriage is non-universal. In general, in closed simulated populations, compromise is necessary. For a detailed comparison of SOCSIM and CAMSIM, see Zhao (2006).

11.3.3. Time

Dynamic microsimulation models can be constructed using either a discrete or continuous concept of time.
While discrete time models enjoy greater simplicity and lower computing costs, their chief drawback is that some events must happen simultaneously. If the number of simultaneous events is small, only theoretical difficulties emerge; if it is large, practical difficulties in interpreting the simulated life histories emerge as well. This problem, as well as the problems inherent in not being able to measure inter-event times with adequate precision, is neatly dispatched by using a continuous time model instead (Willekens, 2009). Purely continuous time simulations present some problems of their own, however; one example, somewhat ironically, is the impossibility of simultaneous events such as marriage, which is dealt with in Zinn (2012). Demographic microsimulation models, like humans, often use both continuous and discrete concepts of time together. For example, SOCSIM models time as continuous, using piecewise exponential distributions of
waiting times in a competing risks framework, but records events in integer time units (generally months). Events scheduled for the same time unit are executed in random order. This is consistent with the way most demographers see the world: models conceptualize time as continuous, but data seldom report demographic events on a time scale finer than days. Life tables are an example of this sort of blending of continuous and discrete time concepts. The impurity of this scheme does mean that inter-event waiting times can only be measured imprecisely, but the precision can be increased by simply reducing the duration of the integer time unit.
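This blend of continuous event times and integer recording units can be illustrated with a small sketch. The hazards, event names, and the helper `waiting_time_months` are hypothetical, not part of SOCSIM, and a single constant hazard stands in for SOCSIM's piecewise-constant schedules.

```python
import math
import random

random.seed(7)

def waiting_time_months(monthly_hazard):
    """Draw a continuous exponential waiting time for a constant monthly
    hazard, then record it on an integer month grid (hypothetical helper)."""
    u = 1.0 - random.random()                 # u in (0, 1]
    w = -math.log(u) / monthly_hazard         # continuous waiting time
    return max(1, math.ceil(w))               # recorded in whole months

# Events that land in the same month are executed in random order.
scheduled = [("birth", waiting_time_months(0.02)),
             ("marriage", waiting_time_months(0.02)),
             ("death", waiting_time_months(0.005))]
by_month = {}
for name, month in scheduled:
    by_month.setdefault(month, []).append(name)
execution_order = []
for month in sorted(by_month):
    random.shuffle(by_month[month])           # random tie-breaking
    execution_order.extend(by_month[month])
```

Shrinking the recording unit (say, from months to days) tightens the precision of measured inter-event times, exactly as the text describes.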
11.3.4. Rates over time

Many demographic models, particularly those that rely on stable population assumptions, become analytically intractable if the underlying demographic rates change frequently in arbitrary ways. Demographic microsimulation is attractive in this situation because in microsimulation, rate changes pose only a small computational challenge: when the underlying demographic rates change during a simulation, events that are scheduled for the future become invalid because they were generated using obsolete rates. For microsimulation programs such as SOCSIM, which maintain a calendar of future events,2 this means that each rate change triggers a new event lottery for each individual. Event lotteries are discussed in Section 11.3.5.1. It is also possible to use a Monte-Carlo algorithm to decide in each month whether or not each event happens to each person.

11.3.5. Events

A demographic microsimulation must simulate all of the usual demographic events. At a minimum these include birth, marriage, divorce, and death, but it is also useful to include, as demographic events, transitions between arbitrarily defined subpopulations. Each event type has its own peculiar features, but SOCSIM treats each event as one risk in a competing risks framework. Section 11.3.5.1 describes how waiting times are generated.

11.3.5.1. Event lottery

At any point in the simulation, every individual must either be having an event executed or have a next scheduled event for which she/he is waiting.
2
A calendar of future events is not a necessary part of a demographic microsimulation.
Consequently an individual must have a new event generated only at each of the following times:
• When the individual is born.
• Immediately after the individual has an event executed.
• When a related individual has an event that triggers a status change. For example, when a spouse dies, an individual becomes widowed, and widowed people may have different event rate schedules than married people.

When a new event is needed, SOCSIM conducts an event lottery. Separate event lotteries are conducted for each individual using rates that may be tailored for that individual as described in Section 11.3.8. In an event lottery, a tentative waiting time, w, is generated for each event for which the individual is at risk. The waiting time, wj, for the jth event is constructed according to Eq. (11.1), where u is a random variable with uniform distribution over (0, 1), age is the individual's current month of age, a is an index that runs from age to the maximum age allowable in the simulation, and λa is the (constant) hazard rate for the event for individuals while they are a months of age. Recall that rates for each event vary not only by age, sex, marital status, parity, and population subgroup, but also individually through the use of hazard rate multipliers as described in Section 11.3.8.
$$w_j = \tilde{A} + \frac{-\ln(u) - \sum_{a=\text{age}}^{\tilde{A}} \lambda_a}{\lambda_{\tilde{A}+1}} \qquad (11.1)$$

where

$$\tilde{A} = \max(A) \quad \text{s.t.} \quad -\ln(u) > \sum_{a=\text{age}}^{A} \lambda_a \qquad (11.2)$$
Once all wj for an individual have been generated, the smallest (soonest) is chosen and the rest are discarded. The winning event is then added to the calendar of future events. The event lottery thus functions as a competing risks model in which all risks have piecewise constant hazards. While SOCSIM generates waiting times and schedules events as described above, a mathematically equivalent but computationally more demanding strategy is to determine stochastically, at each point in time, whether or not each individual has each type of event for which s/he is at risk in the current month, and then to execute the scheduled events in random order.
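A minimal Python sketch of such an event lottery follows. It inverts a piecewise-constant cumulative hazard in the spirit of Eqs. (11.1) and (11.2); the function names, the rate schedules, and the indexing convention (the event time is reported as a month-of-age index plus the fraction of that month elapsed) are illustrative assumptions, not SOCSIM's actual implementation.

```python
import math
import random

def draw_waiting_time(lambdas, age, u):
    """Accumulate monthly hazards until they exceed -ln(u), then
    interpolate within the month in which the event falls. lambdas[a]
    is the monthly hazard at age a (in months). Returns the fractional
    age at the event, or None if no event occurs within the schedule."""
    target = -math.log(u)          # total cumulative hazard required
    cum = 0.0
    for a in range(age, len(lambdas)):
        if lambdas[a] > 0 and cum + lambdas[a] >= target:
            return a + (target - cum) / lambdas[a]  # event during month a
        cum += lambdas[a]
    return None                    # survives the whole schedule

def event_lottery(schedules, age, rng=random):
    """Competing risks: draw a tentative time for each risk the
    individual faces and keep the soonest."""
    draws = {}
    for name, lam in schedules.items():
        t = draw_waiting_time(lam, age, 1.0 - rng.random())  # u in (0, 1]
        if t is not None:
            draws[name] = t
    return min(draws.items(), key=lambda kv: kv[1]) if draws else None
```

Because only the winning draw is kept, re-running the lottery whenever rates change (Section 11.3.4) simply discards the stale calendar entries and replaces them with draws from the new schedules.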
11.3.5.2. Birth

The most significant idiosyncrasy of the birth event is the birth interval. At a minimum, the birth interval must be sufficient to account for gestation, but it can be longer. However, fertility rates as tabulated by demographers are ignorant of birth intervals. Fertility rates specify simply the number of births per woman-year of exposure. Even at low fertility rates, simply generating piecewise exponentially distributed waiting times can easily produce siblings born, say, four or five months apart. If we simply disregard births that are scheduled to occur sooner than the specified minimum interval since a woman's previous birth, then the realized fertility rates will be lower than the specified rates. Generating waiting times between births which respect a minimum birth interval and still produce sufficient births to match the specified birth rates requires adjusting fertility rates upward.

Before describing the fertility rate adjustment, it is useful to first define the age-specific fertility rate in terms of hazard rates. Because SOCSIM models event times as piecewise exponentials, the arrival times of births to each woman form a non-homogeneous Poisson process. In a non-homogeneous Poisson process, the expected number of events by time t, Λ(t), is given by:

$$\Lambda(t) = \int_0^t \lambda(t)\,dt \qquad (11.3)$$
where λ(t) gives the rate parameter at time t. Since hazard rates in SOCSIM do not change within a month, λ(t) is a step function. We can denote the expected number of births that a woman will experience while she is age a as λa. In terms of the Poisson process, this is also the arrival rate during age a. Since for each woman the expected number of births during month of age a is λa, it follows that the expected number of births to all women of age a is Wa λa, where Wa is the number of women aged a. The task at hand is to modify λa for all ages a so as to preserve the expected numbers of births to women at each age. That is, such that

$$\hat{W}_a \hat{\lambda}_a = W_a \lambda_a \qquad (11.4)$$

where $\hat{\lambda}_a$ denotes the modified value of $\lambda_a$ and $\hat{W}_a$ denotes the number of women who have not had a birth during the previous minimum birth interval, b, months. It follows that

$$\hat{\lambda}_a = \frac{W_a \lambda_a}{W_a - \sum_{j=a-b}^{a-1} W_j \lambda_j} \approx \frac{\lambda_a}{1 - \sum_{j=a-b}^{a-1} \lambda_j} \qquad (11.5)$$
The above approximation rests on the near equality of Wj during the preceding minimum birth interval. For modest minimum birth intervals and
reasonable mortality levels, this approximation works very well over the entire age range. However, if fertility rates are parity or marital status specific, then the modification tends to overcorrect during the earliest ages of risk of birth. This is because women may enter the risk pool by marriage or prior birth more slowly than by simply aging in. For example, if marital fertility rates are positive after age 18, but marriage rates are positive only beginning at age 17 years and 11 months, then the modification as specified above will adjust for b − 1 months of risk that are never experienced. One invisible consequence of this procedure is that even though users may specify fertility rates that change only annually or less frequently, the adjusted rates, because they are affected by a moving window of previous rates, generally will differ from one month of age to the next. Although the minimum birth interval adjustment generally works well, aside from the second-order issues noted above, it is easy to specify high fertility rates and long birth intervals which make simulation impossible or which make birth intervals improbably regular. The adjustment works best when fertility is moderate and minimum birth intervals are short. In addition to the adjustment for birth intervals, which takes place before the simulation begins, fertility rates are also adjusted for each woman in order to increase the heterogeneity of sibling set size, and possibly for other purposes specific to a particular investigation. The mechanism that SOCSIM implements for this is addressed later.

11.3.5.3. Marriage

Marriage or cohabitation is a particularly difficult event to simulate because it involves two participants, each of whom may have a rate schedule which must be matched in the simulation. In addition, marriage is generally thought to depend on a host of characteristics of both spouses.
At a minimum, this would include their ages, but it could also include their common ancestry, or any socioeconomic characteristic of interest to the investigator. In a closed simulation, incest restrictions must also be implemented. Moreover, it is not possible in a closed simulation (see above) to achieve, reliably and precisely, the desired distributions of female age at marriage, male age at marriage, and some other marriage-related criterion, for example, the distribution of age difference at marriage, or educational, religious, or socioeconomic homogamy. Since achieving fertility rate targets often depends on achieving female age at marriage targets, it is often a reasonable strategy to execute female marriage events according to the specified rates and then attempt to select a spouse for each bride. How the spouse is selected depends on whether the distribution of male age at marriage or some other marriage preference criterion is deemed more important. If the latter, then the program can rank order all marriage-eligible males in the entire population and select the optimal match.3 If, on the other hand, the male age distribution at
marriage is deemed more important, then the choice set over which marrying females select spouses can be narrowed to include only those males for whom a marriage event has been scheduled. Not surprisingly, unless populations are quite large, limiting partner choices to males who have had a coincident marriage event scheduled does not work perfectly. To implement a marriage scheme that seeks to match both male and female marriage age distributions, SOCSIM maintains a “marriage queue” for each sex.
11.3.6. Marriage queues
Marriage queues in SOCSIM are lists of individuals for whom a marriage event has come due, but for whom a suitable spouse has not yet been found. Separate male and female marriage queues are maintained. In a sense, the marriage event can thus be thought of as the beginning of a marriage search which ends some time later when a suitable spouse is found. Unfortunately, demographers seldom have access to rates of marriage search initiation, so redefining the simulated event does not solve the fundamental problem: if a suitable marriage partner cannot be found when an individual has a “marriage” event, then the specified marriage rates are not being achieved. A non-empty marriage queue implies that target marriage rates for the corresponding sex are not being achieved. As noted above, when both the male and female marriage age distributions are to be matched, some males and some females will invariably spend time on their respective marriage queues waiting for a suitable spouse, thereby depressing the achieved age-specific marriage rates relative to the specified target rates. In the simpler case, when something other than the distribution of male marriage ages is the second goal, males are effectively on their marriage queue whenever they are not dead or married (or ineligible for some other reason). Less obvious in this case is the necessity of maintaining a female marriage queue. This is necessary because it is always possible that no suitable spouse can be found even if the entire male population is marriage eligible.
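The queue mechanism just described can be sketched in a few lines. The following is a hypothetical illustration, not SOCSIM's actual code; the age-gap suitability rule and its 120-month threshold are invented placeholders for whatever matching criterion an investigator specifies.

```python
# Hypothetical sketch of the two-queue marriage search described above.
# The "suitable" criterion (a maximum age gap, in months) is an invented
# placeholder, not SOCSIM's actual matching rule.
male_queue = []
female_queue = []

def suitable(person, candidate, max_gap_months=120):
    return abs(person["age"] - candidate["age"]) <= max_gap_months

def marriage_event(person, own_queue, other_queue):
    """Called when a marriage event comes due. Returns the couple, or None
    if no suitable partner is waiting; in that case the person joins their
    own queue, depressing achieved marriage rates relative to the targets."""
    for i, candidate in enumerate(other_queue):
        if suitable(person, candidate):
            other_queue.pop(i)
            return person, candidate
    own_queue.append(person)
    return None
```

Note how a non-empty queue directly encodes the shortfall discussed above: every person waiting on a queue is a scheduled marriage that the specified rates called for but that has not yet been executed.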
11.3.7. Homogamy and spousal age difference
A compromise that many SOCSIM users find reasonable is to match user-specified age-specific rates of marriage for females and a specified distribution of spousal age difference.
3
Some day, when computing becomes even faster and cheaper, it will be feasible to use a mathematical programming method to optimize matches over the set of possible matches at all points in time, but at present experiments with linear programming have proved too slow.
The algorithm that SOCSIM uses to do this could be applied to any characteristic that can be described by a single number, such as income or years of education. What is needed is a target distribution of the difference between partners. By default, SOCSIM attempts to achieve a distribution of spousal age difference that is Gaussian with user-specified mean and standard deviation. The choice of the Gaussian distribution is one of convenience. However, most published distributions of spousal age difference appear to be symmetric and also surprisingly stable over time (see, e.g., Ní Bhrolcháin, 2005). SOCSIM’s algorithm for achieving the desired distribution of marriage age homogamy is to execute female marriage events according to age-specific rates and, upon execution of a female’s marriage event, to calculate for each potential spouse the impact on the absolute sum of the deviations between the desired and observed proportions of marriages at each age difference. The spouse chosen is the one who will bring about the largest reduction in ε in Eq. (11.6), where f(x) is the density function of the target distribution of spousal age difference, n_a is the number of marriages executed where the spousal age difference is a, and N is the total number of marriages executed so far in the simulation.

ε = Σ_{a = −A}^{A} | n_a/N − ∫_a^{a+1} f(x) dx |        (11.6)
The Gaussian distribution does not capture all of the known nuances of the spousal age difference distribution; in particular, it does not take account of increasing mean age differences with increasing age at marriage. In practice, however, investigators are unlikely to be able to specify these second-order effects with any certainty. For those who wish to, SOCSIM can be modified to use any target distribution.
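As an illustration of the selection rule behind Eq. (11.6), the sketch below evaluates ε for each candidate groom and picks the one yielding the smallest value. This is hypothetical code, not SOCSIM's implementation; the Gaussian mean, standard deviation, the groom-minus-bride sign convention, and the ±20-year truncation of the sum are all illustrative assumptions.

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def target_share(a, mean, sd):
    """Integral of the Gaussian target density f over [a, a + 1): the desired
    proportion of marriages whose spousal age difference falls in that band."""
    return normal_cdf((a + 1 - mean) / sd) - normal_cdf((a - mean) / sd)

def epsilon(counts, total, mean, sd, big_a=20):
    """Eq. (11.6): sum over age differences a of |n_a / N - integral of f|.
    The +/- 20-year range stands in for the A in the equation (assumed)."""
    return sum(abs(counts.get(a, 0) / total - target_share(a, mean, sd))
               for a in range(-big_a, big_a + 1))

def best_spouse(bride_age, candidates, counts, total, mean=2.0, sd=3.0):
    """Choose the candidate whose marriage most reduces epsilon."""
    def eps_if_married(cand):
        a = cand["age"] - bride_age  # sign convention assumed: groom minus bride
        updated = dict(counts)
        updated[a] = updated.get(a, 0) + 1
        return epsilon(updated, total + 1, mean, sd)
    return min(candidates, key=eps_if_married)
```

Early in a simulation, when N is small, each marriage moves the observed distribution a great deal, so the chosen spouses tend to fill in whichever age-difference bands are currently under-represented.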
11.3.8. Heterogeneity
The signature characteristic of demographic microsimulation is that otherwise identical simulated individuals can have different life courses. This contrasts with macrosimulation models and underlies one of the main justifications for choosing micro over macrosimulation strategies. However, the extent of population heterogeneity that arises from the action of the Monte-Carlo process alone is not always adequate for every investigation. In demographic microsimulation, it is often useful to add heterogeneity in ways that correspond to biological or sociological processes. SOCSIM introduces heterogeneity in two ways: first, by providing for the inclusion of rate-independent population subgroups, and second, through the inclusion of hazard rate multipliers. Rate multipliers provide a flexible mechanism for
increasing heterogeneity in the likelihoods of all events at the individual level. These multipliers can be driven by either deterministic or stochastic mechanisms.
11.4. Rate independent population subgroups
Demographic microsimulation is greatly enhanced by the ability to specify population subgroups whose lives are governed by distinct sets of demographic rates. A great deal of population heterogeneity can be modeled by dividing the population into subgroups, especially subgroups which are recognized by data-collecting authorities. Subgroups can represent geographical areas, ethnic identities, disease states, or any characteristic that might affect demographic rates. Further, transitions between subpopulations can be modeled as demographic events in and of themselves. Zagheni (2011) and Wachter et al. (2002) model the progression of HIV/AIDS as a series of transitions between population subgroups. SOCSIM allows the user to specify up to 64 population subgroups. Members of each subgroup have events scheduled according to that subgroup’s complete rate set. Each subgroup has its own independent age, sex, marital status, and parity specific set of demographic rates. In addition, the rates of transition from each subgroup to every other subgroup can vary either by age or by the duration of the individual’s stay in the subgroup. Transition rate multipliers exist so that additional individual-level heterogeneity in subgroup transition rates can be included. SOCSIM assigns subgroup membership at birth by inheritance from the mother, the father, the same-sex parent, or the opposite-sex parent.

11.4.1. Rate multipliers
Hazard rate multipliers can be used to increase heterogeneity across individuals in a demographic microsimulation. Because the distributions of waiting times for all events in SOCSIM are piecewise exponential, proportional shifts in each individual’s hazard rate schedule are easy to implement by simply rescaling the event’s hazard rate schedule.
With piecewise exponential waiting times, hazard rate multipliers are theoretically appealing because the type of heterogeneity that they induce is the sort captured by the Cox proportional hazards model (Cox, 1972). If one thinks of the specified demographic rates as determining the baseline hazard, then rate multipliers provide a way of specifying the linear part of a Cox model. Individual-level characteristics, either demographic or socioeconomic, can be used to determine the value of a demographic rate multiplier, and that multiplier then brings about a proportional shift in the entire hazard rate schedule to which the individual is subject.
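To see why a proportional multiplier is cheap to apply under piecewise-exponential waiting times, consider the following sketch (illustrative, not SOCSIM's code): multiplying every month's hazard by the multiplier is the only change needed when sampling an event time by inversion.

```python
import math
import random

def sample_waiting_time(monthly_rates, multiplier, rng):
    """Draw a waiting time (in months) from a piecewise-exponential
    distribution whose hazard during month i is multiplier * monthly_rates[i].
    Uses inversion: draw E ~ Exp(1) and return the time at which the
    cumulative hazard first reaches E."""
    target = rng.expovariate(1.0)
    cumulative = 0.0
    for month, rate in enumerate(monthly_rates):
        hazard = multiplier * rate
        if hazard > 0 and cumulative + hazard >= target:
            # event falls within this month; interpolate the fraction
            return month + (target - cumulative) / hazard
        cumulative += hazard
    return math.inf  # hazard schedule exhausted before the event occurred
```

With constant monthly rates, doubling the multiplier halves the expected waiting time, which is exactly the proportional shift of the entire hazard schedule described above.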
This is an appealing mechanism for simulating social science models, which are often conceptualized in terms of the Cox model. Note that if individual-level characteristics alone determine the value of the hazard rate multiplier, then observationally equivalent individuals will face identical hazard rates. In this sense the hazard multiplier might be called “deterministic”. The Monte-Carlo process, of course, can still produce different outcomes for identical individuals with identical hazard rate multipliers. Biological processes such as fertility call for a different sort of hazard rate multiplier, specifically one that adds variability across observationally identical individuals rather than simply shifting the effective hazard rates up or down. Fertility hazard rate multipliers might be called “stochastic” in the sense that observationally identical individuals may have different fertility multipliers. Stochastic hazard rate multipliers add variation to the hazard rates faced by observationally equivalent individuals. In the case of fertility, this serves the purpose of increasing the variance of sib-set size, which in turn can affect many aspects of the kinship network (Ruggles, 1993). Unless disabled by the user, each female in a SOCSIM simulation receives a fertility multiplier, fmult, at birth, which stays with her for her entire life. Females in the initial population can have multipliers assigned by the investigator or generated by SOCSIM. SOCSIM-generated fertility multipliers have a mean of 1.0 and a variance of 0.416 (Lachenbruch, Klepfer, & Rhodes, 1968). Consequently, fertility multipliers do not affect average fertility, so specified fertility rates can be achieved but with a higher variance of sib-set sizes. Since it may be desirable to model fertility as heritable, SOCSIM offers a two-parameter scheme for determining the degree of inheritance of fertility. The two user-determined parameters are α and β in Eq.
(11.7) and Eq. (11.8), which are 0 and 1, respectively, by default. γ in Eq. (11.7) is a Beta-distributed pseudo-random number with mean 1 and variance 0.416.

x = α fmult_mother + (1 − α)γ        (11.7)

fmult_daughter = 2.5 exp(β log(x/2.5))        (11.8)

Note that with the default β = 1, Eq. (11.8) reduces to fmult_daughter = x, so with α = 0 the daughter’s multiplier is simply the random draw γ.
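A minimal sketch of this multiplier machinery follows. Only the mean of 1.0, the variance of 0.416, and the Eq. (11.7) mixing rule are taken from the text; the [0, 2.5] support and the Beta shape parameters derived from it are back-calculated assumptions (for gamma = 2.5 * Beta(a, b), a mean of 1 requires a/(a + b) = 0.4, and a variance of 0.416 then gives a + b ≈ 2.606).

```python
import random

# Sketch, not SOCSIM source. Shape parameters are assumptions back-calculated
# so that draw_gamma has mean ~1.0 and variance ~0.416 on a [0, 2.5] support.
A_SHAPE, B_SHAPE, SUPPORT = 1.042, 1.563, 2.5

def draw_gamma(rng):
    """Beta-distributed pseudo-random multiplier, mean ~1.0, variance ~0.416."""
    return SUPPORT * rng.betavariate(A_SHAPE, B_SHAPE)

def mix_x(fmult_mother, rng, alpha=0.0):
    """Eq. (11.7): x = alpha * fmult_mother + (1 - alpha) * gamma.
    With the default alpha = 0 there is no inheritance and x is simply gamma."""
    return alpha * fmult_mother + (1.0 - alpha) * draw_gamma(rng)

rng = random.Random(1)
draws = [draw_gamma(rng) for _ in range(100_000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Because the draws average to 1.0, attaching a multiplier to each woman spreads out completed family sizes without changing the population-level fertility rates, which is exactly the property claimed above.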
11.5. Future directions
Demographic microsimulation has been a tool of social scientists since the 1970s. For much of that time, microsimulation was among the most computationally demanding tasks that demographers performed. With the rapid increase in accessibility of computing power, however, microsimulation no longer requires any special computing hardware.
The limiting factor in the production of demographic microsimulation results today is the availability of complete rate sets. As noted above, demographic microsimulation programs like SOCSIM allow the investigator to specify an elaborate set of demographic rates, all of which may change over time. This is not a problem for the computer, but developing an appropriate rate set can be a research project in and of itself. Fortunately, the paucity of demographic rates is being addressed in two ways: first, in new online archives of carefully constructed demographic rates, and second, in new statistical techniques for rate “calibration.” The Human Mortality Database (http://mortality.org) and the Human Fertility Database (http://www.humanfertility.org) each provide demographic rates for many countries by single year of age over long periods, both in cohort and period terms. Both projects provide detailed descriptions of their methods and standards, thus facilitating international comparison of simulation results. An emerging field of research in microsimulation is “rate calibration,” that is, developing rate sets that have properties that make them good substitutes for rates derived from survey or census data. Calibration techniques are generally computation-intensive, relying on Bayesian approaches such as Bayesian melding. Bayesian melding was introduced in Poole and Raftery (2000) and Raftery, Givens, and Zeh (1995) for deterministic simulation; it was applied to stochastic simulation in Sevcikova et al. (2007) and to rate calibration for demographic microsimulation using SOCSIM in Zagheni (2010). The flexibility of this approach derives from the possibility of using prior knowledge of various sorts, qualitative as well as data-driven. The elegance of this approach derives in part from its use of the microsimulation program itself to derive rates for microsimulation.
References
Bartlema, J. (1988). Modelling step-families. Revue Européenne de Démographie [European Journal of Population], 4(3), 197–221.
Ní Bhrolcháin, M. (2005). The age difference at marriage in England and Wales: A century of patterns and trends. Population Trends, 120, 7–14.
Billari, F. C., Ongaro, F., & Prskawetz, A. (2003). Introduction: Agent-based computational demography. In F. C. Billari & A. Prskawetz (Eds.), Agent-based computational demography: Using simulation to improve our understanding of demographic behaviour (pp. 1–17). New York, NY: Springer.
Bittles, A. H. (2008). A community genetics perspective on consanguineous marriage. Community Genetics, 11(6), 324–330. PMID: 18690000.
Castle, C. J. E., & Crooks, A. T. (2006). Principles and concepts of agent-based modelling for developing geospatial simulations. CASA Working Papers, Centre for Advanced Spatial Analysis (UCL), London, UK. Retrieved from http://eprints.ucl.ac.uk/3342/. Accessed in September 2006.
Cox, D. R. (1972). Regression models and life-tables. Journal of the Royal Statistical Society, Series B (Methodological), 34(2), 187–220.
Gakidou, E., & King, G. (2006). Death by survey: Estimating adult mortality without selection bias from sibling survival data. Demography, 43(3), 569–585.
Goldman, N. (1978). Estimating the intrinsic rate of increase of population from the average numbers of younger and older sisters. Demography, 15(4), 499–507.
Goldstein, J. R. (1999). Kinship networks that cross racial lines: The exception or the rule? Demography, 36(3), 399–407.
Goodman, L. A., Keyfitz, N., & Pullum, T. W. (1974). Family formation and the frequency of various kinship relationships. Theoretical Population Biology, 5(1), 1–27.
Hammel, E. (1990). Demographic constraints on the formation of traditional Balkan households. Dumbarton Oaks Papers, 44, 173–186.
Hammel, E., Mason, C., Wang, F., & Yang, H. (1991). Rapid population change and kinship: The effects of unstable demographic changes on Chinese kinship networks 1750–2250. In Consequences of rapid population growth in developing countries. New York, NY: Taylor & Francis.
Hammel, E., McDaniel, C., & Wachter, K. W. (1981). The kin of the aged in A.D. 2000: The chickens come home to roost. In Aging and social change (pp. 11–40). New York, NY: Academic Press.
Hammel, E., & Wachter, K. W. (1996). Evaluating the Slavonian census of 1698 part II: A microsimulation test and extension of the evidence. Revue Européenne de Démographie [European Journal of Population], 12(4), 295–326.
Hammel, E. A. (2005). Demographic dynamics and kinship in anthropological populations. Proceedings of the National Academy of Sciences of the United States of America, 102(6), 2248–2253. PMID: 15677714.
Hammel, E. A., McDaniel, C. K., & Wachter, K. W. (1979). Demographic consequences of incest taboos: A microsimulation analysis. Science, 205(4410), 972–977. PMID: 17795542.
Human Fertility Database. Max Planck Institute for Demographic Research (Germany) and Vienna Institute of Demography (Austria). Retrieved from www.humanfertility.org. Accessed on August 28, 2014.
Lachenbruch, P., Klepfer, J., & Rhodes, S. (1968). Demographic microsimulation model POPSIM I: Manual for programs to generate vital events, closed core model. Technical Report 3, Research Triangle Institute.
Lesthaeghe, R. (1995). The second demographic transition in western countries: An interpretation. In K. Mason & A. Jenson (Eds.), Gender and family change in developed societies (pp. 17–62). Oxford: Clarendon Press.
Li, J., & O’Donoghue, C. (2013). A survey of dynamic microsimulation models: Uses, model structure and methodology. International Journal of Microsimulation, 6(2), 3–55.
Lin, J. (1994). Parity and security: A simulation study of old-age support in rural China. Population and Development Review, 20(2), 423–448.
Lin, J. (1995). Changing kinship structure and its implications for old age support in urban and rural China. Population Studies, 49(1), 127–145.
Masquelier, B. (2013). Adult mortality from sibling survival data: A reappraisal of selection biases. Demography, 50(1), 207–228.
McDaniel, C. K., & Hammel, E. A. (1984). A kin-based measure of r and an evaluation of its effectiveness. Demography, 21(1), 41–51.
Morand, E., Toulemon, L., Pennec, S., Baggio, R., & Billari, F. (2010). Demographic modelling: The state of the art. Working Paper No. 2.1a, Ined, Paris.
Murphy, M. (2010). Changes in family and kinship networks consequent on the demographic transitions in England and Wales. Continuity and Change, 25(1), 109–136.
Murphy, M. (2011). Long-term effects of the demographic transition on family and kinship networks in Britain. Population and Development Review, 37, 55–80.
Niazi, M., & Hussain, A. (2011). Agent-based computing from multi-agent systems to agent-based models: A visual survey. Scientometrics, 89(2), 479–499.
Poole, D., & Raftery, A. E. (2000). Inference for deterministic simulation models: The Bayesian melding approach. Journal of the American Statistical Association, 95(452), 1244–1255.
Raftery, A. E., Givens, G. H., & Zeh, J. E. (1995). Inference from a deterministic population dynamics model for bowhead whales. Journal of the American Statistical Association, 90(430), 402–416.
Ruggles, S. (1987). Prolonged connections: The rise of the extended family in nineteenth-century England and America. Madison, WI: University of Wisconsin Press.
Ruggles, S. (1993). Confessions of a microsimulator. Historical Methods: A Journal of Quantitative and Interdisciplinary History, 26(4), 161–169.
Schelling, T. C. (1971). Dynamic models of segregation. The Journal of Mathematical Sociology, 1(2), 143–186.
Sevcikova, H., Raftery, A., & Waddell, P. (2007). Assessing uncertainty in urban simulations using Bayesian melding. Transportation Research Part B, 41, 652–669.
Smith, J. (1991). Aging together, aging alone. In F. Ludwig (Ed.), Life span extension: Consequences and open questions (pp. 81–93). New York, NY: Springer.
Smith, J., & Oeppen, J. (1993). Estimating the number of kin in historical England using demographic microsimulation. In Old and new methods in historical demography. Oxford: Clarendon Press.
Smith, J. E. (1987). The computer simulation of kin sets and kin counts. In J. Bongaarts, K. W. Wachter, & T. Burch (Eds.), Family demography: Methods and their application (pp. 249–267). Oxford: Oxford University Press.
Spielauer, M. (2007). Dynamic microsimulation of health care demand, health care finance and the economic impact of health behaviours: Survey and review. International Journal of Microsimulation, 1(1), 35–53.
University of California Berkeley. (2014). SOCSIM. Available at www.mortality.org or www.humanmortality.de
University of California Berkeley and Max Planck Institute for Demographic Research. (2014). Human mortality database. Available at www.mortality.org or www.humanmortality.de
Van Imhoff, E., & Post, W. (1998). Microsimulation methods for population projection. Population: An English Selection, 10(1), 97–138. PMID: 12157954.
Wachter, K. W. (1980). The sisters’ riddle and the importance of variance when guessing demographic rates from kin counts. Demography,
17(1), 103–114.
Wachter, K. W. (1997). Kinship resources for the elderly. Philosophical Transactions: Biological Sciences, 352(1363), 1811–1817.
Wachter, K. W., Hammel, E. A., & Laslett, P. (1978). Statistical studies of historical social structure. Waltham, MA: Academic Press.
Wachter, K. W., Knodel, J. E., & VanLandingham, M. (2002). AIDS and the elderly of Thailand: Projecting familial impacts. Demography, 39(1), 25–41.
Willekens, F. (2009). Continuous-time microsimulation in longitudinal analysis. In A. Zaidi, A. Harding, & P. Williamson (Eds.), New frontiers in microsimulation modelling (pp. 413–436). Farnham, England: Ashgate.
Yang, H. (1992). Population dynamics and kinship of the Chinese rural elderly: A microsimulation study. Journal of Cross-Cultural Gerontology, 2, 135–150.
Zagheni, E. (2010). The impact of the HIV/AIDS epidemic on orphanhood probabilities and kinship structure in Zimbabwe. Ph.D. dissertation, University of California, Berkeley.
Zagheni, E. (2011). The impact of the HIV/AIDS epidemic on kinship resources for orphans in Zimbabwe. Population and Development Review, 37(4), 761–783.
Zagheni, E. (2014). Microsimulation in demographic research. In J. D. Wright (Ed.), International encyclopaedia of the social and behavioural sciences (2nd ed.). Elsevier.
Zhao, Z. (1994). Rapid demographic transition and its influence on kinship networks, with particular reference to China. In N. Cho (Ed.), Low fertility in East and South Asia: Issues and policies (pp. 28–58). Seoul, South Korea: Korean Institute for Health and Social Affairs.
Zhao, Z. (1998). Demographic conditions, microsimulation, and family support for the elderly: Past, present and future. In P. Horden & R. Smith (Eds.), The locus of care: Families, communities, and institutions in history (pp. 259–279). London: Routledge.
Zhao, Z. (2006). Computer microsimulation and historical study of social structure: A comparative review of SOCSIM and CAMSIM. Revista de Demografía Histórica, 24(2), 59–88.
Zinn, S. (2012). A mate-matching algorithm for continuous-time microsimulation models. International Journal of Microsimulation, 5(1), 31–51.
CHAPTER 12
Spatial Models
Robert Tanton and Graham Clarke
12.1. Introduction
This chapter describes spatial microsimulation models, a class of microsimulation model that has been developed over the last 40 years. The chapter starts with a discussion of the differences and similarities between microsimulation and spatial microsimulation, and then describes what spatial microsimulation can be used for. We then discuss different methods for spatial microsimulation before describing how spatial microsimulation models can be validated. The chapter finishes with a conclusion and future directions for spatial microsimulation.
CONTRIBUTIONS TO ECONOMIC ANALYSIS, VOLUME 293, ISSN: 0573-8555, DOI: 10.1108/S0573-855520140000293011
© 2014 by Emerald Group Publishing Limited. All rights reserved.

12.2. Context
Spatial microsimulation is a class of microsimulation that operates at a spatial scale. Spatial indicators are an important part of policy analysis and development, and statistical research on developing spatial indicators has progressed significantly over the last few years. Policy makers want to know indicators for electorates (Department of Parliamentary Services, 2009), researchers recognise the importance of spatial disadvantage (Pfeffermann, 2002; Procter, Clarke, Ransley, & Cade, 2008; Tanton, Harding, & McNamara, 2010), and the public want to know how their community is faring in terms of indicators like income, poverty rates, or inequality. One of the drivers behind spatial microsimulation models is a lack of small area geographic information. Most countries use a Census to derive small area social and demographic data, but a Census is expensive to conduct and is usually conducted every 10 years (the United Kingdom) or at best every 5 years (Australia). This means that before a new Census comes out, the demographic data being used by researchers and planners in the United Kingdom is at least 10 years old. Spatial microsimulation can be used to fill this gap in spatial information, but because the geographic data comes from a model, the model can also be used to derive demographic results for different scenarios, for example different birth rates in a particular area. These small area estimates are important for planning in urban areas (see Harding, Vidyattama, & Tanton, 2011) and for identifying areas that may be suffering from drought or other issues in regional areas. The spatial scale of the final estimates will depend on the spatial scale of any benchmarks being used: smaller areas could be used in more populous cities, with larger areas used in regional areas. The technique was developed in the mid-1980s to assist in health care planning (Clarke, Forte, Spowage, & Wilson, 1984). In fact, the School of Geography at the University of Leeds was involved in many of the early spatial microsimulation models, which were developed to provide up-to-date geographic information. These early models included Synthesis (Birkin & Clarke, 1988) and a model of demand for water consumption (Clarke, Kashti, McDonald, & Williamson, 1997). Because spatial microsimulation uses a microsimulation approach, it creates a unit record data file for each small area. This means that cross tabulations can be calculated, so a user can look at poverty by age group and family type, for example (see Tanton, 2011 for estimates of poverty for single, older people). This is a significant advantage of spatial microsimulation over other small area estimation methods (see Pfeffermann, 2002).
Because a unit record file is created for each geographical area, a spatial microsimulation model can potentially be linked to other models to derive small area estimates from the other model. An example is deriving small area estimates from a tax/transfer microsimulation model (see Tanton, Vidyattama, McNamara, Vu, & Harding, 2009). Linking a tax/transfer microsimulation model, which provides estimates of policy change, with a spatial microsimulation model can then provide small area estimates of a tax/transfer policy change. This is useful for politicians wanting to know which electorate is going to be affected by a tax/transfer policy change. It is the flexibility derived from having a set of records that provides estimates for each small area that makes spatial microsimulation a powerful tool for researchers and policy makers.

12.3. Methodological characteristics and choices
This section outlines the different methods that can be used for spatial microsimulation. We start with a classification of spatial microsimulation
models, to provide some context to the methodological discussion. Most of the methods require a set of small area benchmarks and these are then discussed. Each method has advantages and disadvantages and these are also discussed.
12.3.1. Classification of spatial microsimulation models
Any attempt to classify a set of methods is going to be difficult, as methods can cross boundaries. Spatial microsimulation methods can be classified into dynamic and static models, as shown in Figure 12.1. As outlined in Chapter 10, dynamic microsimulation models take a base dataset, age this dataset over time and model certain life events including marriage, deaths and births. They usually use probabilities to model these life events. For dynamic spatial microsimulation models, the life events need to be available or they need to be modelled for the spatial units required. This means that dynamic spatial microsimulation models are usually very large, complex and data hungry. One of the biggest dynamic spatial microsimulation models is SVERIGE, which uses spatial registry data for Sweden to model births, deaths and migration (see Rephann, 2004; Vencatasawmy et al., 1999). Static microsimulation models (see Chapter 3) do not model life events so, for example, the proportion of people married (and in fact, the people married on the base dataset) does not change. A static microsimulation model can however recalculate certain attributes, like incomes or eligibility for legal-aid services, using the unit record data on the base dataset as the starting point for a number of potential what-if scenarios. Static
Figure 12.1. Classification of spatial microsimulation models. [Tree diagram: Spatial Microsimulation divides into Dynamic and Static; Static divides into Synthetic Reconstruction (IPF) and Reweighting; Reweighting divides into Probabilistic (Combinatorial Optimisation) and Deterministic (Generalised Regression, IPF).]
microsimulation models are usually used to examine the impacts of various rule-based systems, where it is easy to use a rule to calculate pension eligibility or the amount of tax a household should pay based on a set of rules (which can then change following a policy change). A static spatial microsimulation model is a lot easier to programme than a dynamic spatial microsimulation model, as there are fewer parameters and characteristics to model. This chapter provides only a brief description of the methods. More information on the methods can be found in Tanton and Edwards (2013) and Tanton (2014).
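As a concrete example of the deterministic reweighting branch shown in Figure 12.1, the sketch below uses iterative proportional fitting (IPF) to adjust survey weights until the weighted totals match one small area's benchmark tables. The records and benchmark values are invented for illustration.

```python
# Sketch of deterministic reweighting via iterative proportional fitting (IPF).
# records and benchmarks are invented toy data: each record is a survey unit,
# and benchmarks hold one small area's known totals per benchmark table.
def ipf_weights(records, benchmarks, iterations=50):
    weights = [1.0] * len(records)
    for _ in range(iterations):
        for attr, targets in benchmarks.items():
            # weighted total currently in each category of this benchmark table
            totals = {cat: 0.0 for cat in targets}
            for rec, w in zip(records, weights):
                totals[rec[attr]] += w
            # rescale weights so this table's totals are matched
            for i, rec in enumerate(records):
                if totals[rec[attr]] > 0:
                    weights[i] *= targets[rec[attr]] / totals[rec[attr]]
    return weights

records = [{"sex": "f", "emp": "y"}, {"sex": "f", "emp": "n"},
           {"sex": "m", "emp": "y"}, {"sex": "m", "emp": "n"}]
benchmarks = {"sex": {"f": 60, "m": 40}, "emp": {"y": 70, "n": 30}}
w = ipf_weights(records, benchmarks)
```

Carrying the survey records with these area-specific weights gives the unit record file for that area discussed above; repeating the fit with each area's own benchmarks yields the full set of small area files.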
12.3.2. Benchmarks

Most spatial microsimulation models require some known aggregates to benchmark against. These aggregates can come from a Census, from administrative data, or from a mixture of the two. The known aggregates are then used to create records for each small area such that, when the records are summed, they reflect the aggregate data. These records are either synthetic or taken from a sample survey; information on preparing sample surveys for spatial microsimulation can be found in Cassells, Miranti, and Harding (2013). The known aggregates include demographic details like age, sex or family type; economic variables like employment status, incomes, rent payments or mortgage payments, if available; or social variables, like deprivation, which may in turn combine income, employment, health/disability, education, housing, crime and living environment. The SimObesity model uses an index of deprivation as a benchmark, or constraint, variable (see Edwards & Clarke, 2013). The aggregates can be cross-tabulated tables, as used for the SpatialMSM model in Australia (see Tanton, Vidyattama, Nepal, & McNamara, 2011), or one-way tables. An exception is large-scale dynamic spatial microsimulation models which, rather than creating a synthetic file or using sample surveys, use administrative data for the full population in each small area and then simulate births, deaths and migration to project the population of these small areas into the future. An example of such a large-scale model is SVERIGE (see Vencatasawmy et al., 1999).

12.3.3. Synthetic reconstruction methods

Figure 12.1 shows that static models can be split according to whether they use a reconstruction approach to reconstruct the population for an area or a reweighting approach to reweight a survey file to the population of an area.
A synthetic reconstruction method creates an imaginary list of individuals and households whose characteristics, when aggregated, match known aggregates from another data source for the area being estimated. The starting point may be to create a
Spatial Models
population that matches a known age/sex distribution; this population might then be adjusted to reflect a known labour force distribution; and then occupation or industry could be added. These characteristics are matched sequentially, rather than all at once. A few different methods have been used for synthetic reconstruction (Clarke & Spowage, 1994; Williamson, Clarke, & McDonald, 1996), but the main one is iterative proportional fitting (IPF) (Birkin & Clarke, 1988, 1989). The IPF method can either build an entirely synthetic dataset for a small area from Census tables, or use a national sample from a survey to select records to fill a particular small area subject to constraint tables from the Census, in which case it is a reweighting method. The first approach was used in early work by Birkin and Clarke; the second was a later development. Models that use the IPF method include SimLeeds (Ballas & Clarke, 1999; Ballas, Clarke, & Dewhurst, 2006). A key advantage of this approach is that the model starts with a household in a specific location; it then uses the census data in that locality to drive the probability of that household having other attributes. Another advantage of the IPF method is that it does not require a microdata set, as it creates a synthetic microdata set using such conditional probabilities.

12.3.4. Reweighting approaches

A reweighting approach takes an already available survey or sample dataset and reweights it to known small area characteristics. There are a number of reweighting approaches based on different methods: some start with the weights already on surveys, which correct for sample design, and adjust these weights to area-level benchmarks, while others select observations from a survey until some known benchmark totals are reached. One method that selects observations from a survey is the combinatorial optimisation (CO) method.
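The two-way fitting at the heart of the IPF method described in Section 12.3.3 can be sketched as follows. The seed cross-tabulation and the marginal totals here are illustrative, not drawn from any real census table:

```python
# Minimal iterative proportional fitting (IPF) sketch: scale a seed
# cross-tabulation until its row and column sums match known benchmarks.

def ipf(seed, row_targets, col_targets, iters=100, tol=1e-9):
    table = [row[:] for row in seed]
    for _ in range(iters):
        # Fit rows: scale each row so it sums to its target total.
        for i, target in enumerate(row_targets):
            s = sum(table[i])
            if s > 0:
                table[i] = [v * target / s for v in table[i]]
        # Fit columns: scale each column so it sums to its target total.
        for j, target in enumerate(col_targets):
            s = sum(row[j] for row in table)
            if s > 0:
                for row in table:
                    row[j] *= target / s
        # Converged when the row sums still match after the column step.
        if all(abs(sum(table[i]) - t) < tol
               for i, t in enumerate(row_targets)):
            break
    return table

seed = [[1.0, 2.0], [3.0, 4.0]]   # e.g. age group by employment status
fitted = ipf(seed, row_targets=[60, 40], col_targets=[70, 30])
```

The fitted cell values can then be read as conditional probabilities (or expected counts) for the small area.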
CO is a mathematical process that finds an optimal object from a finite set of objects. Applied to spatial microsimulation, it is used to choose the records from a survey that best represent a small area (Williamson et al., 1996). The method is an iterative approach which selects a combination of households from the microdata to reproduce, as closely as possible, the population in the small area. The process starts with a random selection from the microdata and then considers the effect of replacing one household at a time. If a replacement improves the fit to the small area benchmarks, it is kept; if not, it is rejected and another household is tried. This process is repeated, gradually improving the fit. An assessment of this technique by Voas and Williamson (2000) found that the results were reasonable for any variables that were part of the set of constraint tables. Even better results can be achieved if the choice of
the constraints is allowed to vary spatially depending on the geodemographics of the area being simulated (Smith, Clarke, & Harland, 2009). However, estimating cross tabulations for variables that were not in the list of constraints could result in a much poorer fit. The worst case for this method is that every single combination of households has to be assessed to find the best fit, which maximises the time the procedure takes to run. Further developments of CO techniques therefore built some intelligence into the search for records to select from the microdata, rather than selecting them purely at random (Williamson, Birkin, & Rees, 1998). CO with a simulated annealing (SA) technique, which achieves a 'smarter' choice of households, has been used by Williamson et al. (1996), by Hynes et al. in a static version of the SMILE spatial microsimulation model for Ireland (Hynes, Morrissey, & O'Donoghue, 2006; Hynes, O'Donoghue, Morrissey, & Clarke, 2009), in Micro-MaPPAS, an extension of SimLeeds, and in a model to estimate crime (Kongmuang, Clarke, Evans, & Jin, 2006). An alternative is to use a CO method with a deterministic rather than a probabilistic routine (Edwards & Clarke, 2013). The advantage of a deterministic routine is that the same results are produced each time the model is run, although it is then no longer a reweighting approach (which the probabilistic routine is). A deterministic CO method uses a formula, rather than random selection, to choose the records from the original dataset; in its calculations it is very similar to the IPF method described above. The method is described in Ballas, Rossiter, Thomas, Clarke, and Dorling (2005) and has been used by Edwards and Clarke (2013) for estimating obesity, and by Anderson (2007a, 2007b) for estimating poverty in England and Wales. Another reweighting method (used for the SMILE model in Ireland) is quota sampling.
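The CO search loop described above, in its simplest hill-climbing form (without the simulated annealing refinement), can be sketched as follows; the survey records, household types and benchmarks are invented for illustration:

```python
import random

# Combinatorial optimisation sketch: pick n survey households for an area,
# then repeatedly try replacing one household, keeping any swap that does
# not worsen the total absolute error against the area benchmarks.

def tae(selection, benchmarks):
    """Total absolute error of the selection's counts vs. the benchmarks."""
    counts = {k: 0 for k in benchmarks}
    for hh in selection:
        counts[hh["type"]] += 1
    return sum(abs(counts[k] - benchmarks[k]) for k in benchmarks)

def fit_area(survey, benchmarks, n, steps=2000, seed=0):
    rng = random.Random(seed)
    selection = [rng.choice(survey) for _ in range(n)]  # random start
    error = tae(selection, benchmarks)
    for _ in range(steps):
        i = rng.randrange(n)
        candidate = selection[:i] + [rng.choice(survey)] + selection[i + 1:]
        cand_error = tae(candidate, benchmarks)
        if cand_error <= error:       # keep improving (or equal) swaps
            selection, error = candidate, cand_error
    return selection, error

survey = [{"type": "couple"}] * 5 + [{"type": "single"}] * 5
benchmarks = {"couple": 7, "single": 3}
best, err = fit_area(survey, benchmarks, n=10)
```

Simulated annealing differs only in also accepting some worsening swaps early on, with a probability that falls as the 'temperature' cools, to avoid local optima.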
This is a probabilistic reweighting method which operates in a similar way to a CO approach with SA; indeed, early versions of the SMILE model used CO with SA (see Hynes et al., 2009). The development, described in Farrell, Morrissey, and O'Donoghue (2013), amends the sampling procedure to improve computational efficiency: once a household is selected it is not replaced, so the method samples without replacement. This avoids the repeated sampling of a household that can occur in CO with SA. The method has been used to estimate inequality and income redistribution in Ireland (Farrell et al., 2013). Another reweighting method, one that is deterministic but does not select records from the survey dataset as the previous methods do, uses a generalised regression reweighting methodology to reweight survey data to reliable aggregate data. This method is described in Tanton et al. (2011) and Tanton, Harding, and McNamara (2013).
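The quota sampling idea described above, drawing households without replacement until each benchmark quota for the area is filled, can be sketched as follows; the survey records and quota cells are illustrative, not taken from the SMILE model itself:

```python
import random

# Quota sampling sketch: draw households from the survey *without
# replacement*, keeping a draw only if its quota cell still has room,
# until every benchmark quota for the area is filled.

def quota_sample(survey, quotas, seed=0):
    rng = random.Random(seed)
    pool = survey[:]
    rng.shuffle(pool)                  # random draw order
    filled = {k: 0 for k in quotas}
    selected = []
    for hh in pool:                    # each record is considered once
        cell = hh["type"]
        if filled.get(cell, 0) < quotas.get(cell, 0):
            selected.append(hh)
            filled[cell] += 1
        if filled == quotas:
            break
    return selected

survey = [{"id": i, "type": "couple" if i % 2 else "single"}
          for i in range(20)]
area = quota_sample(survey, quotas={"couple": 4, "single": 2})
```

Because each survey record is considered at most once, no household can be drawn twice, which is the efficiency gain over resampling approaches.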
The generalised regression procedure uses a regression model to calculate a new set of weights, given the constraints provided for each small area. The weights are restricted to positive values, which means the procedure must iterate a number of times: it takes an initial weight from the survey and continually adjusts it until reasonable results are achieved or a maximum number of iterations is reached. This method has been used to derive estimates of poverty (Tanton, 2011; Tanton et al., 2010), housing stress (Phillips, Chin, & Harding, 2006a, 2006b; Tanton et al., 2009), wealth (Vidyattama, Cassells, Harding, & McNamara, 2013) and disability (Lymer, Brown, Harding, & Yap, 2009; Lymer, Brown, Yap, & Harding, 2008).

12.3.5. Choosing a method

The choice of method comes down largely to what data and what programming languages are available. If no unit record file is available, then a synthetic reconstruction method will be required; otherwise the unit record file can be used to select records into an area based on benchmarks, so a reweighting method can be used. There has been no systematic comparison of all spatial microsimulation methods, but the CO and generalised regression methods have been compared in Tanton, Williamson, and Harding (2014), and comparisons of some methods have been reported in Harland, Heppenstall, Smith, and Birkin (2012) and Tanton (2014). Tanton et al. (2014) found little difference in the results from each method, but the CO method was able to provide estimates for all areas, whereas the generalised regression method could not at the time the analysis was conducted. In terms of access to code for each method, the CO code is freely available in FORTRAN (Williamson, 2014). The generalised regression procedure requires access to SAS and the GREGWT macro from the Australian Bureau of Statistics, which limits its availability.
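The iterative adjust-weights-to-benchmarks idea behind this family of methods can be sketched with a simple raking-style loop. This is only an illustration of the general mechanism (iterate from survey design weights toward area benchmarks while keeping weights positive); it is not the GREGWT algorithm itself, and the records, attributes and benchmark totals are invented:

```python
# Simplified reweighting sketch: start from the survey design weights and
# repeatedly adjust them toward the area benchmarks, with a floor so that
# weights stay strictly positive. Raking-style illustration, not GREGWT.

def reweight(records, init_weights, benchmarks, iters=50, floor=0.01):
    w = init_weights[:]
    for _ in range(iters):
        for var, target in benchmarks.items():
            # Current weighted count of households with this attribute.
            total = sum(wi for wi, r in zip(w, records) if r[var])
            if total > 0:
                factor = target / total
                w = [max(floor, wi * factor) if r[var] else wi
                     for wi, r in zip(w, records)]
    return w

records = [{"employed": True, "renter": True},
           {"employed": True, "renter": False},
           {"employed": False, "renter": True}]
weights = reweight(records, [1.0, 1.0, 1.0],
                   {"employed": 80.0, "renter": 50.0})
```

After convergence, the weighted survey totals reproduce the benchmarks, and the adjusted weights can be used to tabulate any other survey variable for the small area.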
An SA method has been incorporated into an R package (Kavroudakis, 2014), making it freely available.

12.3.6. Validation

Spatial microsimulation models are difficult to validate simply because the modelling is conducted to derive estimates of indicators that do not exist at the spatial scale required, so there is nothing to validate the models against (the purpose often being to estimate 'missing data'). A detailed description of validation methods is given in Edwards and Tanton (2013), so only a short description is provided here. One way to validate against the known benchmarks is to use the Total Absolute Error, or the error of the modelled results compared to the
benchmarks. This provides an indication of how well the model has estimated the benchmarks (e.g. age, sex), but gives no idea of how well the model is predicting the indicator actually required (e.g. income or poverty rates). Another method is to calculate an indicator from the model that is similar to the final indicator required, but which can be replicated using some reliable small area data, and then calculate the Standard Error Around Identity (SEI). This is similar to the coefficient of determination, R², but is calculated around the diagonal line of perfect agreement. If this similar indicator validates well, it can be assumed that the final indicator will also validate well. An example from Australia (Tanton et al., 2011) uses a poverty line that is available on the small area Census data: poverty rates based on this line are compared between the Census and the model before poverty rates are calculated from the model using a half-median equivalised disposable income poverty line. Rahman et al. also describe a Z score for validation purposes (see Rahman, Harding, Tanton, & Liu, 2013). Edwards and Tanton (2013) describe two further methods: aggregating the small area data so it can be compared to a reliable estimate from the survey used for the spatial microsimulation, and running the spatial microsimulation model at a spatial scale where reliable estimates are available from the survey and comparing the results. All these methods were used and compared in Tanton et al. (2011), and they show that spatial microsimulation models generally provide reasonable results for indicators that are not rare events on the survey dataset. Occasionally data do exist to calibrate microsimulation models: Smith, Pearce, and Harland (2011) compare their simulated smoking rates to actual data in New Zealand. With the exception of under-predicting smoking in a few zones with high numbers of Maori residents, the simulated results showed an excellent fit to reality.
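The two validation statistics just discussed can be sketched numerically. The TAE calculation is straightforward; for the SEI we use one common formulation (residuals measured from the 45-degree identity line, normalised like R²), though the exact formulation used in practice may differ, and the counts below are made up:

```python
# Validation sketch: total absolute error (TAE) against benchmark counts,
# and an SEI-style statistic computed like R-squared but with residuals
# measured from the identity line y = x rather than a fitted line.

def total_absolute_error(modelled, benchmark):
    return sum(abs(m - b) for m, b in zip(modelled, benchmark))

def sei(modelled, observed):
    mean_obs = sum(observed) / len(observed)
    ss_identity = sum((m - o) ** 2 for m, o in zip(modelled, observed))
    ss_total = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_identity / ss_total

model_counts = [102, 48, 251]    # modelled counts for three areas
census_counts = [100, 50, 250]   # published counts for the same areas
tae_value = total_absolute_error(model_counts, census_counts)
sei_value = sei(model_counts, census_counts)
```

An SEI close to 1 indicates the modelled values lie close to the identity line, that is, close to the reliable published values.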
For rarer events (such as death or disability), spatial microsimulation may not provide such reliable estimates, as outlined in the final section of this chapter. Also, for areas that are not similar to other areas sampled in the survey being used, the method can struggle to find enough observations in the survey to fill the area with, or to reweight the current observations (Birkin & Clarke, 2012). If the method cannot provide estimates for all areas, either because it does not converge for an area or because validation shows an area is poorly estimated, then there are two options. One is to use sample observations from the wider area to estimate the smaller areas within it, so, for example, using observations from London to estimate London small areas. Tanton and Vidyattama (2010) show that this provides better results for some capital cities in Australia, and Anderson (2007a) also uses this method. The other is to reduce the number of benchmarks for the areas that are not producing estimates; this has worked in recent work using the SpatialMSM model in Australia. A caveat then needs to be put on these areas as they do not
have the same number of benchmarks as other areas, so will not be as accurately estimated.
12.4. Uses and applications

Spatial microsimulation models have mainly been used to derive estimates of characteristics for small areas. Their advantage is that, because they create a synthetic dataset for each small area, they can be used to derive cross tabulations, so, for example, income by family type can be calculated. They can also be used to derive small area estimates of a characteristic that is not available on a national census, for example income in the United Kingdom. Examples of this application of spatial microsimulation modelling include estimates of incomes (Anderson, 2007a, 2007b; Birkin & Clarke, 1989), crime (Kongmuang et al., 2006), poverty (Tanton, 2011; Tanton et al., 2010), housing stress (Phillips et al., 2006), obesity (Edwards & Clarke, 2013; Procter et al., 2008), water demand (Clarke et al., 1997; Williamson et al., 1996), smoking rates (Smith et al., 2011; Tomintz, Clarke, & Rigby, 2008, 2009), wellbeing (Ballas, 2010; Mohanty, Tanton, Vidyattama, Keegan, & Cummins, 2013), trust (Hermes & Poulsen, 2013) and disability (Lymer et al., 2008). Another use for spatial microsimulation models is modelling different policies. This can be done either using the model itself (see Ballas & Clarke, 1999, 2001; Ballas et al., 2006; Clarke & Spowage, 1994; Hynes et al., 2009; Vencatasawmy et al., 1999) or by combining the spatial microsimulation model with another type of model that can model a change in policy. An example of the latter is combining a spatial microsimulation model with a tax/transfer microsimulation model (see Harding, Vu, Tanton, & Vidyattama, 2009; Tanton et al., 2009; Vu & Tanton, 2010) or a CGE model (Rao, Tanton, & Vidyattama, 2013).
Spatial microsimulation models have also been linked to location-allocation models to help locate health care facilities (Tomintz et al., 2008, 2009; Tomintz, Clarke, Rigby, & Green, 2013) and to spatial interaction models in order to provide better demand estimates that feed into the allocation models (Morrissey, Hynes, Clarke, & O'Donoghue, 2010; Nakaya et al., 2007). More recently, spatial microsimulation models have been linked to CGE models to derive small area impacts of macroeconomic changes; examples are Herault (2007) and Buddelmeyer, Herault, Kalb, and van Zijll de Jong (2009). The question of integrating economic, social and environmental issues in the Murray-Darling Basin in Australia has also generated interest in a combined CGE, microsimulation and ecological model (see Rao et al., 2013; Vidyattama, Rao, Mohanty, & Tanton, 2014). A range of rural what-if policies is discussed in O'Donoghue, Ballas, Clarke, Hynes, and Morrissey (2013).
Another use for spatial microsimulation models is projection. Projection methods can use static ageing, which involves reweighting to projected benchmarks, or dynamic ageing, which uses projections of births, deaths and migration. A number of static-ageing projection methods are described in Vidyattama and Tanton (2010, 2013), with applications shown in Harding et al. (2011); Ballas, Clarke, et al. (2005) show an example of dynamic ageing for SimBritain. Again, because a database containing characteristics of different people is projected forward, cross tabulations can be created, as shown in Harding et al. (2011). The impact of spatial microsimulation models on public policy has been extensive. They allow policy makers to look at spatial indicators like income, poverty rates, housing stress and subjective wellbeing (see Anderson, 2007a, 2007b; Department of Parliamentary Services, 2009); to look at the spatial impacts of policy change (see Ballas et al., 2006; O'Donoghue et al., 2013; Tanton et al., 2009); and to plan for future demographic change (Harding et al., 2011).
12.5. Summary and future directions

This chapter has shown that there has been considerable growth in spatial microsimulation modelling over the last few years, and the area has now earned its place in a book on microsimulation. The models used, and the validation applied to their results, show that the methods are now able not only to provide reasonable small area estimates but also to provide cross tabulations and projections for the indicators required. Linking to other microsimulation models can also provide small area estimates of a policy change, giving Governments the power to see where a proposed policy is going to have an impact before it is implemented. There are a number of ways that spatial microsimulation could develop in the future. We have split these into methodological developments; application to new subject areas; and exploration of new methods. In terms of methodological developments, we would argue that the main area of research should be testing the variability in the results from the models, that is, developing measures of confidence. While validation for spatial microsimulation models has come a long way, there is still no way to estimate the variance of the estimates and therefore to derive confidence intervals around them. Another important methodological question which still needs to be resolved is the differences between the methods, and which method is most appropriate for which job. Tanton, Williamson, and Harding (2007) compared two methods (CO and generalised regression reweighting), but there has been no comparison of all methods.
This may change in the near future, with work being done by an international collaboration estimating results using a range of models on the same synthetic dataset. The final methodological question is the efficiency of the different routines and speeding up the process. Many spatial microsimulation models are resource intensive; as an example, the SpatialMSM model used by NATSEM takes 9 hours to run for 1,300 areas, which limits the number of areas that can be run. More efficient algorithms need to be derived for the different methods to speed up the processing time. In terms of applying the method to new subjects, there are many subjects where spatial microsimulation has trouble deriving estimates, particularly when trying to estimate rare events. Examples are estimating demand for aged care services (see Lymer et al., 2009) and drug use (see Ballas, Rossiter et al., 2005). Further work needs to be done to derive reliable estimates for these topics. Having said this, there are a number of topics where spatial microsimulation is only just starting to be applied, and these areas will grow rapidly as the method gains more followers, examples being the health field and the social field. Recent work has included estimates of subjective wellbeing (see Ballas, 2010; Mohanty et al., 2013) and Indigenous disadvantage (see Vidyattama et al., 2013). Other areas where spatial microsimulation could be used are expenditure and energy use. In terms of exploring new methods, much of the recent work in spatial microsimulation has been around how it can be used in conjunction with other models (see Rao et al., 2013). This is an exciting development, as it allows small area estimates of macroeconomic changes to be derived. The final new method that needs further development is dynamic spatial microsimulation models.
There are a few, but they are resource and data intensive: demographic change needs to be estimated for every small area. This is possible where administrative data for small areas are available, for example in Sweden (see Vencatasawmy et al., 1999), but very difficult for areas where these data are not available. The main advantage of a dynamic model is that behaviour can start to be incorporated into the model, so choices can be made regarding employment and migration. This is a complex area, but it is one of the cutting edges of spatial microsimulation.
References

Anderson, B. (2007a). Creating small-area income estimates: Spatial microsimulation modelling. UK Department of Communities. Retrieved from http://webarchive.nationalarchives.gov.uk/20120919132719/http://www.communities.gov.uk/documents/communities/pdf/325286.pdf. Accessed on September 19, 2013.
Anderson, B. (2007b). Creating small area income deprivation estimates for Wales: Spatial microsimulation modelling. Chimera Working Paper No. 2007-11. University of Essex, Colchester.
Ballas, D. (2010). Geographical modelling of happiness and wellbeing. In J. Stillwell, P. Norman, C. Thomas, & P. Surridge (Eds.), Spatial and social disparities (Vol. 2, pp. 53-66). Dordrecht: Springer.
Ballas, D., & Clarke, G. P. (1999). Regional versus local multipliers of economic change? A microsimulation approach. 39th Regional Science Association (ERSA) congress, University College Dublin. University of Leeds.
Ballas, D., & Clarke, G. P. (2001). Modelling the local impacts of national social policies: A spatial microsimulation approach. Environment and Planning, 19(1), 587-606. doi:10.1068/c0003
Ballas, D., Clarke, G. P., & Dewhurst, J. (2006). Modelling the socioeconomic impacts of major job loss or gain at the local level: A spatial microsimulation framework. Spatial Economic Analysis, 1(1), 127-146. doi:10.1080/17421770600697729
Ballas, D., Clarke, G. P., Dorling, D., Eyre, H., Thomas, B., & Rossiter, D. (2005). SimBritain: A spatial microsimulation approach to population dynamics. Population, Space and Place, 11(1), 13-34.
Ballas, D., Rossiter, D., Thomas, B., Clarke, G. P., & Dorling, D. (2005). Geography matters: Simulating the local impacts of national social policies. York: Joseph Rowntree Foundation.
Birkin, M., & Clarke, G. P. (2012). Enhancing spatial microsimulation using geodemographics. Annals of Regional Science, 49(2), 515-532.
Birkin, M., & Clarke, M. (1988). SYNTHESIS: A synthetic spatial information system for urban and regional analysis: Methods and examples. Environment and Planning A, 20(12), 1645-1671.
Birkin, M., & Clarke, M. (1989). The generation of individual and household incomes at the small area level using synthesis. Regional Studies, 23(6), 535-548.
Buddelmeyer, H., Herault, N., Kalb, G., & van Zijll de Jong, M. (2009).
Linking a dynamic CGE model and a microsimulation model: Climate change mitigation policies and income distribution in Australia. Melbourne Institute Working Paper No. 3/09. Melbourne Institute of Applied Economic and Social Research, University of Melbourne.
Cassells, R., Miranti, R., & Harding, A. (2013). Building a static spatial microsimulation model: Data preparation. In R. Tanton & K. Edwards (Eds.), Spatial microsimulation: A reference guide for users (pp. 9-16). Dordrecht: Springer.
Clarke, G., Kashti, A., McDonald, A., & Williamson, P. (1997). Estimating small area demand for water: A new methodology. Water and Environment Journal, 11(3), 186-192.
Clarke, M., Forte, P., Spowage, M., & Wilson, A. (1984). A strategic planning simulation model of a district health service system: The in-patient component and results. In W. van Elmeren, R. Engelbrecht, & C. D. Flagle (Eds.), Systems science in health care. Berlin: Springer Verlag.
Clarke, M., & Spowage, M. (1994). Integrated models for public policy analysis: An example of the practical use of simulation models in health care planning. Papers in Regional Science, 55(1), 25-45. doi:10.1111/j.1435-5597.1984.tb00825.x
Department of Parliamentary Services. (2009). Poverty rates by electoral divisions, 2006. Parliamentary Library Research Paper 27, Canberra, Commonwealth of Australia.
Edwards, K., & Tanton, R. (2013). Validation of spatial microsimulation models. In R. Tanton & K. Edwards (Eds.), Spatial microsimulation: A reference guide for users (pp. 249-258). Dordrecht: Springer.
Edwards, K. L., & Clarke, G. P. (2013). SimObesity: Combinatorial optimisation (deterministic) model. In R. Tanton & K. Edwards (Eds.), Spatial microsimulation: A reference guide for users (pp. 69-85). Dordrecht: Springer.
Farrell, N., Morrissey, K., & O'Donoghue, C. (2013). Creating a spatial microsimulation model of the Irish local economy. In R. Tanton & K. Edwards (Eds.), Spatial microsimulation: A reference guide for users (pp. 105-125). Dordrecht: Springer.
Harding, A., Vidyattama, Y., & Tanton, R. (2011). Demographic change and the needs-based planning of government services: Projecting small area populations using spatial microsimulation. Journal of Population Research, 28(2-3), 203-224. doi:10.1007/s12546-011-9061-6
Harding, A., Vu, Q., Tanton, R., & Vidyattama, Y. (2009). Improving work incentives and incomes for parents: The national and geographic impact of liberalising the family tax benefit income test. Economic Record, 85(s1), S48-S58. doi:10.1111/j.1475-4932.2009.00588.x
Harland, K., Heppenstall, A., Smith, D., & Birkin, M. (2012).
Creating realistic synthetic populations at varying spatial scales: A comparative critique of population synthesis techniques. Journal of Artificial Societies and Social Simulation, 15(1), 1-24.
Herault, N. (2007). Trade liberalisation, poverty and inequality in South Africa: A computable general equilibrium-microsimulation analysis. Economic Record, 83(262), 317-328. doi:10.1111/j.1475-4932.2007.00417.x
Hermes, K., & Poulsen, M. (2013). The intraurban geography of generalised trust in Sydney. Environment and Planning A, 45(2), 276-294.
Hynes, S., Morrissey, K., & O'Donoghue, C. (2006). Building a static farm level spatial microsimulation model: Statistically matching the Irish national farm survey to the Irish census of agriculture. 46th congress of the European Regional Science Association, Volos, Greece.
Hynes, S., O'Donoghue, C., Morrissey, K., & Clarke, G. P. (2009). A spatial micro-simulation analysis of methane emissions from Irish agriculture. Ecological Complexity, 6(2), 135-146. doi:10.1016/j.ecocom.2008.10.014
Kavroudakis, D. (2014). CRAN Package sms: Spatial microsimulation. Retrieved from http://cran.r-project.org/web/packages/sms/index.html
Kongmuang, C., Clarke, G. P., Evans, A., & Jin, J. (2006). SimCrime: A spatial microsimulation model for the analysis of crime in Leeds. School of Geography Working Paper No. 06/1. University of Leeds. Retrieved from http://eprints.whiterose.ac.uk/4982/1/SimCrime_WorkingPaper_version1.1.pdf
Lymer, S., Brown, L., Harding, A., & Yap, M. (2009). Predicting the need for aged care services at the small area level: The CAREMOD spatial microsimulation model. International Journal of Microsimulation, 2(2), 27-42.
Lymer, S., Brown, L., Yap, M., & Harding, A. (2008). 2001 regional disability estimates for New South Wales, Australia, using spatial microsimulation. Applied Spatial Analysis and Policy, 1(2), 99-116. doi:10.1007/s12061-008-9006-4
Mohanty, I., Tanton, R., Vidyattama, Y., Keegan, M., & Cummins, R. (2013). Small area estimates of subjective wellbeing: Spatial microsimulation on the Australian Unity Wellbeing Index survey. NATSEM Working Paper No. 13/23, Canberra: NATSEM.
Morrissey, K., Hynes, S., Clarke, G. P., & O'Donoghue, C. (2010). Examining the factors associated with depression at the small area level in Ireland using spatial microsimulation techniques. Irish Geography, 45(1), 1-22.
Nakaya, T., Fotheringham, A. S., Hanaoka, K., Clarke, G. P., Ballas, D., & Yano, K. (2007).
Combining microsimulation and spatial interaction models for retail location analysis. Journal of Geographical Systems, 4, 345-369.
O'Donoghue, C., Ballas, D., Clarke, G., Hynes, S., & Morrissey, K. (Eds.). (2013). Spatial microsimulation for rural policy analysis. Dordrecht: Springer.
Pfeffermann, D. (2002). Small area estimation: New developments and directions. International Statistical Review, 70(1), 125-143.
Phillips, B., Chin, S., & Harding, A. (2006). Housing stress today: Estimates for statistical local areas in 2005. Paper presented to the Australian Consortium for Social and Political Research Incorporated conference, Sydney, 10-13 December, 2006.
Procter, K., Clarke, G. P., Ransley, J., & Cade, J. (2008). Micro-level analysis of childhood obesity, diet, physical activity, residential socioeconomic and social capital variables: Where are the obesogenic environments in Leeds? Area, 40(3), 323-340.
Rahman, A., Harding, A., Tanton, R., & Liu, S. (2013). Simulating the characteristics of populations at the small area level: New validation techniques for a spatial microsimulation model in Australia. Computational Statistics & Data Analysis, 57(1), 149-165. doi:10.1016/j.csda.2012.06.018
Rao, M., Tanton, R., & Vidyattama, Y. (2013). A systems approach to analyse the impacts of water policy reform in the Murray-Darling Basin: A conceptual and an analytical framework. NATSEM Working Paper No. 13/22. NATSEM, Canberra.
Rephann, T. (2004). Economic-demographic effects of immigration: Results from a dynamic spatial microsimulation model. International Regional Science Review, 27(4), 379-410. doi:10.1177/0160017604267628
Smith, D. M., Clarke, G. P., & Harland, K. (2009). Improving the synthetic data generation process in spatial microsimulation models. Environment and Planning A, 41, 1251-1268.
Smith, D. M., Pearce, J. R., & Harland, K. (2011). Can a deterministic spatial microsimulation model provide reliable small-area estimates of health behaviours? An example of smoking prevalence in New Zealand. Health & Place, 17(2), 618-624.
Tanton, R. (2011). Spatial microsimulation as a method for estimating different poverty rates in Australia. Population, Space and Place, 17(3), 222-235. doi:10.1002/psp.601
Tanton, R. (2014). A review of spatial microsimulation methods. International Journal of Microsimulation, 7(1), 4-25.
Tanton, R., & Edwards, K. (Eds.). (2013). Spatial microsimulation: A reference guide for users. Dordrecht: Springer.
Tanton, R., Harding, A., & McNamara, J. (2010). Urban and rural estimates of poverty: Recent advances in spatial microsimulation in Australia. Geographical Research, 48(1), 52-64.
doi:10.1111/j.17455871.2009.00615.x Tanton, R., Harding, A., & McNamara, J. (2013). Spatial microsimulation using a generalised regression model. In R. Tanton & K. Edwards (Eds.), Spatial microsimulation: A reference guide for users (pp. 87 103). Dordrecht: Springer. Tanton, R., & Vidyattama, Y. (2010). Pushing it to the edge: Extending generalised regression as a spatial microsimulation method. International Journal of Microsimulation, 3(2), 23 33. Tanton, R., Vidyattama, Y., McNamara, J., Vu, Q., & Harding, A. (2009). Old, single and poor: Using microsimulation and microdata to analyse poverty and the impact of policy change among older
Downloaded by Cornell University Library At 04:39 01 September 2016 (PT)
382
Robert Tanton and Graham Clarke
Australians. Economic Papers: A Journal of Applied Economics And Policy, 28(2), 102 120. doi:10.1111/j.1759-3441.2009.00022.x%20 Tanton, R., Vidyattama, Y., Nepal, B., & McNamara, J. (2011). Small area estimation using a reweighting algorithm. Journal of the Royal Statistical Society: Series A (Statistics in Society), 174(4), 931 951. Tanton, R., Williamson, P., & Harding, A. (2014). Comparing two methods of reweighting a survey file to small area data. International Journal of Microsimulation, 7(1), 76 99. Tanton, R., Williamson, P., & Harding, A. (2007). Comparing two methods of reweighting a survey file to small area data: Generalised regression and combinatorial optimisation. Presentation to the Second International Microsimulation conference, Vienna, Austria. Tomintz, M., Clarke, G. P., & Rigby, J. (2008). The geography of smoking in Leeds: Estimating individual smoking rates and the implications for the location for stop smoking services. Area, 40(3), 341 353. Tomintz, M., Clarke, G. P., & Rigby, J. (2009). Planning the location of stop smoking services at the local level: A geographic analysis. Journal of Smoking Cessation, 4(2), 61 73. Tomintz, M., Clarke, G. P., Rigby, J., & Green, J. (2013). Optimizing the location of antenatal classes. Midwifery, 29(1), 33 43. Vencatasawmy, C. P., Holm, E., Rephann, T., Esko, J., Swan, N., O¨hman, M., … Siikavaara, J. (1999). Building a spatial microsimulation model. Paper presented at the 11th European Colloquium on Quantitative and Theoretical Geography, Durham, England, September 3 7. Vidyattama, Y., Cassells, R., Harding, A., & Mcnamara, J. (2013). Rich or poor in retirement? A small area analysis of Australian private superannuation savings in 2006 using spatial microsimulation. Regional Studies, 47(5), 722 739. Vidyattama, Y., Rao, M., Mohanty, I., & Tanton, R. (2014). Modelling the impact of declining Australian terms of trade on the spatial distribution of income. 
International Journal of Microsimulation, 7(1), 100 126. Vidyattama, Y., & Tanton, R. (2010). Projecting small area statistics with Australian spatial microsimulation model (SpatialMSM). Australian Journal of Regional Studies, 16(1), 99 126. Vidyattama, Y., & Tanton, R. (2013). Projections using a static spatial microsimulation model. In R. Tanton & K. Edwards (Eds.), Spatial microsimulation: A reference guide for users (pp. 145 160). Netherlands: Springer. Voas, D., & Williamson, P. (2000). An evaluation of the combinatorial optimisation approach to the creation of synthetic microdata. International Journal of Population Geography, 6, 349 366.
Downloaded by Cornell University Library At 04:39 01 September 2016 (PT)
Spatial Models
383
Vu, Q. N., & Tanton, R. (2010). The distributional impact of the Australian government’s household stimulus package. Australian Journal of Regional Studies, 16(1), 127 145. Williamson, P. (2014). CO code and documentation. Retrieved from http:// pcwww.liv.ac.uk/∼william/microdata/CO%20070615/CO_software. html. Last Accessed on May 6. Williamson, P., Birkin, M., & Rees, P. (1998). The estimation of population microdata by using data from small area statistics and samples of anonymised records. Environment and Planning Analysis, 30, 785 816. Williamson, P., Clarke, G., & McDonald, A. (1996). Estimating small area demand for water with the use of microsimulation. In G. Clarke (Ed.), Microsimulation for urban and regional policy analysis (pp. 117 148). London: Pion Ltd.
CHAPTER 13
Transportation Models
Eric J. Miller
13.1. Introduction

Given the complex, heterogeneous nature of transportation system processes, the transportation field was a natural one for early experimentation with microsimulation models. Pioneering microsimulation applications in the field include modelling household ridesharing (Bonsall, 1982), household car ownership (Berkowitz, Gallini, Miller, & Wolfe, 1987), residential location (Mackett, 1985; Oskamp, 1995), transit route choice (Chapleau, 1986), integrated demographic, housing and labour markets (Miller, Noehammer, & Ross, 1987), road network performance (Caldwell, 1996; Mahmassani, Hu, & Peeta, 1994), travel demand modelling (Ettema, Borgers, & Timmermans, 1993; Goulias & Kitamura, 1992; Harvey & Deakin, 1996; Kreibich, 1978, 1979; Mackett, 1990; Recker, McNally, & Root, 1986a, 1986b; RDC, Inc., 1995) and transportation emissions (Hassounah & Miller, 1994; Hatzopoulou, 2008). Since these early efforts, microsimulation has gradually become a mainstream tool for modelling a variety of transportation phenomena, most notably transportation network operations (both road and transit) and travel demand forecasting and policy analysis.

While a wide variety of small-scale microsimulation models exist in the research community for hypothesis testing and experimentation with alternative models of travel behaviour, this chapter focusses on large-scale applications for operational planning, design and system control purposes. Such ‘real-world’ applications present very large computational challenges in that they typically involve the modelling of millions of trips made over tens of thousands of network links, with potentially thousands of paths through the network per trip. Practical implementation of transportation-related microsimulation advances, therefore, generally has
been closely tied to the computational capabilities of available computer hardware and software. While these capabilities have grown exponentially over time, transportation microsimulation (especially of network operations) remains computationally intensive, and efficient algorithm design remains a critical element in model development. The next section provides a brief overview of transportation systems analysis in order to establish a context and rationale for microsimulation applications. Transportation is a vast field, and full treatment of all possible microsimulation applications is well beyond the scope of this chapter. Three major areas are selected for detailed discussion: urban travel demand modelling, transportation network modelling, and population synthesis methods needed to support transportation microsimulation modelling.
13.2. Context

A ‘full specification’ of the transportation problem involves the integrated modelling of demographics, land use, housing and labour markets and, in general, economic production, consumption and exchange (all of which collectively define the context at any point in time for travel behaviour and transportation system performance). This chapter focusses solely on microsimulating urban transportation systems per se. Even with these restrictions, the modelling of the transportation system is challenging given its high dimensionality in space (travel from diverse origins to diverse destinations along many routes and modes of travel), time (frequency of travel by time of day and day of the week) and type of trip-maker (or, in the case of freight, type of shipment). The transportation system can be crudely categorized with respect to two key aspects:

(1) The spatial context being modelled: urban versus ‘intercity’/‘interregional’ travel.
(2) The type of ‘movement’ being modelled: persons versus freight/goods movements.

Freight modelling typically has not received as much attention as person travel. Microsimulation modelling of freight transportation, in particular, is currently in a very rudimentary state due to historical data limitations, complexity of the processes involved, and, until recently, a lack of attention to the topic. Some prototype freight microsimulation models are, however, emerging (Boerkamps, van Binsbergen, & Bovy, 2000; Cavalcante & Roorda, 2013; de Jong & Ben-Akiva, 2007; Fischer, Outwater, Cheng, Ahanotu, & Calix, 2005; Hunt & Stefan, 2007; Liedtke, 2006; Pourabdollahi, Karimi, & Mohammadian, 2013; Roorda, Cavalcante, McCabe, & Kwan, 2010; Samimi, Mohammadian, & Kawamura, 2010; Wang & Holguin-Veras, 2008; Wisetjindawat, Sano,
Matsumoto, & Raothanachonkun, 2007). Given the relatively preliminary nature of these models and their complexity, they will not be reviewed in detail here.

Intercity person travel models also have been historically underdeveloped (Miller, 2004) and have not generally been the focus of microsimulation modelling. In the past decade-plus this state of affairs has begun to change in terms of the emergence of ‘state-wide’ models, in which a large multi-city region (typically a state or a province) is holistically modelled (Horowitz, 2006). These large-region model systems typically employ a mixture of microsimulation and more spatially and socio-economically aggregate modelling elements. The microsimulation components of these models, however, are essentially the same as are used in urban modelling applications, which are discussed in detail below. Thus, an explicit review of state-wide microsimulation models is not undertaken herein.

Large-scale, complex models of urban passenger transportation systems have been developed and applied since the mid-1950s, starting with the first availability of high-speed digital mainframe computers and with pioneering modelling efforts in Detroit, Chicago, Toronto and other major cities in North America and Europe (Meyer & Miller, 2001). The basic unit of travel in these models is the trip: a movement from a single origin to a single destination at a given time of day by a given travel mode (auto, transit, etc.) via a given route through the transportation network to engage in a given activity (aka ‘trip purpose’) at the trip destination.

Figure 13.1. The UTMS or four-step process. (The figure links population and employment forecasts, through the sequential stages of trip generation, trip distribution, mode split and trip assignment, to link and O-D flows, times and costs, given transportation network and service attributes.)

A standard paradigm for modelling trip-making in urban regions emerged in the 1950s and 1960s (see Figure 13.1), known as the four-step or Urban
Transportation Modelling System (UTMS), which still defines standard practice worldwide (Meyer & Miller, 2001). As shown in Figure 13.1, the forecasting of travel demand involves the determination of the number of trips generated in each zone within the urban region (by trip purpose and time of day), the spatial distribution of trip destinations, the travel mode for each trip, and the route (path) from origin to destination through the road and transit networks used by each trip, given the spatial distribution of the population and employment within the region and the transportation system serving this demand. Implicit in Figure 13.1 is a classic demand-supply relationship in which the demand for transportation services depends on the performance of the system, but system performance depends on the loadings on the system. That is, the usage of a given network link depends on the link travel time (among, perhaps, other factors), but the link travel time depends on the volume of traffic using the link. The assignment of flows to specific paths (and hence links within these paths) is usually based on finding the user equilibrium, in which no trip-maker can unilaterally switch to another path in the system and improve his/her travel time (Sheffi, 1985; Wardrop, 1952). ‘Feedback’ also occurs with the distribution and mode choice components of the four-step process since these demand components also depend on modal path travel times and costs (cf. Figure 13.1). Key features of this classic four-step demand modelling process are:

• It is spatially aggregate in that the urban area is divided into a mutually exclusive and collectively exhaustive set of zones. All trips are considered to depart from the centroid of each origin zone and end at the centroid of the destination zone. Further, individual trips are not explicitly modelled; rather, origin-destination (O-D) trip flows are the unit of analysis.
• Given this spatial aggregation, the models are also typically quite aggregate with respect to trip-maker socio-economic attributes (age, occupation, income, etc.). Such attributes typically are important variables for explaining travel behaviour but are either missing from the models altogether or are included by modelling groups that are assumed to be relatively similar in behaviour (e.g. modelling the total number of work trips generated by different occupation or income groups within each zone).
• The process is also temporally aggregate or static in nature in that total trips are generated for a given time period (e.g. the morning peak period), but no explicit start times are attached to these trips. Similarly these trips are assigned to paths through the network using static equilibrium techniques in which each trip ‘simultaneously’ occupies each link in the path that it traverses. In other words, ‘clock time’ does not enter the model system, which is not a simulation system but rather one which computes a path-independent, static equilibrium system state.
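The user-equilibrium condition described above can be sketched on a toy two-route network. The following is an illustrative sketch only, with invented linear congestion functions t_i = a_i + b_i·x_i; it is not code from any of the models cited in this chapter.

```python
# Wardrop user equilibrium on two parallel routes (illustrative sketch):
# each route's travel time rises linearly with its flow x_i.
def user_equilibrium(a1, b1, a2, b2, demand):
    """Split `demand` between two routes so that both used routes have
    equal travel times, i.e. no trip-maker can gain by switching."""
    # Interior solution of a1 + b1*x1 = a2 + b2*(demand - x1):
    x1 = (a2 - a1 + b2 * demand) / (b1 + b2)
    x1 = min(max(x1, 0.0), demand)  # corner case: one route unused
    return x1, demand - x1

x1, x2 = user_equilibrium(a1=10, b1=0.02, a2=15, b2=0.01, demand=1000)
# At equilibrium both routes take 20 minutes: 10 + 0.02*500 = 15 + 0.01*500
```

Real assignment algorithms solve the same condition over tens of thousands of links and many O-D pairs, typically by iterative convex optimization rather than a closed-form split.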
This very aggregate nature of the four-step process across spatial, socio-economic and temporal dimensions generally introduces significant aggregation bias by ignoring the heterogeneous nature of trip-making behaviour (different people in different spatial-temporal contexts will make different decisions) and the generally non-linear nature of trip-makers’ decision functions. Figure 13.2 illustrates this with a simple but plausible representation of transit mode choice as a function of trip-maker income, holding all other factors constant. As shown in this figure, the probability of taking transit declines with increased income in a non-linear way. If an aggregate model is built based on zonal average incomes and transit usage, it will generate biased model parameters. Similarly, even if the ‘true’ response function is known, if aggregate, average explanatory variables are used, biased predictions will be generated by the model. Thus, the only way one can obtain unbiased estimates of transit mode choice is to both estimate the model parameters from disaggregate (individual trip-maker) data and apply this model in a disaggregate manner (apply it to each trip-maker with his/her individual attributes). Similarly, the static nature of these models represents a serious impediment to modelling road traffic, transit operations, atmospheric emissions, as well as time-dependent elements of travel demand (timing of trips, etc.), all of which require an explicit representation of behaviour within and over time to be accurately modelled. Vehicle emissions are but one important example of this: pollutant emissions cannot be accurately modelled based on average speeds since they depend in very non-linear ways on instantaneous vehicle speeds and accelerations (Hassounah & Miller, 1994). This need for disaggregate, dynamic modelling has long been recognized within the transportation research community, dating back to at least the late 1960s and early 1970s.
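The aggregation bias illustrated in Figure 13.2 can be demonstrated numerically. The toy response function and income values below are my own illustration, not the chapter’s: because P(transit | income) is non-linear, the probability evaluated at the zonal average income differs from the average of the individual probabilities.

```python
import math

def p_transit(income):
    # Hypothetical logistic transit-choice probability, declining
    # non-linearly with income (in $000s).
    return 1.0 / (1.0 + math.exp(0.08 * (income - 40.0)))

incomes = [20.0, 30.0, 40.0, 90.0, 120.0]        # five individuals in a 'zone'
true_share = sum(p_transit(i) for i in incomes) / len(incomes)   # ~0.41
avg_income = sum(incomes) / len(incomes)                          # 60.0
biased_share = p_transit(avg_income)                              # ~0.17
```

Inserting the zonal average income into the disaggregate response function understates transit usage by more than half in this example, which is exactly the bias labelled B2 in Figure 13.2.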
Figure 13.2. Aggregation bias in transit mode choice modelling. (The figure plots P(transit | Income) against income; its annotations note that aggregate models estimated using average values will be biased (B1), and that inserting average values into a disaggregate model will yield biased results (B2).)

Translation of this insight into robust,
operationally tractable microsimulation models, however, was long slowed by inadequate data, theory and computing power, and by a relatively conservative professional practice. Beginning with traffic simulation models and, more recently, models of travel demand, this has changed over the past two decades. As discussed throughout this chapter, a wide variety of transportation-related microsimulation models now co-exist with more aggregate models, with microsimulation modelling comprising a continually increasing share of the state of good practice in the field.

Although the four-step travel demand forecasting model system is methodologically out-dated, the modelling problem it was designed to address remains: we need to predict travel demand in terms of the number of trips occurring within a large urban region, by purpose, by time of day, by origin and destination and by mode. Questions concerning the level of spatial, temporal and socio-economic fidelity (extent of disaggregation and level of representational detail) also remain as important model design challenges. Assumptions concerning the extent to which the system equilibrates over time, the existence (or not) of path dependencies and the extent to which processes can/should be modelled deterministically or stochastically (Miller & Salvini, 2002) are also of continuing importance.

The current state of the art in urban passenger transportation microsimulation modelling is discussed in the following three sections. Section 13.3 deals with microsimulation of travel demand (effectively the first three steps in the traditional four-step process); Section 13.4 deals with the simulation of route choice and network performance (the ‘fourth step’). This division of topics reflects the fact that these two types of models use quite different methods, have evolved in parallel but generally quite separate streams of work and, generally, have been implemented in relatively separate software packages.
Section 13.5 deals with the population (agent) synthesis problem, which is an essential precursor to travel demand simulation. The chapter then concludes with a brief summary and a few thoughts concerning future directions for the field.

13.3. Microsimulating travel demand

No one standard method exists for microsimulating travel demand. All models, however, do start with a few basic propositions:

• Travel is a derived demand; that is, people do not travel for the sake of travel alone but rather to engage in activities that are dispersed in space and time. Thus, an activity-based approach is required, in which the need to participate each day in activity episodes of specific types (work, school, shopping, etc.) with specific locations, start times and durations is viewed as the ‘generator’ of travel. The daily activity pattern that arises from people’s decisions concerning where and when to engage in out-of-home activities determines the timing and destinations of trips,
along with conditioning the usage of various modes of travel to execute these trips (e.g. need a car to go shopping; can’t get from one activity to the next in time if using transit). In this framework, trips can be thought of as simply another type of activity to which time and other resources (e.g. money) need to be allocated (Miller, 2005).
• Travel is constrained in both time and space by the need to be at certain locations at certain times, by the speed at which a person can travel from point to point using different travel modes and by other constraints (income, lack of driver’s licence, conflicts among household members for use of a car, etc.). This is illustrated in Figure 13.3, in which a time-space prism is constructed that defines feasible locations that might be visited for a shopping episode, given previously scheduled activities (Hägerstrand, 1970; Pendyala, Yamamoto, & Kitamura, 2002; Wang & Miller, 2014).
• Within these context-dependent constraints, trip-makers are autonomous agents who choose their activity patterns and associated travel in ways that they perceive as being most beneficial with respect to their personal goals and objectives. These objectives, and associated tastes and preferences, will generally vary from person to person in both systematic (e.g. high income people may systematically value transit differently from poor people) and idiosyncratic ways (e.g. I may simply not ‘like’ to bicycle). Thus, an agent-based (individual trip-maker) approach is required in order to capture this inherent heterogeneity in behaviour within the population (Miller, 2005).

Figure 13.3. Time-space prism constraints on feasible shopping episode locations. Source: Wang and Miller (2014).
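The kind of time-space prism feasibility test illustrated in Figure 13.3 can be sketched very simply. The function and numbers below are my own illustration (not TASHA’s or any cited model’s code): a candidate shopping location is inside the prism only if travel to it, a minimum stay, and travel onward all fit within the free window between two fixed activities.

```python
# Hägerstrand-style time-space prism check (illustrative sketch).
def in_prism(window_start_min, window_end_min,
             travel_to_min, travel_on_min, min_stay_min):
    """Return True if a stop of `min_stay_min` minutes at the candidate
    location fits between the end of the previous fixed activity and
    the start of the next one."""
    free_time = window_end_min - window_start_min
    return travel_to_min + min_stay_min + travel_on_min <= free_time

# Work ends 17:00, next commitment starts 18:30 (a 90-minute window);
# 20 min there, 25 min onward, 30 min shopping: 75 <= 90, so feasible.
feasible = in_prism(17 * 60, 18 * 60 + 30, 20, 25, 30)
```

An operational model would run such a check, with mode-specific travel times, over every candidate location to delimit the feasible choice set before any choice model is applied.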
Given these very broad observations, most current operational models adopt some form of random utility maximization (RUM), in which it is assumed that decision-makers are rational utility maximizers. That is, if person t needs to choose an alternative i from a set of feasible alternatives C_t (e.g. a mode of travel for a morning trip from home to work), then it is assumed that t has a utility or preference function for each alternative i, U_it, and that t will choose that alternative i* that maximizes his/her utility; that is, i* is chosen if and only if (Ben-Akiva & Lerman, 1985; Train, 2009):
U_i*t > U_jt for all j ≠ i*, i* and j ∈ C_t    (13.1)
As observers (modellers) of this decision we cannot observe/measure U_it; it is a latent variable. The best we might be able to do is assess the probability that condition (13.1) is true. Hence the probability that t chooses i*, P_i*t, is:

P_i*t = Prob(U_i*t > U_jt for all j ≠ i*, i* and j ∈ C_t)    (13.2)
Without loss of generality we can assume that:

U_it = V_it + ɛ_it    (13.3)
where V_it is a systematic (or observable or average) utility of alternative i for person t and ɛ_it is the error between our systematic estimate of person t’s utility for alternative i and his/her actual, individually idiosyncratic utility. The systematic utility can be expressed as an analytic function of observable explanatory variables, such as attributes of alternative i (travel time, cost, etc.) and person t (income, age, etc.). Most typically (but not absolutely necessarily) this takes the form of a linear-in-the-parameters function:

V_it = βX_it    (13.4)
where β is a row vector of parameters and X_it is a column vector of explanatory variables. Given Eq. (13.3), Eq. (13.2) can be rewritten as:

P_i*t = Prob(V_i*t + ɛ_i*t > V_jt + ɛ_jt for all j ≠ i*, i* and j ∈ C_t)
      = Prob(ɛ_jt − ɛ_i*t ≤ V_i*t − V_jt for all j ≠ i*, i* and j ∈ C_t)    (13.5)
Eq. (13.5) is a cumulative distribution function for the vector of random variables {ɛ_jt − ɛ_i*t}. If the distribution of the ɛ’s is known or can be reasonably assumed and if, given this distribution, Eq. (13.5) can be analytically or numerically evaluated, then the required choice probabilities can be evaluated.
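One way to evaluate Eq. (13.5) numerically is by simulation: draw the error terms, compute each alternative's utility, and count how often each alternative wins. The sketch below uses invented systematic utilities and is purely illustrative; with iid Type I Extreme Value (Gumbel) errors the simulated shares converge to the closed-form multinomial logit probabilities exp(V_i)/Σ_j exp(V_j) discussed in the text.

```python
import math
import random

def simulate_choice_probabilities(V, draws=100_000, seed=42):
    """Monte Carlo evaluation of Eq. (13.5) with iid Gumbel errors:
    returns the frequency with which each alternative maximizes U."""
    rng = random.Random(seed)
    wins = [0] * len(V)
    for _ in range(draws):
        # Standard Gumbel draw via inverse CDF: -ln(-ln(u))
        U = [v - math.log(-math.log(rng.random())) for v in V]
        wins[max(range(len(U)), key=U.__getitem__)] += 1
    return [w / draws for w in wins]

V = [1.0, 0.5, -0.2]  # hypothetical systematic utilities for 3 alternatives
p_sim = simulate_choice_probabilities(V)
p_mnl = [math.exp(v) / sum(math.exp(u) for u in V) for v in V]
# p_sim matches p_mnl to within sampling error (roughly 0.003 here)
```

The same simulation machinery, with normal rather than Gumbel draws, is essentially what is required to evaluate the multinomial probit model mentioned below, which has no closed form.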
Many operational models based on Eq. (13.5) exist, depending on the distributional assumption made. The three most common models in operational practice are:

(1) If the ɛ’s are assumed to be independently and identically distributed (iid) Type I Extreme Value, it can be shown that Eq. (13.5) can be expressed as an analytically closed-form equation, the so-called multinomial logit (MNL) function:
P_i*t = exp(V_i*t) / Σ_j exp(V_jt)    (13.6)
(2) If the ɛ’s are assumed to be normally distributed, then a multinomial probit model results. Probit models are theoretically attractive since they can (in principle at least) have arbitrarily complex covariance structures. But they cannot be expressed in an analytical closed form and so are very computationally expensive and cumbersome to evaluate; simulation methods are usually required to compute choice probabilities. Parameter estimation is particularly challenging. As a result, general multinomial probit models are rarely used in practice unless a compelling theoretical reason for doing so exists.

(3) If the ɛ’s are assumed to be generalized extreme value (GEV), then a variety of model functional forms can be derived, depending on detailed assumptions concerning the GEV distribution’s parameters. By far the most common GEV model in practice is the nested logit model (NL). A typical NL application might involve the joint modelling of shopping destination (d) and travel mode (m). The NL model for this joint choice is given by:

P_dmt = P_m|dt P_dt    (13.7)
P_dt = exp(V_dt + ϕI_dt) / Σ_d′ exp(V_d′t + ϕI_d′t)    (13.8)
P_m|dt = exp(V_m|dt / ϕ) / Σ_m′ exp(V_m′|dt / ϕ)    (13.9)
I_dt = ln[Σ_m exp(V_m|dt / ϕ)]    (13.10)
Although mathematically a model of joint choice across two or more ‘choice dimensions’, as shown by Eqs. (13.7)–(13.10) this joint choice probability decomposes into a hierarchical or ‘nested’ combination of a ‘lower-level’ conditional mode choice probability given destination and an ‘upper-level’ marginal destination choice probability. The inclusive value or so-called logsum term I_dt is the expected maximum utility of the lower-level mode choice, given the upper-level destination choice, and provides a ‘feedback’ from the lower-level mode choice to the upper-level destination choice (i.e. the higher the expected utility of mode choice for a given destination, the more likely it is that this destination will be selected). The ‘scale parameter’, ϕ, defines the extent to which the upper-level choice
is influenced by the lower-level choice expected utility. The analytically closed form of the NL model (in particular the fact that each level of the model is logit in functional form) and the fact that it can generalize to multiple dimensions/levels has made it a very attractive tool for modelling a variety of complex choice situations.

The discrete choice modelling literature is vast and involves applications in a wide variety of fields beyond travel demand. It also involves alternatives to RUM, including models based on prospect theory (Avineri & Bovy, 2008; Kahneman & Tversky, 1979) and regret minimization (Chorus, 2012; Chorus, Rose, & Hensher, 2013), although few of these models have yet been used in operational, large-scale travel demand modelling applications. The wide range of RUM model applications reflects their relative ease of use (more often than not they are analytically closed form), the availability of standardized, efficient parameter estimation software, and the empirical fact that they have been found to perform robustly and credibly in numerous applications. Key concerns in the use of these models are whether utility maximization is an appropriate decision-making rule and the ability to capture covariance among choice alternatives appropriately within a given model formulation.

In addition to RUM-based models, various rule-based or computational process models (CPM) also exist, in which various non-RUM decision rules are employed, such as elimination by aspects (Ben-Akiva & Bierlaire, 1999; Recker & Golob, 1979; Tversky, 1972) and bounded rationality/satisficing (Fujii & Gärling, 2002; Kitamura & Fujii, 1998; Simon, 1957; Williams & Ortuzar, 1982, etc.). It has been argued that CPMs may better represent actual decision-making processes than RUM-based models (Doherty, Miller, Axhausen, & Gärling, 2001; Kwan & Golledge, 1997).
The challenge with these models, however, is how to develop the rules (and any associated parameters) in a robust, generalizable manner. Perhaps largely because of this issue, relatively few CPM models currently exist in practice, but operational models include TASHA, which, as discussed further below, combines RUM and rule-based components (Miller & Roorda, 2003), and ALBATROSS (Arentze & Timmermans, 2000, 2004a, 2004b). As perhaps the best example of the CPM approach, ALBATROSS determines the activity schedule for a given day for a given household’s members, given the household’s marital status, residential location, number of children and workers’ job locations, all of which are assumed to be longer-run decisions. The model has a sequential structure, in which work episodes (and associated travel modes) are assigned first to each person’s schedule and then ‘lower priority’, more ‘discretionary’ activity episodes are generated and assigned to the schedule, subject to a set of context-specific constraints (situational, institutional, spatial-temporal, etc.), providing that the episode can feasibly ‘fit’ into the schedule. Scheduling decisions (episode start time, episode location, travel mode,
etc.) are made using probabilistic draws from a set of decision trees. A unique feature of ALBATROSS is that these decision trees have been statistically derived from observed activity/travel participation data using the CHAID (Kass, 1980) method, which partitions a set of people into maximally homogeneous groups with respect to a given decision variable (e.g. choice of travel mode) as a function of a set of ‘conditioning’ variables. CHAID maximizes a Chi-square measure of dissimilarity between groups (Arentze & Timmermans, 2004b).

In addition to the RUM versus rule-based dichotomy, a second key methodological distinction among activity-based microsimulation models is whether they are tour-based or activity scheduling models. Tour-based models focus on choosing a daily set of trips for each person, where these trips are organized into one or more tours or trip-chains, where a tour is a connected set of linked trips whose first origin and last destination is the same location (usually the trip-maker’s home). These models generally involve a multi-level NL model to represent the combination of decisions that must be made to determine the tour characteristics. Tour-based models are the most common form of travel demand microsimulation in operational practice today, especially in the US. Operational or near-operational models exist in San Francisco, Denver, Chicago, Atlanta, Columbus, Sacramento, Tel Aviv and Stockholm, among others. The Atlanta Regional Commission (ARC) model is a recent and representative implementation of this class of models (ARC, 2012). Figure 13.4 presents an overview of the key ARC model components, which consist of:

• Synthesis of the population of households and individuals to be modelled. Population synthesis is discussed further below.
• Determination of the work and school locations for each worker and student and the car ownership level for each household.
• Specification of the type of daily activity pattern each individual will engage in for the simulated day. Mandatory activity patterns involve participating in work and school episodes. Non-mandatory patterns can involve joint tours engaged in by two or more household members, individual tours and at-work sub-tours. ‘Home’ patterns do not involve out-of-home activities (and hence generate no trips) and are not modelled in detail. For each type of daily activity pattern, the number of tours (frequency) and the time of day each tour starts are determined.
• For each tour, a primary ‘tour mode’, the number of stops (activity episodes) and the location of each stop are determined.
• For each trip on each tour, the mode used for this trip and the parking location and cost for each auto trip are determined.
• Once all trips for all tours for all persons have been determined, they are assigned to the road and transit networks.
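The sequential structure of the steps above can be sketched as a chain of Monte Carlo logit draws, each conditioned on the decisions upstream. The alternative names and utility values below are invented for illustration; this is not the ARC implementation.

```python
import math
import random

def logit_draw(utilities, rng):
    """Draw one alternative with MNL probability exp(V)/sum(exp(V))."""
    expv = [(alt, math.exp(v)) for alt, v in utilities.items()]
    r = rng.random() * sum(ev for _, ev in expv)
    cum = 0.0
    for alt, ev in expv:
        cum += ev
        if r <= cum:
            return alt
    return expv[-1][0]

rng = random.Random(7)
# Step 1: daily activity pattern (hypothetical utilities).
pattern = logit_draw({'mandatory': 1.2, 'non-mandatory': 0.3, 'home': -0.5}, rng)
# Step 2: number of tours, conditional on the pattern.
if pattern == 'home':
    tours, modes = 0, []
else:
    tours = logit_draw({1: 0.8, 2: 0.0, 3: -1.0}, rng)
    # Step 3: a primary mode per tour; real models condition these
    # utilities on car ownership, destination, times and costs.
    modes = [logit_draw({'auto': 0.8, 'transit': 0.2, 'walk': -1.0}, rng)
             for _ in range(tours)]
```

Applying such a chain to every synthetic person in a region, and then assigning the resulting trips to the networks, is what makes these model systems computationally demanding.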
Eric J. Miller
Figure 13.4. The ARC tour-based model system. Source: Atlanta Regional Commission [ARC] (2012).
Thus, the daily travel of every person in the region is built up through a sequence of decisions that incrementally determine the purpose, timing, destination, mode and route for each trip within a tour-based context. Activity scheduling models focus more directly on modelling actual activity patterns, with travel being the emergent outcome of the need to
move between home and out-of-home activity episode locations. A daily schedule is built using some combination of RUM-based and rule-based algorithms. Trips are then generated to move each person from one activity episode location to another. Once these trips have been generated, their travel modes and, eventually, route choices can be determined. Although not currently as numerous as tour-based models, operational activity scheduling models include: ALBATROSS (The Netherlands; Arentze & Timmermans, 2000, 2004a, 2004b), TASHA (Toronto; Miller & Roorda, 2003; Roorda, Miller, & Habib, 2008), PCATS (Kitamura, Chen, Pendyala, & Narayana, 2000), FAMOS (Pendyala, Kitamura, Kikuchi, Yamamoto, & Fujii, 2005) and CEMDAP/SimAgent (Los Angeles; Bhat, Guo, Srinivasan, & Sivakumar, 2004). Toronto’s TASHA (Travel/Activity Scheduler for Household Agents) is reasonably representative of this class of models. As shown in Figure 13.5, TASHA generates the number of individual activity episodes by type of activity and the desired duration and start time for each episode. Episode generation currently involves Monte Carlo draws from detailed empirical probability frequency distributions but could be replaced by parametric econometric episode generation models if these are available. The generated episodes are then sequentially scheduled within each individual’s provisional schedule, with start times and durations being adjusted if needed to maintain schedule feasibility. For each episode
Figure 13.5. The TASHA activity scheduling model system: (a) draw activity frequency from the marginal PDF; (b) draw activity start time from the feasible region of the joint PDF; (c) draw activity duration from the feasible region of the joint PDF; schedule activity episodes into a daily schedule; apply tour-based mode choice to each trip chain.
scheduled, a trip is generated to move the person from the previous episode location to the newly scheduled episode location and from this location to the next episode in the schedule. Home is the default location for all points in time for which an out-of-home activity has not been scheduled. Once each person’s daily activity schedule has been generated, the tours associated with this schedule are processed by a tour-based mode choice model that assigns travel modes to each trip. This tour-based mode choice model is a random utility probit model in which the random utility of the tour is simply the sum of the tour’s individual trip utilities. Unique features of TASHA are that:
• Conflicts among household members for usage of the household’s cars are explicitly resolved by allocating the available cars so as to maximize overall household utility (Figure 13.6).
• Complex within-household ridesharing is explicitly modelled, in which household drivers offer rides to household members in cases in which household travel utility is thereby increased, and joint household events can involve ridesharing among the participating household members (Figure 13.7).
These features of TASHA illustrate the power of agent-based microsimulation in that these sorts of behaviours are essentially impossible to model in a credible way in more aggregate model formulations such as the traditional four-step process. Generally speaking, activity-based microsimulation models replace the first three stages of the traditional four-step process; that is, they ‘generate’
Figure 13.6. Household auto usage conflict resolution in TASHA. Notes: TASHA assigns household vehicles to drivers based on the overall household utility derived from the vehicle usage. Drivers not allocated a car must take their second-best mode of travel.
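A minimal sketch of the household car-allocation logic of Figure 13.6, assuming simple additive utilities (all values hypothetical): enumerate feasible assignments of the available cars to drivers and keep the allocation with the highest total household utility, with unallocated drivers falling back to their second-best mode.

```python
# Hedged sketch of TASHA-style household vehicle allocation (Figure 13.6).
# Utilities and person labels are hypothetical illustrations only.

from itertools import combinations

def allocate_cars(drive_utils, second_best_utils, n_cars):
    """Return (allocated drivers, total utility): which drivers get a car so
    that total household utility is maximized. Drivers without a car fall
    back to their second-best mode utility."""
    drivers = list(drive_utils)
    best_set, best_util = set(), sum(second_best_utils.values())
    for k in range(1, min(n_cars, len(drivers)) + 1):
        for combo in combinations(drivers, k):
            util = sum(drive_utils[d] if d in combo else second_best_utils[d]
                       for d in drivers)
            if util > best_util:
                best_set, best_util = set(combo), util
    return best_set, best_util

# Three conflicting with-car tours, one available car (hypothetical values).
drive = {"p1": 4.0, "p2": 2.5, "p3": 3.0}
fallback = {"p1": 1.0, "p2": 2.0, "p3": 1.5}
winners, utility = allocate_cars(drive, fallback, n_cars=1)
```

Here person p1 has the largest gain from driving, so the single car goes to p1 and the others take their second-best modes.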
Figure 13.7. Household ridesharing in TASHA. Notes: Within-household ridesharing is explicitly handled within TASHA. Drivers will ‘offer’ rides to household members if a net gain in household utility is obtained and feasibility criteria are met.
trips over the course of the day and determine their destinations and modes. They generally then depend on a separate network model to ‘assign’ trips by mode to paths through the road and transit network and, in so doing, determine the travel times, costs, etc. that would arise given the predicted flows. These flows, however, have been computed based on assumed values of service levels. Thus, it is generally necessary to iterate between the ‘demand model’ and the ‘network performance’ model in order to achieve an internally consistent set of travel demands and network service levels. The network performance model used may or may not itself be a microsimulation model and may or may not co-exist within the same piece of software as the demand model. These models are discussed in more detail in the next section. A small handful of exceptions to this very loose ‘coupling’ between demand and network assignment/performance exist that treat travel demand, route choice and network performance in a more integrated, holistic software system. One example of this is the SimTravel model (Figure 13.8) developed by Pendyala et al. (2013), in which the OpenAMOS activity-based travel demand model and the MALTA dynamic traffic assignment (DTA) model (Chiu & Villalobos, 2008) are integrated within a single software system so that near-continuous (every six seconds) reconciliation between activity-travel plans and current roadway travel times occurs (see Figure 13.9). An often-cited barrier to more rapid and widespread adoption of travel demand microsimulation models is the lack of robust, standardized software that can be readily used across many applications. As implied by the plethora of model names listed above, current models almost always involve the development of custom software by both academics
Figure 13.8. SimTravel framework for integrated activity/travel and network performance modelling. Source: Pendyala et al. (2013).
Figure 13.9. Route choice and network performance model structure (network loading, path set update, path adjustment and convergence check within each assignment interval).
and commercial firms that may not transfer readily from one application to another. ‘Conventional’ agent-based microsimulation modelling languages such as NetLogo (http://ccl.northwestern.edu/netlogo/), RePast (http://repast.sourceforge.net/) and SWARM (Iba, 2013) generally have not been used to develop operational transportation models due to the highly specialized nature of these models and their computational intensiveness. As the field continues to mature, however, a growing amount of open-source, generalized software for transportation microsimulation modelling is gradually becoming available, although to date applications of such software by the practitioner community have been somewhat limited. Examples include MATSim (http://matsim.org/), OPUS (http://www.urbansim.org/downloads/manual/dev-version/opususerguide/), POLARIS (https://wiki.anl.gov/polaris) and XTMF (TMG, 2012). The long-run impact of such software on operational modelling remains to be seen.
13.4. Microsimulating route choice and network performance

A network simulation model involves three basic components:
(1) Path choice (routing).
(2) Simulating vehicle (person) movements through chosen paths.
(3) A procedure for determining algorithm convergence.
Road, transit and walk (and bicycle) networks are generally simulated using separate models, since both the path choice and the vehicle (person) movement simulation processes are quite different. These three applications are discussed in turn in the following sub-sections.
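The three components can be sketched as a single loop. Everything here (the two-link toy network, the linear congestion function, the successive-averages smoothing) is an illustrative assumption, not any particular package’s algorithm.

```python
# Skeleton of the three network simulation components: (1) path choice,
# (2) loading trips onto chosen paths, (3) a convergence check.
# Link names, cost function and smoothing are hypothetical toys.

def simulate_network(trips, times, gap_threshold=0.01, max_iters=50):
    paths = {}
    for k in range(1, max_iters + 1):
        # (1) Path choice: each trip picks the currently fastest link.
        paths = {trip: min(times, key=times.get) for trip in trips}
        # (2) Network loading: congested time grows with assigned flow.
        flows = {link: 0 for link in times}
        for link in paths.values():
            flows[link] += 1
        loaded = {link: 10.0 * (1.0 + 0.5 * flows[link]) for link in times}
        # Method-of-successive-averages smoothing to damp oscillation.
        new_times = {link: times[link] + (loaded[link] - times[link]) / k
                     for link in times}
        # (3) Convergence check on the relative change in link times.
        gap = max(abs(new_times[l] - times[l]) / times[l] for l in times)
        times = new_times
        if gap < gap_threshold:
            break
    return paths, times

# Toy run: four identical trips choosing between two parallel links.
paths, times = simulate_network([0, 1, 2, 3], {"A": 10.0, "B": 12.0})
```

With identical trips and all-or-nothing choice, the raw loop would oscillate between the two links; the successive-averages step is one standard way of damping that oscillation towards a stable set of link times.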
13.4.1. Road network modelling

Road network models are by far the best developed type of network simulation model, with a very large research community and a variety of commercial and open-source software systems available for use. These models divide into two broad types:
(1) High-fidelity network microsimulators that simulate traffic operations on roadway segments (typically a corridor) in great detail.
(2) Mesoscopic DTA models.
The focus of network microsimulators is on generating detailed simulations of individual vehicle movements through a network with very high fidelity in both time (time steps are typically less than one second) and space (vehicle locations in continuous space at each time step). Detailed representations of roadway geometry, traffic signal timings, etc. are required to support these calculations. Travel demand is usually assumed to be a fixed, exogenous input to the model in terms of origin-destination (O-D) trips by time slice (e.g. the number of trips arriving at each analysis area entry point destined for each exit point in each T minutes of the simulation, where T may typically vary from 5 to 15 minutes). Paths from origin to destination are also often assumed to be fixed, although some ‘way-finding’ logic may exist for routing vehicles through the network. Because of the very detailed nature of the simulations, the computational burden is high and, as a result, these models are not generally suitable for modelling large (e.g. region-wide) networks or for modelling long time periods (e.g. an entire day). They are generally used for operational analysis and design applications for a specific network segment (typically a corridor). In such applications, however, when well calibrated, these models have been found to replicate traffic flow behaviour very well, and it is common best practice within transportation operations and planning agencies to use them in a wide variety of applications. Numerous network microsimulation packages exist, such as Paramics (http://www.paramicsonline.com/), Aimsun (http://www.aimsun.com/wp/), Vissim (http://www.vissim.com/), Open Traffic (Tamminga et al., 2012) and CORSIM (http://www-mctrans.ce.ufl.edu/featured/TSIS/Version5/corsim.htm).
Two key models control the movement of each vehicle through the network: car following and lane changing (gap acceptance). Car following models control each vehicle’s speed (and hence location) within a lane. In a car following model, each vehicle controls its speed so as to maintain its position appropriately relative to the car in front of it. In each time step, the vehicle will accelerate, decelerate or remain at the current speed, depending on the rules embedded in the model. A stimulus-response model is usually assumed, in which the driver’s response is assumed to be proportional to a stimulus provided by the vehicle in front. A very
common model in operational use is the General Motors car following model (Rothery, 2002), which takes the general form:

a_{j+1}(t+1) = λ[v_j(t) − v_{j+1}(t)]   (13.11)

λ = λ_0 v_{j+1}(t)^m / [x_j(t) − x_{j+1}(t)]^l   (13.12)

where t is time, subscript j indicates the lead vehicle and j + 1 indicates the trailing vehicle that is adjusting its rate of acceleration (a) in the next time step in response to the difference in the two vehicles’ velocities (v). λ is the sensitivity of the response to the experienced stimulus, which is a function of the trailing vehicle’s current speed and the current distance between the two vehicles (where x_j(t) is the current location of vehicle j at time t). λ_0, m and l are parameters that are calibrated to reproduce typical driver behaviour and generate stable traffic flows. A lane changing model controls the movement of the vehicle from one lane within the roadway to another. Vehicles need to change lanes so as to manoeuvre around slower-moving vehicles, merge from one traffic stream to another (e.g. at a freeway access ramp, or when turning into a new traffic stream at an intersection), or to position themselves to be able to make desired turning movements at intersections and freeway interchanges. The execution of a safe lane change requires a gap acceptance model that determines whether a suitable gap in the ‘target’ lane is available to change into and that may cause the vehicle to speed up or to slow down so as to be able to make the desired movement. A wide variety of approaches to modelling lane changing exist, the details of which are beyond the scope of this chapter to cover. Reviews of these models can be found in Laval and Daganzo (2006) and Moridpour, Sarvi, and Rose (2010), among others. The focus of DTA models is on modelling route choice through the network (given fixed O-D trips by time slice) as a dynamic equilibrium problem in which each driver attempts to find the minimum experienced-time path through the network for his/her trip, given the network link travel times that arise due to congestion, queuing, etc. Equilibrium occurs when no driver can unilaterally change path through the network and improve his/her travel time.
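A numerical sketch of the car-following rule of Eqs. (13.11)–(13.12); the parameter values (λ_0 = 1, m = 0, l = 1) are illustrative, not calibrated.

```python
# Sketch of the General Motors car-following model, Eqs. (13.11)-(13.12).
# Parameter values are illustrative only, not calibrated to real traffic.

def gm_acceleration(x_lead, v_lead, x_trail, v_trail,
                    lam0=1.0, m=0.0, l=1.0):
    """Acceleration of the trailing vehicle in the next time step:
    sensitivity (Eq. 13.12) times the speed difference (Eq. 13.11)."""
    sensitivity = lam0 * v_trail ** m / (x_lead - x_trail) ** l
    return sensitivity * (v_lead - v_trail)

def step(x_lead, v_lead, x_trail, v_trail, dt=1.0):
    """Advance the trailing vehicle one time step (lead car unchanged)."""
    a = gm_acceleration(x_lead, v_lead, x_trail, v_trail)
    v_new = v_trail + a * dt
    return x_trail + v_trail * dt, v_new

# Trailing car starts 50 m behind and 5 m/s faster; it should decelerate.
x, v = step(x_lead=100.0, v_lead=20.0, x_trail=50.0, v_trail=25.0)
```

Because the trailing vehicle is closing on the leader, the speed difference is negative and the model produces a deceleration, which is the stabilizing stimulus-response behaviour described above.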
This is a conceptually straightforward extension of the static equilibrium concept used in standard four-step model systems, but with link (and hence path) travel times now dynamically depending on when the vehicle is traversing the link and what the travel conditions are on that link at that time. Excellent primers on DTA are presented in TRB (2010) and FHWA (2012). In implementation, however, the DTA problem is very complex, both mathematically and computationally. As a result, vehicle movements within the network are modelled with lower fidelity than in the network microsimulators so as to keep computing times within reasonable limits. Instead of detailed car following and lane changing rules, a mesoscopic
Figure 13.10. Deterministic queuing model for a single intersection approach. (Cumulative vehicle arrivals at average rate v and departures at saturation flow rate s; g = green phase length, r = red phase length, C = g + r = cycle length; total delay per cycle = area between the arrival and departure curves.)
approach is adopted in which vehicles are typically assumed to travel at constant speed along a link until they are delayed by queues that form at street intersections, freeway interchanges and other ‘bottlenecks’ in the system. These queues are modelled using simple deterministic queuing models, as illustrated in Figure 13.10 for the case of a single approach at a standard intersection. Solving for a dynamic user equilibrium is computationally challenging and convergence to the equilibrium solution in a large, complex network can usually only be approximately achieved. A variety of convergence criteria are used in models, but the most widely recommended criterion is the relative gap, which is defined as (Chiu et al., 2011):

RelGap = [Σ_t Σ_{i∈I} (Σ_{k∈K_i} f_k^t τ_k^t) − Σ_t Σ_{i∈I} d_i^t u_i^t] / Σ_t Σ_{i∈I} d_i^t u_i^t   (13.13)

where t is a departure time slice, i indicates a given origin-destination (O-D) pair, k is a feasible path for O-D pair i, K_i is the set of feasible paths for O-D pair i and:
f_k^t = flow departing at time t using path k
τ_k^t = experienced travel time on path k for flow departing at time t
d_i^t = total flow for O-D pair i departing at time t
u_i^t = shortest-path travel time for O-D pair i for flow departing at time t
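Eq. (13.13) translates directly into code; the flows and times below are hypothetical toy values for a single O-D pair and time slice.

```python
# Direct transcription of the relative gap of Eq. (13.13): excess experienced
# travel time over the shortest-path benchmark, as a fraction of the
# benchmark. All input values are hypothetical.

def relative_gap(path_flows, path_times, od_demand, od_shortest):
    """path_flows/path_times are keyed by (t, i, k); od_demand/od_shortest
    are keyed by (t, i) for time slice t and O-D pair i."""
    experienced = sum(path_flows[key] * path_times[key] for key in path_flows)
    benchmark = sum(od_demand[key] * od_shortest[key] for key in od_demand)
    return (experienced - benchmark) / benchmark

# One O-D pair "AB", one departure slice, two used paths (toy numbers).
flows = {(0, "AB", 1): 60.0, (0, "AB", 2): 40.0}
times = {(0, "AB", 1): 10.0, (0, "AB", 2): 12.0}
demand = {(0, "AB"): 100.0}
shortest = {(0, "AB"): 10.0}
gap = relative_gap(flows, times, demand, shortest)
```

Here 40 of the 100 trips experience 2 minutes more than the 10-minute shortest path, so the relative gap is 80/1000 = 0.08; convergence would be declared once this value falls below the user-specified threshold.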
The relative gap is thus the aggregate amount of travel time experienced by all trip-makers on all paths that exceeds the travel time that would be experienced if everyone used their shortest-time paths (which is assumed to be the trip-makers’ desired objective), expressed as a fraction of the aggregate shortest-path times. Convergence is assumed when the relative gap is less than a user-specified threshold value. A fundamental issue in designing a DTA is the definition of path times. Two approaches are possible, based on instantaneous travel times or experienced travel times. Figure 13.11 illustrates the two approaches. As shown in this figure, the instantaneous travel time for a given trip is calculated based on the link travel times that exist at the time the trip commences; that is, the trip-maker is assumed to assess the path time based on the link times that exist at the departure time, and this assessment is not updated as the trip proceeds. Experienced path travel times, on the other hand, consist of the sum of the link travel times that are actually experienced as the traveller moves through the path; that is, the calculation recognizes that it takes time to reach each successive link in the path and, as a result, that the travel time experienced on each link will generally
Figure 13.11. Instantaneous versus experienced path travel times: (a) instantaneous travel time calculation; (b) experienced travel time calculation.
Source: Adapted from Chiu et al. (2011).
have changed relative to the travel time that prevailed at the start of the trip. Models exist based on both assumptions; in addition, multiple user class (MUC) models also exist that permit different types of users with different travel time perception models to co-exist within the population of trip-makers (Chiu et al., 2011). As with network microsimulators, several operational DTA packages exist. Among the more common are DynaSmart-P (http://mctrans.ce.ufl.edu/featured/dynasmart/), Dynameq (http://www.inro.ca/en/products/dynameq/), DynusT (http://dynust.net/), TransModeler (http://www.caliper.com/transmodeler/) and MALTA (http://urbanmodel.asu.edu/intmod/presentations/MALTA%20Overview.pdf). For examples of recent DTA applications see FHWA (2012). While originally developed for (and still primarily applied to) operational design and real-time control problems, DTA models are increasingly being used to replace static route choice models in regional travel demand models. This remains, however, a computationally challenging application for large urban regions. Two model systems that have been explicitly designed to be computationally efficient at the large urban region, 24-hour scale are TRANSIMS (Barrett et al., 1995) and MATSim (http://matsim.org/). TRANSIMS and MATSim can also be differentiated from typical DTAs in that they are designed to be a full activity-based replacement for the four-step model system; that is, they are intended to be full travel demand and network modelling systems, not ‘just’ network assignment models. They are also fully agent-based in nature, in which the movements of individual persons are modelled in a multi-modal setting. Both follow the standard microsimulator design of Figure 13.10 in that they first ‘route’ each trip given a pre-determined origin, destination and start time and then simulate the execution of the trips along these routes within the network model.
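The instantaneous versus experienced path-time distinction contrasted above (cf. Figure 13.11) can be sketched as follows; the time-varying link times are illustrative, not taken from any particular network.

```python
# Sketch of instantaneous vs. experienced path travel time calculations.
# link_times[link][t] is the travel time when entering `link` at minute t;
# the data are hypothetical.

def instantaneous_time(link_times, path, depart):
    """Sum of link times as they stand at the departure minute."""
    return sum(link_times[link][depart] for link in path)

def experienced_time(link_times, path, depart):
    """Sum of link times actually encountered as the trip progresses."""
    t = depart
    for link in path:
        t += link_times[link][t]
    return t - depart

# Travel times for entering each of three links at minutes 0..3.
link_times = {1: [1, 2, 3, 4], 2: [1, 2, 3, 4], 3: [1, 2, 3, 4]}
inst = instantaneous_time(link_times, [1, 2, 3], depart=0)
expr = experienced_time(link_times, [1, 2, 3], depart=0)
```

With link times rising over the simulated period, the instantaneous calculation (3 minutes here) understates the experienced time (7 minutes), because later links are entered after congestion has grown — exactly the effect Figure 13.11 illustrates.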
But they then both achieve regional scale in their application by using simplified models of vehicle movements through the network: MATSim uses a version of the mesoscopic, deterministic queuing model described above, while TRANSIMS uses a novel cellular automata representation in which the roadway is segmented into 7-metre cells and vehicles move an integer number of cells in each one-second time step based on simplified car following and lane changing algorithms. Both models also employ a relatively simplified iterative procedure for ‘stabilizing’ the system by updating path choices in response to updated path travel times. While this also tends to speed up calculations, the solutions achieved are not a guaranteed dynamic user equilibrium, which may represent a weakness relative to more rigorous DTA solutions. Originally developed by Los Alamos National Laboratory in the United States and then tested in a series of demonstration projects, TRANSIMS is now open-source software that is used in various applications in the United States. Although, as noted above, originally intended to be a comprehensive, multi-modal modelling package as a full
replacement for the four-step process, to date only the TRANSIMS router and network microsimulator components are generally used in application. No robust transit network model has ever been developed, and the TRANSIMS activity-based demand components are more prototype place-holders than truly operational models suitable for immediate application. MATSim has been developed by a consortium of European researchers, centred at the Technical University of Berlin and ETH Zurich. While still largely working in the research rather than the operational planning domain, increasingly practical MATSim applications currently exist in Berlin, Switzerland (Meister et al., 2010), Singapore, Tel Aviv (Bekhor, Dobler, & Axhausen, 2011) and Toronto (Gao, Balmer, & Miller, 2010), among others.

13.4.2. Transit network modelling

Transit route choice and network modelling is, if anything, more complicated than the road assignment problem. Transit path choice depends on a complex set of factors that include: access/egress walk times to/from the transit service; wait and transfer times; ‘in-vehicle’ travel times; fares (if competing routes/services have different fares); and typically difficult-to-quantify factors such as comfort, reliability and security. These service factors (wait times, in-vehicle travel times, etc.), in turn, depend on transit line service frequencies, transit vehicle operating speeds, vehicle dwell times at stops and stations, on-street delays due to traffic congestion, etc., all of which vary by time of day. Most operational transit assignment models make use of methods provided by commercial software packages such as Emme (http://www.inro.ca/en/products/emme/), TransCAD (http://www.caliper.com/tcovu.htm), Cube (http://www.citilabs.com/), Visum (http://vision-traffic.ptvgroup.com/en-us/products/ptv-visum/), etc.
While facilitating detailed representation of the transit network, these models are generally spatially rather aggregate (zone-based, centroid-to-centroid travel flows) and static in nature. In particular, transit vehicle operations are typically modelled based on average line speeds and headways, with, at most, fairly macro adjustments for roadway congestion effects. An early example of a microsimulation approach to transit assignment was the MADITUC model, in which individual trips were assigned to point-to-point paths through the network, albeit in a static fashion (Chapleau, 1986). MADITUC has been in operational use for service planning purposes in several Canadian cities since the 1980s. More recently, transit route choice microsimulation models have begun to emerge in research settings. These include the MATSim transit assignment procedure (Rieser, 2010) and MILATRASS (Wahba & Shalaby, 2011). In the case of MILATRASS, a reinforcement learning model is used to
develop, over a number of iterations of path choices, a stabilized set of path choice probabilities. Both of these packages also involve schedule-based transit vehicle performance simulation, in which the movement of individual transit vehicles along their routes is simulated based on their planned schedules (Wilson & Nuzzolo, 2008). Schedule-based assignment algorithms are also increasingly available in commercial software packages such as those listed above.
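The general idea of iteratively stabilizing path choice probabilities from experienced costs can be sketched as follows; this is loosely in the spirit of, but does not reproduce, the MILATRASS reinforcement learning procedure, and the update rule and all values are assumptions for illustration.

```python
# Hedged sketch of reinforcement-learning-style stabilization of transit
# path choice probabilities: weight is nudged towards paths that delivered
# lower experienced costs. Not MILATRASS's actual algorithm.

def update_probs(probs, costs, rate=0.2):
    """Shift choice probability towards the currently cheapest path."""
    best = min(costs, key=costs.get)
    new = {p: (1 - rate) * probs[p] + (rate if p == best else 0.0)
           for p in probs}
    total = sum(new.values())
    return {p: w / total for p, w in new.items()}

# Two candidate transit paths; pathA is consistently faster (toy costs).
probs = {"pathA": 0.5, "pathB": 0.5}
for _ in range(30):
    costs = {"pathA": 22.0, "pathB": 25.0}
    probs = update_probs(probs, costs)
```

Over the iterations, probability mass drifts towards the consistently cheaper path, converging to a stable set of choice probabilities of the kind the learning procedure described above is designed to produce.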
13.4.3. Pedestrian and bicycle modelling

Although walking is usually included as a mode of travel in most best-practice travel demand models, pedestrian movements and levels of service typically are not yet modelled in detail in operational models. An extensive microsimulation capability, however, exists for modelling pedestrian movements for a variety of site-specific design purposes (e.g. modelling pedestrian flows through a major rail terminal or airport) and for research purposes. A particularly important and challenging type of problem is the modelling of pedestrian movements at ‘mass events’ such as the Olympics and the annual Hajj in Mecca (Currie & Shalaby, 2012). Numerous commercial and open-source pedestrian modelling software packages exist that can be applied to many of these problems. These include: MassMotion (http://www.oasys-software.com/products/engineering/massmotion.html), Paramics (http://www.paramics-online.com/), Vissim (http://www.vissim.com/), Pedestrian Dynamics (http://www.pedestrian-dynamics.com/pedestrian-dynamics/pedestrian-dynamics.html), Legion (http://www.legion.com/), AnyLogic (http://www.anylogic.com/consulting/pedestrian-traffic-flows), SimTread (http://www.vectorworks.net/simtread/) and SimWalk (http://www.simwalk.com/), among others. Given the large number of agents (pedestrians) that typically need to be modelled and the detailed, largely rule-based calculations per agent per time step (1 second or less) involved, these models generally are quite computationally intensive. Key elements in these models include:
• Detailed specification of the walking environment, including geometry, obstacles, grades, etc.
• Specification of attributes for each agent that condition the agent’s decision-making (age, etc.). Many models assume a default, average agent.
• A model of agent decision-making (choice of speed and direction, avoidance of obstacles, etc.).
• A time-driven simulation structure in which in each time step each agent’s location and behaviour are updated in response to their environment (the locations and behaviours of the other agents in the system, nearby obstacles, etc.) and their desired goal (e.g. get to the airport departure gate).
• Collection of relevant system state data (individual and collective) for post-run analysis.
• Advanced visualization of the simulated pedestrian behaviour. Given the computational burden involved in the simulation, visualization is usually post-run, based on stored run data.
Several approaches to microsimulating pedestrian movements are used, including cellular automata (Bandini, Federici, & Vizzari, 2007; Blue & Adler, 2001), discrete choice (Robin, 2011) and social forces models (Sahaleh, Bierlaire, Farooq, Danalet, & Hänseler, 2012). Of these, various forms of social forces models are most commonly used in operational models. Social forces models use Newton’s equation to explain pedestrian i’s instantaneous acceleration at any point in time t as:

a_i(t) = F_i(t)   (13.14)

where a_i(t) is the person’s acceleration vector (in x-y space) and F_i(t) is the net ‘social force’ vector acting on the person at time t. This net social force is the resolution of a set of repulsive forces (avoid bumping into obstacles and other pedestrians) and attractive forces (move towards one’s destination, towards attractive objects such as shop windows or persons such as friends; attempt to maintain a desired walking speed) (Sahaleh et al., 2012):

F_i(t) = F_i^0(t) + Σ_{j≠i} F_{ij}(e_i, x_i(t) − x_j(t)) + Σ_{b∈B} F_{ib}(e_i, x_i(t) − x_b(t)) + Σ_{a∈A} F_{ia}(e_i, x_i(t) − x_a(t))   (13.15)

F_i^0(t) = (1/τ_i)(v_i^0 e_i − v_i(t))   (13.16)

where:
e_i = desired direction of travel for person i
x_k(t) = location of object/agent k at time t, where k = j is another person in the simulation, k = b is an obstacle to be avoided and k = a is an ‘attractor’ for person i
F_{ik} = force acting on i by object/agent k; repulsive for k = j or b, attractive for k = a
F_i^0(t) = acceleration (force) associated with person i’s desire to change from his/her current velocity vector v_i(t) towards his/her desired direction of travel and desired walking speed v_i^0 within a given ‘relaxation time’ τ_i
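One explicit update step of Eqs. (13.14)–(13.16) in two dimensions, with a single repulsive obstacle; the exponential repulsion form and all parameter values are simplifying assumptions for illustration.

```python
# One time step of a simplified social forces model, Eqs. (13.14)-(13.16).
# The repulsion form and parameters are illustrative assumptions.

import math

def restoring_force(v, e, v0, tau):
    """F_i^0 of Eq. (13.16): push velocity towards speed v0 along e."""
    return ((v0 * e[0] - v[0]) / tau, (v0 * e[1] - v[1]) / tau)

def repulsion(x, x_obs, strength=2.0, scale=1.0):
    """Simple exponential repulsion away from an obstacle at x_obs."""
    dx, dy = x[0] - x_obs[0], x[1] - x_obs[1]
    dist = math.hypot(dx, dy)
    mag = strength * math.exp(-dist / scale)
    return (mag * dx / dist, mag * dy / dist)

def step(x, v, e, x_obs, v0=1.3, tau=0.5, dt=0.1):
    f0 = restoring_force(v, e, v0, tau)
    fr = repulsion(x, x_obs)
    a = (f0[0] + fr[0], f0[1] + fr[1])      # Eq. (13.14): a_i(t) = F_i(t)
    v_new = (v[0] + a[0] * dt, v[1] + a[1] * dt)
    x_new = (x[0] + v_new[0] * dt, x[1] + v_new[1] * dt)
    return x_new, v_new

# Pedestrian at the origin walking towards +x; obstacle ahead, slightly
# to the right (negative y), so the net force deflects the walker left.
x, v = step(x=(0.0, 0.0), v=(1.0, 0.0), e=(1.0, 0.0), x_obs=(1.0, -0.5))
```

The restoring term nudges the walker back towards the desired 1.3 m/s speed while the obstacle's repulsion deflects the trajectory away from it; summing such pairwise forces over all nearby agents, obstacles and attractors gives the full Eq. (13.15).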
Bicycle usage usually receives even less attention than walking in conventional travel demand models, except in cities in which bicycle mode shares are high. In principle, bicycle movements can be microsimulated using methods similar to pedestrian movements, such as cellular automata models (e.g. Zhang, Ren, & Yang, 2013), including the modelling of mixed auto and bicycle flows (Dijkstra, 2012; Vasic & Ruskin, 2011). Commercial network modelling packages have also been applied to
modelling bicycle movements (e.g. Dijkstra, 2012; Gerstle, Morgan, & Yang, 2013). Given the growing importance of bicycling in many urban regions, it is expected that increasing attention will be paid to improving the modelling of this mode in the future.
13.5. Population (agent) synthesis

Travel demand microsimulation models operate upon a list of agents (persons and/or households), simulating the behaviour of each of these individuals. Occasionally this list may consist of actual individuals, typically a set of travel survey respondents. This works in cases in which the model is being used for short-run forecasts/policy analyses and the survey respondents are statistically representative of the population being modelled. More typically, however, the list of agents to be modelled does not exist, either because a representative sample is not available or because the model is being applied in a future-year context in which the set of actual individuals is unknown. In this case, the list of agents needs to be synthesized from whatever aggregate data are available. This synthesis involves generating a set of individual agents that are statistically consistent with the available aggregate data and that are as representative of the 'true' population as possible. Synthesis methods include combinatorial optimization (Ryan, Maoh, & Kanaroglou, 2009; Williamson, Birkin, & Rees, 1998), iterative proportional fitting (IPF) (Beckman, Baggerly, & McKay, 1996) and simulation (Farooq, Bierlaire, Hurtubia, & Flötteröd, 2013; Miller et al., 1987; Wilson & Pownall, 1976). Recent reviews of synthesis methods include Guo and Bhat (2007), Müller and Axhausen (2011) and Lu (2011). Popularized by the Beckman et al. (1996) TRANSIMS synthesizer developed in the mid-1990s, IPF is by far the most common operational method. Applications and extensions include Arentze, Timmermans, and Hofman (2007), Auld, Mohammadian, and Wies (2009), Pendyala, Christian, and Konduri (2011), and Pritchard and Miller (2012). As illustrated in Figure 13.12, IPF involves estimating the joint distribution of attributes of a target population (in this case, attributes X and Y) given known marginal distributions for this target population and a representative joint sample.
The joint sample distribution is iteratively 'updated' (scaled row-wise and column-wise) until the 'updated' table matches the target marginals as closely as possible. In practice, this typically involves using one-, two- and possibly three-way marginal distributions for a census tract or traffic zone obtained from a national census, together with a 'public use microdata' file containing actual person (and/or household) records randomly drawn from the census database for a larger geographic area that contains the given census tract. An issue in the use of IPF is that it generates non-integer weights for individual agents (including 'fractions of agents'). These must be
Transportation Models
Figure 13.12. Synthesizing population attributes using IPF.

X(i) = ith value of variable X
Y(j) = jth value of variable Y
n_ij = Number of elements with values X(i) and Y(j) in the source sample
N_i+ = Observed number of elements with value X(i) in the target population
N_+j = Observed number of elements with value Y(j) in the target population
N_ij = Estimated number of elements with values X(i) and Y(j) in the target population

Source: Pritchard and Miller (2012).
‘integerized’ before the agents can be placed within the microsimulation model. Various procedures to do this have been used in practice. Lovelace and Ballas (2013) provide an excellent review of the problem and of alternative approaches to its solution. They recommend the ‘truncate, replicate, sample’ (TRS) approach, which appears to work well. Alternatively, combinatorial optimization or simulation approaches, which generate integer weights directly, can be used. In the simplest applications either persons or households are synthesized, depending on which type of agent is being modelled in the microsimulation. Increasingly, as in SimAgent and TASHA, both persons and the households within which they reside are modelled so that intra-household interactions can be explicitly captured. In this case the synthesis task is more complicated, since both types of agent need to be synthesized simultaneously to ensure that the ‘right’ people are placed in the ‘right’ types of households (Pribyl & Goulias, 2005). Various extensions of the standard IPF approach exist for dealing with the joint person-household synthesis problem (Pendyala et al., 2012; Pritchard & Miller, 2012), while emerging simulation methods also hold considerable promise for addressing this problem (Farooq et al., 2013).
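The two steps just described, IPF fitting followed by integerization, can be sketched in a few lines of NumPy. The seed table and margins below are made up for illustration, and the TRS function is a simplified rendering of the 'truncate, replicate, sample' idea, not the published Lovelace and Ballas (2013) implementation.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Iterative proportional fitting: rescale rows then columns of the
    seed table until its margins match the target margins."""
    tab = seed.astype(float).copy()
    for _ in range(max_iter):
        tab *= (row_targets / tab.sum(axis=1))[:, None]   # fit row margins
        tab *= (col_targets / tab.sum(axis=0))[None, :]   # fit column margins
        if np.allclose(tab.sum(axis=1), row_targets, atol=tol):
            break
    return tab

def trs_integerize(weights, rng):
    """Truncate, replicate, sample: keep the integer parts of the weights,
    then allocate the remaining units by sampling cells with probability
    proportional to the fractional parts."""
    ints = np.floor(weights).astype(int)
    frac = weights - ints
    n_left = int(round(weights.sum())) - ints.sum()
    if n_left > 0:
        picks = rng.choice(len(weights), size=n_left, replace=False,
                           p=frac / frac.sum())
        ints[picks] += 1
    return ints

seed = np.array([[10.0, 20.0], [30.0, 40.0]])   # joint sample counts n_ij
rows = np.array([150.0, 250.0])                 # target margins N_i+
cols = np.array([180.0, 220.0])                 # target margins N_+j
fitted = ipf(seed, rows, cols)                  # estimated joint table N_ij
counts = trs_integerize(fitted.ravel(), np.random.default_rng(0))
```

The fitted table reproduces both sets of margins, and the TRS step returns integer agent counts that preserve the total population.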
13.6. Summary and future directions

Although four-step travel demand models are still in operational use, microsimulation-based model systems are now well established in many countries as the best-practice approach to many transportation modelling problems. Virtually all new travel demand model systems being developed for operational use in North America and elsewhere consist of some form of activity-based microsimulation. These are largely tour-based in formulation, but activity scheduling models are increasingly being operationally implemented (e.g. Los Angeles and Toronto). This trend will certainly continue as data availability, computing power and planning and policy analysis application needs continue to drive modelling practice in this direction. Likely future developments in this area include:

• Inclusion of mechanisms for agent learning/adaptation. Currently model parameters are static, econometric estimates based on fitting model predictions to observed base data. The agent-based formulation of current models opens the possibility of developing learning mechanisms that would allow the agents to adapt their tastes and preferences over time in response to their travel and other experiences.
• Increasing integration with both land use models (see Chapter 12) on the one hand and network models on the other. The nature and extent of such integration will depend on the problem to be addressed by the model.
• Increasing exploitation of cloud-based computing and other distributed/parallel high-performance computing options. Many transportation processes involve a high degree of agent interaction, which does not always readily parallelize. Nevertheless, multi-thread and multi-node simulation code is becoming more common in the field and will continue to do so.
• Even the most advanced travel demand models currently model at most a single weekday.
Multi-day (perhaps even week-long) models are beginning to become feasible, given continuing advances in computing power as well as the increasing availability of multi-day/week-long travel data.
• Increasing use of microsimulation for both freight/goods movement and intercity (large-region) modelling.

Traffic microsimulation models are now standard practice for a wide variety of operational, short-term, facility-specific modelling tasks, with a wide variety of commercial and open-source software packages available for use. These models will continue to develop for real-time Intelligent Transportation System (ITS) roadway control applications. The continuing challenge here is to get the software to run very fast so that it can be used in real-time, short-run optimization routines. Similarly, mesoscopic
models will continue to develop and be increasingly applied at the regional scale for longer-term strategic planning applications. Computational challenges still remain, but packages such as MATSim are being used in large-scale research applications and are increasingly being considered for operational implementation. It is only a matter of time before large-scale operational applications are in place.
References

Arentze, T. A., & Timmermans, H. J. P. (2000). ALBATROSS: A learning-based transportation oriented simulation system. Eindhoven: EIRASS.
Arentze, T. A., & Timmermans, H. J. P. (2004a). ALBATROSS version 2.0: A learning based transportation oriented simulation system. Eindhoven: EIRASS.
Arentze, T. A., & Timmermans, H. J. P. (2004b). A learning based transportation oriented simulation system. Transportation Research B, 38, 613–633.
Arentze, T. A., Timmermans, H. J. P., & Hofman, F. (2007). Creating synthetic household populations: Problem and approach. Transportation Research Record, 2014, 85–91.
Atlanta Regional Commission. (2012). Activity-based travel model specifications: Coordinated travel-regional activity based modelling platform (CT-RAMP) for the Atlanta region. Retrieved from http://www.atlantaregional.com/File%20Library/Transportation/Travel%20Demand%20Model/tp_abmodelspecifications_010213.pdf
Auld, J., Mohammadian, A., & Wies, K. (2009). Population synthesis with subregion-level control variable aggregation. Journal of Transportation Engineering, 135(9), 632–639.
Avineri, E., & Bovy, P. H. L. (2008). Identification of parameters for a prospect theory model of travel choice analysis. Transportation Research Record, 2082, 141–147.
Bandini, S., Federici, M. L., & Vizzari, G. (2007). Situated cellular agents approach to crowd modelling and simulation. Cybernetics and Systems, 38, 729–753.
Barrett, C., Berkbigler, K., Smith, L., Loose, V., Beckman, R., Davis, J., … Williams, M. (1995). An operational description of TRANSIMS, LA-UR-95-2393. Los Alamos, NM: Los Alamos National Laboratory.
Beckman, R. J., Baggerly, K. A., & McKay, M. D. (1996). Creating synthetic baseline populations. Transportation Research A, 30(6), 415–429.
Bekhor, S., Dobler, C., & Axhausen, K. W. (2011, January). Integration of activity-based with agent-based models: An example from the Tel
Aviv model and MATSim. Presented at the 90th annual meeting of the Transportation Research Board, Washington, DC.
Ben-Akiva, M. E., & Bierlaire, M. (1999). Discrete choice methods and their applications to short term travel decisions. In Handbook of transportation science (Vol. 23, pp. 5–33). Boston: Kluwer.
Ben-Akiva, M. E., & Lerman, S. R. (1985). Discrete choice analysis: Theory and application to travel demand. Cambridge, MA: MIT Press.
Berkowitz, M. K., Gallini, N. T., Miller, E. J., & Wolfe, R. A. (1987). Forecasting vehicle holdings and usage with a disaggregate choice model. Journal of Forecasting, 6(4), 249–269.
Bhat, C. R., Guo, J. Y., Srinivasan, S., & Sivakumar, A. (2004). A comprehensive econometric microsimulator for daily activity-travel patterns. Transportation Research Record, 1894, 57–66.
Blue, V. J., & Adler, J. L. (2001). Cellular automata microsimulation for modelling bi-directional pedestrian walkways. Transportation Research B, 35, 293–312.
Boerkamps, J. H. K., van Binsbergen, A. J., & Bovy, P. H. L. (2000). Modelling behavioural aspects of urban freight movement in supply chains. Transportation Research Record, 1725, 17–25.
Bonsall, P. W. (1982). Microsimulation: Its application to car sharing. Transportation Research A, 15, 421–429.
Caldwell, S. B. (1996). Dynamic microsimulation and the CORSIM 3.0 model. Ithaca, NY: Institute for Public Affairs, Department of Sociology, Cornell University.
Cavalcante, R., & Roorda, M. J. (2013). Freight market interactions simulation (FREMIS): An agent-based modelling framework. Procedia Computer Science, 19, 867–973.
Chapleau, R. (1986, April). Transit network analysis and evaluation with a totally disaggregate approach. Publication #462. Montreal: Université de Montréal, Centre de recherche sur les transports.
Chiu, Y.-C., Bottom, J., Mahut, M., Paz, A., Balakrishna, R., Waller, T., & Hicks, J. (2011). Dynamic traffic assignment, a primer. Transportation Research Board Circular Number E-C153.
Washington, DC: Transportation Research Board.
Chiu, Y.-C., & Villalobos, J. A. (2008). The anisotropic mesoscopic simulation model in the interrupted highway facilities. Presented at the symposium on the fundamental diagram: 75 years (Greenshields 75 symposium), Woods Hole, MA.
Chorus, C. G. (2012). Random regret minimization: An overview of model properties and empirical evidence. Transport Reviews, 32(1), 75–92.
Chorus, G., Rose, J. M., & Hensher, D. A. (2013). Regret minimization or utility maximization: It depends on the attribute. Environment and Planning B, 40(1), 154–169.
Currie, G., & Shalaby, A. (2012). A synthesis of transport planning approaches for the world’s largest events. Transport Reviews, 32(1), 113–136.
de Jong, G., & Ben-Akiva, M. E. (2007). A micro-simulation model of shipment size and transport chain choice. Transportation Research B, 41(9), 950–965.
Dijkstra, A. (2012). Effects of a robust roads network on bicycle traffic: Results obtained from a microsimulation model (in Dutch). Retrieved from http://www.swov.nl/rapport/R-2012-03.pdf; English abstract retrieved from http://trid.trb.org/view.aspx?id=1137522
Doherty, S. T., Miller, E. J., Axhausen, K. W., & Gärling, T. (2001). A conceptual model of the weekly household activity-travel scheduling process. In E. Stern, I. Salomon, & P. Bovy (Eds.), Travel behaviour: Patterns, implications and modelling (pp. 148–165). Cheltenham, UK: Elgar Publishing Ltd.
Ettema, D., Borgers, A., & Timmermans, H. (1993). Simulation model of activity scheduling behaviour. Transportation Research Record, 1413, 1–11.
Farooq, B., Bierlaire, M., Hurtubia, R., & Flötteröd, G. (2013). Simulation based population synthesis. Transportation Research Part B, 58, 243–263.
FHWA. (2012). Traffic analysis toolbox volume XIV: Guidebook on the utilization of dynamic traffic assignment in modelling. Publication No. FHWA-HOP-13-015. Washington, DC: US Department of Transportation Federal Highway Administration.
Fischer, M. J., Outwater, M., Cheng, L., Ahanotu, D., & Calix, R. (2005). An innovative framework for modelling freight transportation in Los Angeles county. Transportation Research Record, 1906, 105–112.
Fujii, S., & Gärling, T. (2002). Application of attitude theory for improved predictive accuracy of stated preference methods in travel demand analysis. Transportation Research A, 37(4), 398–402.
Gao, W., Balmer, M., & Miller, E. J. (2010). Comparisons between MATSim and EMME/2 on the greater Toronto and Hamilton area network. Transportation Research Record, 2197, 118–128.
Gerstle, D., Morgan, D., & Yang, Q. (2013). Micro-simulation of bicycles for planning and design. Retrieved from http://www.caliper.com/Library/microsimulation-of-bicycles-for-planning.pdf
Goulias, K. G., & Kitamura, R. (1992). Travel demand forecasting with dynamic microsimulation. Transportation Research Record, 1357, 8–17.
Guo, J. Y., & Bhat, C. R. (2007). Population synthesis for microsimulation: State of the art. Transportation Research Record, 2014, 92–101.
Hägerstrand, T. (1970). What about people in regional science? Papers of the Regional Science Association, 24, 7–21.
Harvey, G., & Deakin, E. (1996). Description of the step analysis package, draft manuscript. Hillsborough, NH: Deakin/Harvey/Skabardonis.
Hassounah, M. I., & Miller, E. J. (1994). Modelling air pollution from road traffic: A review. Traffic Engineering & Control, 35(9), 510–514.
Hatzopoulou, M. (2008). An integrated multi-model approach for predicting the impact of household travel on urban air quality on simulating population exposure. Ph.D. thesis, Department of Civil Engineering, University of Toronto, Toronto.
Horowitz, A. (2006). Statewide travel forecasting models, NCHRP synthesis 358. Washington, DC: National Cooperative Highway Research Program, US Transportation Research Board.
Hunt, J. D., & Stefan, K. J. (2007). Tour-based microsimulation of urban commercial movements. Transportation Research B, 41(9), 981–1013.
Iba, H. (2013). Agent-based modelling and simulation with swarm. CRC Studies in Informatics Series. Boca Raton, FL: Chapman & Hall.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
Kass, G. V. (1980). An exploratory technique for investigating large quantities of categorical data. Applied Statistics, 29, 119–127.
Kitamura, R., Chen, C., Pendyala, R. M., & Narayana, R. (2000). Microsimulation of daily activity-travel patterns for travel demand forecasting. Transportation, 27, 25–51.
Kitamura, R., & Fujii, S. (1998). Two computational process models of activity-travel behaviour. In T. Gärling, T. Laitila, & K. Westin (Eds.), Theoretical foundations of travel choice modelling (pp. 251–279). Amsterdam: Elsevier.
Kreibich, V. (1978). The successful transportation system and the regional planning problem: An evaluation of the Munich rapid transit system in the context of urban and regional planning policy. Transportation, 7, 137–145.
Kreibich, V. (1979).
Modelling car availability, modal split and trip distribution by Monte-Carlo simulation: A short way to integrated models. Transportation, 8, 153–166.
Kwan, M. P., & Golledge, T. G. (1997). Computational process modelling of disaggregate travel behaviour. In M. M. Fisher & A. Getis (Eds.), Recent developments in spatial analysis (pp. 171–185). New York: Springer.
Laval, J. A., & Daganzo, C. (2006). Lane-changing in traffic streams. Transportation Research B, 40, 251–264.
Liedtke, G. (2006). Actor-based approach to commodity transport modelling. Germany: Verlagsgesellschaft.
Lovelace, R., & Ballas, D. (2013). Truncate, replicate, sample: A method for creating integer weights for spatial microsimulation. Computers, Environment and Urban Systems, 41, 1–11.
Lu, M. (2011). Generating disaggregate population characteristics for input to travel-demand models. PhD dissertation, University of Florida. Retrieved from http://gradworks.umi.com/35/14/3514962.html
Mackett, R. L. (1985). Micro-analytical simulation of locational and travel behaviour. Proceedings PTRC summer annual meeting, Seminar L: Transportation Planning Methods, PTRC, London (pp. 175–188).
Mackett, R. L. (1990). Exploratory analysis of long-term travel demand and policy impacts using micro-analytical simulation. In P. Jones (Ed.), Developments in dynamic and activity-based approaches to travel analysis (pp. 384–405). Aldershot: Avebury.
Mahmassani, H. S., Hu, T. Y., & Peeta, S. (1994). Microsimulation-based procedures for dynamic network traffic assignment. Proceedings of the 22nd European transport forum, Seminar H: Transportation Planning Methods, PTRC (Vol. II, pp. 53–64).
Meister, K., Balmer, M., Ciari, F., Horni, A., Rieser, M., Waraich, R. A., & Axhausen, K. W. (2010, July). Large-scale agent-based travel demand optimization applied to Switzerland, including mode choice. Presented at the 12th world conference on Transportation Research, Lisbon.
Meyer, M. D., & Miller, E. J. (2001). Urban transportation planning (2nd ed.). New York, NY: McGraw-Hill.
Miller, E. J. (2004). The trouble with intercity travel demand models. Transportation Research Record, 1895, 94–101.
Miller, E. J. (2005). Propositions for modelling household decision-making. In M. Lee-Gosselin & S. T. Doherty (Eds.), Integrated land-use and transportation models: Behavioural foundations (pp. 21–60). Oxford: Elsevier.
Miller, E. J., Noehammer, P. J., & Ross, D. R. (1987). A micro-simulation model of residential mobility. In Proceedings of the international symposium on transport, communications and urban form.
Analytical Techniques and Case Studies (Vol. 2, pp. 217–234), Monash University.
Miller, E. J., & Roorda, M. J. (2003). A prototype model of household activity/travel scheduling. Transportation Research Record, 1831, 114–121.
Miller, E. J., & Salvini, P. A. (2002). Activity-based travel behaviour modelling in a microsimulation framework. In H. S. Mahmassani (Ed.), Perpetual motion, travel behaviour research opportunities and application challenges (pp. 533–558). Amsterdam: Pergamon.
Moridpour, S., Sarvi, M., & Rose, G. (2010). Lane changing models: A critical review. Transportation Letters: The International Journal of Transportation Research, 2, 157–173.
Müller, K., & Axhausen, K. W. (2011). Population synthesis for microsimulation: State of the art. Proceedings of the 90th annual meeting of the Transportation Research Board, Washington, DC.
Oskamp, A. (1995, March 15–18). LocSim: A microsimulation approach to household and housing market modelling. Paper presented to the 1995 annual meeting of the American Association of Geographers, Chicago. PDOD Paper No. 29, Department of Planning and Demography, AME Amsterdam Study Centre for the Metropolitan Environment, University of Amsterdam, Amsterdam.
Pendyala, R. M., Bhat, C. R., Goulias, K. G., Paleti, R., Konduri, K. C., Sidharthan, R., … Christian, K. P. (2012). The application of a socio-economic model system for activity-based modelling: Experience from Southern California. Presented at the 91st annual meeting of the Transportation Research Board. Retrieved from http://www.scag.ca.gov/Documents/7140D07E62B9.pdf
Pendyala, R. M., Christian, K. P., & Konduri, K. C. (2011). PopGen 1.1 user’s guide. Raleigh, NC: Lulu Publishers.
Pendyala, R. M., Kitamura, R., Kikuchi, A., Yamamoto, T., & Fujii, S. (2005). FAMOS: The Florida activity mobility simulator. Transportation Research Record, 1921, 123–130.
Pendyala, R. M., Konduri, K. C., Chiu, Y.-C., Hickman, M., Noh, H., Waddell, P., … Gardner, B. (2013). Integrated land use – transport model system with dynamic time-dependent activity-travel microsimulation. Transportation Research Record: Journal of the Transportation Research Board, 2301, 19–27.
Pendyala, R. M., Yamamoto, T., & Kitamura, R. (2002). On the formation of time-space prisms to model constraints on personal activity-travel engagement. Transportation, 29(1), 73–94.
Pourabdollahi, Z., Karimi, B., & Mohammadian, A. (2013). A joint model of freight mode and shipment size choice. Transportation Research Record, 2378, 84–91.
Pribyl, O., & Goulias, K. (2005).
Simulation of daily activity patterns incorporating interactions within households: Algorithm overview and performance. Transportation Research Record: Journal of the Transportation Research Board, 1926, 135–141. doi:10.3141/1926-16
Pritchard, D., & Miller, E. J. (2012). Advances in population synthesis: Fitting many attributes per agent and fitting to household and person margins simultaneously. Transportation, 39(3), 685–704.
RDC, Inc. (1995). Activity-based modelling system for travel demand forecasting. Washington, DC: U.S. Department of Transportation.
Recker, W. W., & Golob, T. F. (1979). A non-compensatory model of transportation behaviour based on sequential consideration of attributes. Transportation Research B, 13(4), 269–280.
Recker, W. W., McNally, M. G., & Root, G. S. (1986a). A model of complex travel behaviour: Part I – Theoretical development. Transportation Research A, 20A(4), 307–318.
Recker, W. W., McNally, M. G., & Root, G. S. (1986b). A model of complex travel behaviour: Part II – An operational model. Transportation Research A, 20A(4), 319–330.
Rieser, M. (2010). Adding transit to an agent-based transportation simulation: Concepts and implementation. PhD thesis, VSP, TU Berlin, Germany.
Robin, T. (2011). New challenges in disaggregate behavioural modelling: Emotions, investments and mobility. PhD thesis, École Polytechnique Fédérale de Lausanne, Lausanne.
Roorda, M., Cavalcante, R., McCabe, S., & Kwan, H. (2010). A conceptual framework for agent-based modelling of logistics services. Transportation Research E, 46(1), 18–31.
Roorda, M. J., Miller, E. J., & Habib, K. M. N. (2008). Validation of TASHA: A 24-hour activity scheduling microsimulation model. Transportation Research A, 42, 360–375.
Rothery, R. (2002). Car following models, lecture notes, University of Texas at Austin. Retrieved from http://ocw.mit.edu/courses/civil-and-environmental-engineering/1-225j-transportation-flow-systems-fall-2002/lecture-notes/carfollowinga.pdf
Ryan, J., Maoh, H., & Kanaroglou, P. (2009). Population synthesis: Comparing the major techniques using a small, complete population of firms. Geographical Analysis, 41(2), 181–203.
Sahaleh, A. S., Bierlaire, M., Farooq, B., Danalet, A., & Hänseler, F. S. (2012). Scenario analysis of pedestrian flow in public spaces. Transport and Mobility Laboratory report. Lausanne: École Polytechnique Fédérale de Lausanne.
Samimi, A., Mohammadian, A., & Kawamura, K. (2010, July 11–15). Freight demand microsimulation in the US. Proceedings of the 12th world conference on transport research, Lisbon, Portugal.
Sheffi, Y. (1985). Urban transportation networks: Equilibrium analysis with mathematical programming methods. Englewood Cliffs, NJ: Prentice-Hall.
Simon, H. (1957).
A behavioural model of rational choice. In H. Simon (Ed.), Models of man. New York, NY: Wiley.
Tamminga, G., Miska, M., Santos, E., van Lint, H., Nakasone, A., Prendinger, H., & Hoogendoorn, S. (2012). Design of open source framework for traffic and travel simulation. Transportation Research Record: Journal of the Transportation Research Board, 2291, 44–52. doi:10.3141/2291-06
TMG. (2012). XTMF documentation. Toronto: Travel Modelling Group, Department of Civil Engineering, University of Toronto. Retrieved from
http://www.ecf.utoronto.ca/~miller/TMG-XTMF-Documentation.pdf
Train, K. (2009). Discrete choice methods with simulation (2nd ed.). Cambridge: Cambridge University Press.
TRB. (2010). A primer for dynamic traffic assignment. Washington, DC: U.S. Transportation Research Board.
Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review, 79(4), 281–299.
Vasic, J., & Ruskin, H. J. (2011). A discrete flow simulation model for urban road networks, with application to combined car and single-file bicycle traffic. Computational Science and Its Applications – ICCSA 2011 (Vol. 6782, pp. 602–614). Lecture Notes in Computer Science.
Wahba, M., & Shalaby, A. (2011). Large-scale application of MILATRAS: Case study of the Toronto transit network. Transportation, 38(6), 889–908.
Wang, J. A., & Miller, E. J. (2012). A prism- and gap-based approach to shopping destination choice. Special issue of Environment & Planning B (forthcoming).
Wang, Q., & Holguin-Veras, J. (2008). Investigation on the attributes determining trip chaining behaviour in hybrid micro-simulation urban freight models. Transportation Research Record, 2066, 1–8.
Wardrop, J. G. (1952). Some theoretical aspects of road traffic research. Proceedings, Institution of Civil Engineers, 11(1), 325–378.
Williams, H. C. W. L., & Ortuzar, J. D. (1982). Behavioural theories of dispersion and the mis-specification of travel demand models. Transportation Research B, 16(3), 167–219.
Williamson, P., Birkin, M., & Rees, P. H. (1998). The estimation of population microdata by using data from small area statistics and samples of anonymized records. Environment and Planning A, 30(5), 785–816.
Wilson, A. G., & Pownall, C. E. (1976). A new representation of the urban system for modelling and for the study of micro-level interdependence. Area, 8, 246–254.
Wilson, N. H. M., & Nuzzolo, A. (Eds.). (2008). Schedule-based modelling of transportation networks: Theory and applications. Boston: Kluwer Academic Publishers.
Wisetjindawat, W., Sano, K., Matsumoto, S., & Raothanachonkun, P. (2007). Micro-simulation model for modelling freight agents interactions in urban freight movement. 86th TRB annual meeting, Washington, DC.
Zhang, S., Ren, G., & Yang, R. (2013). Simulation model of speed-density characteristics for mixed bicycle flow – Comparison between cellular automata model and gas dynamics model. Physica A, 392, 5110–5118.
CHAPTER 14
Health Models
Deborah Schofield, Hannah Carter and Kimberley Edwards
14.1. Introduction

Health is a relatively new application of microsimulation. Early microsimulation models were generally developed for tax-benefit applications. Over the last decade or so, however, applications of microsimulation to health and health policy have proliferated, reflecting the complexity of human health and health policy and the significant size and growth of national health budgets. The main areas of development of health microsimulation applications have been in response to the priority questions of government: how much do health policies cost, what is their rate of growth, how is their expenditure distributed, how does health differ at the regional level and, increasingly, does the health workforce match the need for health care? There has also been an increasing integration of health-related mortality modelling into dynamic models of long-term socio-economic and health trends. This chapter will summarise the development of microsimulation models and their applications to health published since the year 2000, covering four main applications: health expenditure, spatial analysis, mortality and the health workforce, concluding with a section on emerging applications and future developments.
14.2. Health expenditure

Given the focus on fiscal policy of early microsimulation models, it is not surprising that the largest number of microsimulation models is devoted to modelling the distribution of health expenditure. Early models typically imputed health status onto tax-benefit models using a small number of
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293013
© 2014 BY EMERALD GROUP PUBLISHING LIMITED ALL RIGHTS RESERVED
covariates (Landt, Percival, Schofield, & Wilson, 1995). Later, purpose-built models of health expenditure began to be developed, with the primary purpose of simulating health policy. These purpose-built models are the focus of this section, although there are still a number of models that rely on imputed averages and are not underpinned by unit record data capturing the distributional characteristics of health or related expenditure (Fukawa, 2007). A model of this type, called the Integrated Analytical Model for Household Simulation (INAHSIM), has been developed to forecast health and long-term care expenditure to 2050 for Japan (Fukawa, 2012). More sophisticated models are not only built on unit record health surveys but also have additional data imputed at the unit record level using processes such as statistical matching if exact matching is not possible (e.g. the COMPARE model; Cordova, Girosi, Nowak, Eibner, & Finegold, 2013; Girosi et al., 2009). Australia was one of the earliest countries to adopt purpose-built microsimulation models of major health expenditures, the first of which were developed about two decades ago (Schofield, 1998a, 1998b), with more recent models including HealthMOD (Lymer, Brown, Payne, & Harding, 2006), covering a range of health programs, and models for specific health programs such as the pharmaceutical benefits scheme (Brown, Abello, Phillips, & Harding, 2004). Dynamic models capture behavioural response and are seen, for example, in policies such as incentives to participate in public health insurance schemes (e.g. Girosi et al., 2009), where citizen behaviour is a significant driver of policy cost. There are relatively few dynamic models built specifically for modelling health or health policy. It is more common to see health modules ‘bolted on’ to dynamic models primarily developed to project demographic change and capture the impacts on tax-benefit policy.
Examples include the Swedish model SESIM, which has modules for 'absence due to sick days' and 'retirement due to ill health' (Klevmarken & Lindgren, 2008), and DYNAMOD, an Australian model, to which modules simulating broad health status and major public health expenditures were added (Lymer & Brown, 2010). Sometimes models are developed largely to simulate disease trajectories and then to model the cost-effectiveness of interventions. These models can be used to project beyond the follow-up period of a clinical trial in the way that a typical aggregate cohort-based Markov model would, by simulating the probability of transitions between health states, with the associated costs often imputed in aggregate from published or administrative data (e.g. Fukawa, 2007; Walker, Butler, & Colagiuri, 2010). POHEM is one of the most comprehensive dynamic microsimulation models developed specifically to model health. It simulates diseases and risk factors, health care costs and health-related quality of life (Wills, Berthelot, Nobrega, Flanagan, & Evans, 2001). Other models have been more focused, such as an Australian model developed to simulate the costs of diabetes and its sequelae (Walker et al., 2010). Table 14.1
summarises health expenditure models reported in the literature from 2000 to 2013.

Table 14.1. Microsimulation models of health expenditure, 2000–2013

Static models

Gruber (2000) (USA, 2000)
  Aims/objectives: To estimate the impact of tax policies on government costs and insurance coverage.
  Data sources: Nationally representative sample of individuals from the Current Population Surveys (CPS) for February and March 1997.
  Sample size: Not stated.

COMPARE (Girosi et al., 2009) (USA, 2009)
  Aims/objectives: Model the impact of a series of policy options on public health insurance coverage in the US.
  Data sources: Nationally representative surveys: the Survey of Income and Program Participation (SIPP), the Medical Expenditure Panel Survey (MEPS), the Kaiser Family Foundation/Health Research and Educational Trust Employer Survey (Kaiser/HRET), and the Survey of U.S. Businesses (SUSB).
  Sample size: Not stated.

HealthMOD (Lymer et al., 2006) (Australia, 2004)
  Aims/objectives: Model major public health expenditures.
  Data sources: National Health Survey.
  Sample size: 30,000.

Dynamic and projection models

Population Health Model (POHEM) (Wills et al., 2001) (Canada, 2001)
  Aims/objectives: Models health, health risk factors and health costs.
  Data sources: Canadian Community Health Survey (CCHS), Statistics Canada's census projections, and the National Population Health Survey (NPHS).
  Sample size: 130,000.

Diabetes model (Walker et al., 2010) (Australia, 2010)
  Aims/objectives: Modelling diabetes and its health system costs.
  Data sources: National Health Survey, AUSDIAB.
  Sample size: 30,000.

Intergenerational reports (Schofield & Rothman, 2007) (Australia, 2002)
  Aims/objectives: To forecast health expenditure and other demographically sensitive expenditure over 40 years.
  Data sources: Australian Government Budget Papers, National Health Survey, Australian Treasury demographic forecasts.
  Sample size: Microsimulation models based on Australian National Health Surveys, ∼30,000 records.

Integrated Analytical Model for Household Simulation (IAMHS) (Fukawa, 2007) (Japan, 2007)
  Aims/objectives: Model health and long-term care expenditures of the elderly.
  Data sources: IAMHS with imputed averages for expenditures by age.
  Sample size: Not stated.

Hiligsmann et al. (2009)
  Aims/objectives: Assess the cost-effectiveness of prevention and treatment of osteoporosis using alendronate therapy over a lifetime, taking account of direct health care costs.
  Data sources: Clinical intervention and published data for projections.
  Sample size: Not stated.

SESIM (Klevmarken & Lindgren, 2008) (Sweden, 2008)
  Aims/objectives: Adding absence due to sick days and retirement due to ill health to a tax-benefit model.
  Data sources: The HINK (Household Income Distribution Survey).
  Sample size: 30,000 records.

DYNAMOD (Lymer & Brown, 2010) (Australia, 2009)
  Aims/objectives: Adding broad health states and costs of major public health programs to a population projection and tax-benefit model.
  Data sources: Australian Census and other data sources for health imputation.
  Sample size: 100,000 records.

SAGE (Scott, 2003) (UK, 2001)
  Aims/objectives: Dynamic demographic/tax model for the UK.
  Data sources: 10% sample drawn from the Individual/Household 1991 Anonymised Records combined with several survey datasets.
  Sample size: 54,000 individuals.
14.3. Spatial models of health and disease

Spatial microsimulation is a growing area of modelling, particularly in relation to health and disease data since the Black Report, written by the UK Department of Health and Social Security in 1980, highlighted widening spatial health inequalities (Gray, 1982), findings that have since been corroborated worldwide (Dorling, Mitchell, & Pearce, 2007; Safaei, 2007; Wilkinson & Picket, 2006). Spatial microsimulation for health followed earlier applications of spatial microsimulation modelling to social policy and population dynamics (Ballas, Clarke, Dorling, et al., 2005; Ballas, Rossiter, Thomas, Clarke, & Dorling, 2005; Birkin & Clarke, 1988; Chin & Harding, 2006; Williamson, Birkin, & Rees, 1998). Both static and dynamic spatial health models have been developed, depending upon the needs of the motivating research or policy question, the latter being more computationally intensive. Part of the reason for the increasing popularity of these models, aside from academic and policy need, is the low-cost computing power that has become available over recent decades, which, together with software and model sharing, has opened up the methodology to individuals with numerical ability who are not computer programmers.

Spatial microsimulation models allow the user to build large-scale micro-datasets using either deterministic or probabilistic algorithms, combining a detailed population dataset (containing the outcomes of interest) with a detailed geographic dataset (containing the spatial detail required in the final estimation dataset) to simulate how a person's location affects their risk of disease. Examples include SimSALUD, a spatial microsimulation platform for health decision support developed by the Federal Ministry for Transport, Innovation and Technology and the Austrian Science Fund at the Carinthia University of Applied Sciences in Austria (http://simsalud.cti.ac.at:81/simsalud/).
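The deterministic route just described is often implemented as iterative proportional fitting (IPF) of survey weights to small-area census margins. The following is a minimal sketch of that idea; the survey records, attribute categories and margin counts are all invented for illustration.

```python
import numpy as np

# Minimal IPF sketch: six survey records with two binary attributes are
# reweighted so their weighted totals match the census margins of one
# (hypothetical) small area.

survey = np.array([          # columns: age_group (0/1), sex (0/1)
    [0, 0], [0, 1], [1, 0], [1, 1], [0, 0], [1, 1],
])

margin_age = np.array([60.0, 40.0])   # small-area counts by age_group
margin_sex = np.array([55.0, 45.0])   # small-area counts by sex

weights = np.ones(len(survey))        # start from uniform weights

for _ in range(100):                  # alternate fitting to each margin
    for col, margin in ((0, margin_age), (1, margin_sex)):
        for cat, target in enumerate(margin):
            mask = survey[:, col] == cat
            weights[mask] *= target / weights[mask].sum()

# After convergence the reweighted records reproduce both margins.
print(round(weights[survey[:, 0] == 0].sum(), 1))  # 60.0 (age margin)
print(round(weights[survey[:, 1] == 0].sum(), 1))  # 55.0 (sex margin)
```

In a real application the survey would be a national health survey, the margins would come from census small-area tables, and the fitting would be repeated for every small area; probabilistic alternatives (e.g. combinatorial optimisation) select whole records rather than reweighting them.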
Spatial health models are summarised in Table 14.2. Spatial microsimulation modelling has now been used successfully to estimate health and disease data, as well as the lifestyle and environmental factors associated with chronic disease, and to examine health inequalities and health care provision. Many chronic diseases are mediated by lifestyle choices (e.g. smoking, alcohol consumption, physical activity levels, diet), so modelling chronic disease, and elucidating its aetiology and prevention, benefits from detailed information about these lifestyle variables. Many of the chronic diseases linked to lifestyle choices are increasing in prevalence (for example, obesity) (World Health
Table 14.2. Spatial microsimulation models of health and disease

Anderson (2008) (UK)
  Aims/objectives: This study sought to estimate micro-level patterns of 'internet uptake and media time use'.
  Model type: Lifestyle indicators of disease.
  Data sources: 2000 Office for National Statistics Time-Use Survey.

Campbell (2011) (Scotland)
  Aims/objectives: To simulate health data at the small area level and investigate any spatial inequalities existing in Scotland. The model was also used to explore potential outcomes of policy scenarios that attempt to reduce inequalities.
  Model type: Health inequalities.
  Data sources: Scottish Health Survey; UK Census.

CareMod (Chin & Harding, 2006, 2007; Lymer et al., 2008, 2009) (Australia)
  Aims/objectives: Produced small area estimates of disability levels and need for aged care across age groups, in order to facilitate the development and execution of appropriate social policy and elderly care services for older Australians. The study found that as age increases, the presence and severity of disability also increase, indicating a higher level of care requirement, and that particular geographic areas had greater rates of disability or care provision need.
  Model type: Disease/morbidity prevalence; health care provision.
  Data sources: ABS Survey of Disability, Ageing and Carers; 2001 Census small area data.

Mohana et al. (2005) (UK)
  Aims/objectives: To understand 'the influence of social capital on health outcomes'. This model estimated various health and health-related behaviours of individuals as well as area measures of social capital, but found no relationship between social capital and health at this spatial scale.
  Model type: Health inequalities.
  Data sources: Health and Lifestyle Survey of England; UK Census.

MoSeS (Birkin et al., 2009; Wu & Birkin, 2013) (UK)
  Aims/objectives: Produces micro-level estimates of synthetic individuals, including changes in six demographic processes (health change, mortality, fertility, ageing, household formation and migration).
  Model type: Disease/morbidity prevalence; health care provision.
  Data sources: British Household Panel Survey; UK Census; ONS mid-year estimates and sub-national projections; Special Migration Statistics; Vital Statistics.

SimBritain (Ballas, Clarke, Dorling, et al., 2005; Ballas et al., 2006) (UK)
  Aims/objectives: Takes a spatial microsimulation approach to population dynamics, asking what would happen if Britain were 'more equal in terms of health' over a 30-year simulation. The health of the simulated individuals is explored, and the interdependencies between health variables (for example, limiting long-term illness) and socio-economic attributes are discussed, to answer the question of what role socio-economic factors play in determining an individual's health.
  Model type: Health inequalities.
  Data sources: British Household Panel Survey; UK Census.

SimHealth (Smith et al., 2006, 2007, 2011; Tomintz et al., 2008) (UK; New Zealand)
  Aims/objectives: To investigate the association between access to healthy food and diet-related ill health, including diabetes, obesity and low birth weight. It has also been used to estimate the prevalence of smoking at the individual level in order to assess the optimal location of smoking cessation clinics.
  Model type: Disease prevalence; lifestyle indicators of disease; health care provision.
  Data sources: Health Survey for England; UK Census.

SimObesity (Edwards & Clarke, 2009; Edwards, Clarke, Thomas, et al., 2010; Edwards, Clarke, Ransley, & Cade, 2010; Procter et al., 2008) (UK)
  Aims/objectives: To create micro-level estimates of obesity prevalence and obesogenic lifestyle and environment factors.
  Model type: Disease prevalence; lifestyle indicators of disease.
  Data sources: Health Survey for England; Expenditure and Food Survey; UK Census.

SMILE: Simulation Model for the Irish Local Economy (Ballas, Clarke, & Wiemers, 2005; Morrissey, Clarke, Ballas, Hynes, & O'Donoghue, 2008; Morrissey et al., 2010, 2013) (Ireland)
  Aims/objectives: To investigate the small area incidence of depression and limiting long-term illness in Ireland, as well as levels of access to both acute and community psychiatric care.
  Model type: Disease prevalence; health care provision.
Organisation [WHO], 2000), increasing the need to formulate effective and suitable policies to help prevent these diseases. Accordingly, spatial models have been developed to assess the context-dependent impact of proposed health interventions. For example, does a policy to reduce obesity that focuses on community projects have a different impact in areas of high or low deprivation, or in urban versus rural areas? And are there particular geographical 'hot spots' of disease in a city that should not be averaged out (Procter, Clarke, Ransley, & Cade, 2008)? The same intervention may be successful in one area but have no impact, or a negative impact, in another; understanding this level of geographic detail is central to spatial microsimulation.

SimObesity is a model built to increase our understanding of the relationship between lifestyle and obesogenic environment risk factors and obesity, in both children and adults (Edwards & Clarke, 2009; Edwards, Clarke, Thomas, & Forman, 2010). Like a number of other spatial microsimulation models, it generates a 'synthetic population' of individuals and households in the study area, linking attributes from the data sources; other models based on similar synthetic populations include SimHealth for type 2 diabetes (Smith, Clarke, Ransley, & Cade, 2006; Smith, Harland, & Clarke, 2007) and SMILE for depression and limiting long-term illness (Morrissey, Clarke, & O'Donoghue, 2013; Morrissey, Hynes, Clarke, & O'Donoghue, 2010). SimObesity has also been used to estimate obesogenic environment attributes at the small area level in order to increase understanding of obesity at the neighbourhood level, which in turn may facilitate the development of targeted obesity prevention health policies. Anderson (2008) has modelled detailed micro-level data on patterns of e-play, an important component of physical inactivity, which is known to be linked to ill health.
Similarly, SimHealth has been used to estimate smoking prevalence at the individual level in both the UK and New Zealand (Smith, Pearce, & Harland, 2011; Tomintz, Clarke, & Rigby, 2008). Some of these models utilise the estimated disease data to understand access to health care services, such as access to smoking cessation clinics in the UK (Tomintz et al., 2008) or access to GP services in Ireland (Morrissey et al., 2013), often by linking to other models such as location-allocation or spatial interaction models. There are also examples of dynamic spatial microsimulation models which model micro-level health data, as well as other attributes. The key advantage of these models is that, as well as creating a detailed snapshot of the population's attributes, they are able to predict the trajectory of disease, or the impact of health interventions, over time (O'Donoghue, 2001). MoSeS is one such model; it includes a 'health change' attribute, which has been used to examine the effects of increasing obesity prevalence on health status and life expectancy. Additionally, CareMod has been used in Australia to estimate disability (Lymer, Brown, Yap, & Harding, 2008)
and elderly care requirements at fine spatial levels (Lymer, Brown, Harding, & Yap, 2009). In addition to the chronic disease models described above, a number of infectious diseases have also been modelled, usually dynamically at the small area scale. These models have typically been developed to track the spread of a disease so that the impact of different prevention or mitigation strategies, whether vaccination policies or bioterrorism avoidance, can be examined. These agent-based models are not covered in detail here; examples include Eubank et al. (2004) and Burke et al. (2006) for smallpox, and Timpka, Morin, Jenvald, Eriksson, and Gursky (2005) and Eriksson et al. (2007) for influenza. Several spatial microsimulation models have been developed to examine health and health inequalities. Ballas, Clarke, Dorling, Rigby, and Wheeler (2006) used a dynamic spatial microsimulation approach to analyse health inequalities using SimBritain, Mohana, Twigg, Barnard, and Jones (2005) examined the relationship between social capital and health, and Campbell (2011) investigated spatial inequalities of ill health at the small area level in Scotland.
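The location-allocation models mentioned above, such as siting smoking cessation clinics against small-area prevalence estimates, can be sketched as a greedy p-median heuristic. The areas, coordinates and demand counts below are invented for illustration; real applications would use the small-area disease estimates produced by the spatial microsimulation model as demand.

```python
# Greedy p-median sketch: choose p clinic sites among candidate areas so
# as to minimise the demand-weighted distance to the nearest clinic.
# All data are hypothetical.

areas = {              # area id -> (x, y, estimated smokers)
    "A": (0, 0, 120), "B": (4, 0, 80), "C": (0, 3, 60), "D": (5, 4, 140),
}

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def weighted_cost(sites):
    # Total demand-weighted distance from each area to its nearest site.
    return sum(
        n * min(dist((x, y), areas[s][:2]) for s in sites)
        for x, y, n in areas.values()
    )

def greedy_p_median(p):
    # Repeatedly add the site that most reduces the weighted cost.
    sites = []
    while len(sites) < p:
        best = min(
            (a for a in areas if a not in sites),
            key=lambda a: weighted_cost(sites + [a]),
        )
        sites.append(best)
    return sites

print(greedy_p_median(2))  # ['B', 'A']
```

Greedy selection is only a heuristic; production systems typically use exact p-median solvers or spatial interaction models, but the structure, demand from the microsimulation plus a distance-minimising objective, is the same.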
14.4. Mortality

Mortality is generally simulated in dynamic microsimulation models, which capture lifetime transitions until death. The modelling of mortality, and the level of complexity applied, depends largely on two key factors: (1) the aim of the model, as some objectives and policies may be more or less related to specific health behaviours or risk factors; and (2) the availability of data, which severely constrains modellers to different degrees in different countries (Pennec & Bacon, 2007). As Lymer (2009) has identified, dynamic microsimulation models related to health tend to fall into two categories: broad models of demographic, economic and welfare processes that contain a health module; and dedicated health models developed exclusively to model health behaviours, outcomes and/or resource use. The techniques and data sources used to model mortality differ significantly between these two categories. Accordingly, the models are divided into two groups: Table 14.3 summarises the models that represent a broad economic system while incorporating health outcomes as one component of interest, while Table 14.4 summarises the models that were specifically developed to model the health system.

The least sophisticated models of mortality are generally components of microsimulation models of broad economic systems. Since the primary aim of these large models is to simulate broad socio-economic and public policy processes, the mortality projections they contain tend to be of limited scope and hence provide limited opportunity to estimate the mortality implications of alternative
Table 14.3. Microsimulation models of broad economic systems, incorporating health and mortality outcomes

APPSIM (Australia, 2007)
  Aims/objectives: Designed to provide answers regarding the future distributional impact of policy change and other issues associated with policy responses to population ageing.
  Base data source: 1% census sample drawn from the 2001 Census.
  Sample size: 188,013 individuals.
  Mortality covariates: Age, sex.
  Mortality data source: ABS Life Tables.

DYNASIM 3 (USA, 2004)
  Aims/objectives: Designed to analyse the long-term distributional consequences of retirement and ageing issues.
  Base data source: SIPP panels 1990 to 1993.
  Sample size: 100,000 individuals and 44,000 households.
  Mortality covariates: Time trend from Vital Statistics 1982–97, including socio-economic differentials; separate process for the disabled based on age, sex, and disability duration derived from actuarial estimates (Zayatz, 1999).
  Mortality data source: National Longitudinal Survey of Youth (NLSY) 1979–81, Vital Statistics, intermediate assumptions of the OASDI Trustees (OACT).

CAPP_DYN (Italy, 2008)
  Aims/objectives: Analyses the long-term redistributive effects of social policies.
  Base data source: Survey of Households' Income and Wealth (SHIW), 2002–2004.
  Sample size: 21,148 individuals and 8,011 households.
  Mortality covariates: Age, gender, year of simulation and area of residence.
  Mortality data source: ISTAT official projections, 1/1/2007.

INAHSIM, revision 3 (Japan, 2005)
  Aims/objectives: Simulates demographic and social evolution; able to simulate kinship relationships in detail.
  Base data source: Comprehensive Survey of the Living Conditions of People on Health and Welfare (CSLC), aligned with the population census.
  Sample size: 128,000 individuals and 49,000 households.
  Mortality covariates: Age, sex, health status as a binary variable.
  Mortality data source: Projected life tables published by the National Institute of Population and Social Security Research (IPSS).

SAGE (UK, 2001)
  Aims/objectives: Dynamic demographic/tax model for the UK.
  Base data source: 10% sample drawn from the Individual/Household 1991 Anonymised Records combined with several survey datasets.
  Sample size: 54,000 individuals.
  Mortality covariates: Age, sex, social class.
  Mortality data source: Based on abridged life tables for the years 1987–1991 produced by the Office for National Statistics (ONS) using the ONS Longitudinal Study (LS), a dataset produced by record linkage of Census and vital event information for individuals living in Britain.

Table 14.4. Microsimulation models specifically developed to model health and mortality outcomes

POHEM (Canada, 2001)
  Aims/objectives: Used to compare competing health intervention alternatives within a framework that captures the effects of disease interactions.
  Base data source: Canadian Community Health Survey (CCHS), with additional variables imputed using a range of administrative datasets.
  Sample size: 131,535 individuals.
  Mortality covariates: Age/sex over time, as well as certain risk factors (smoking, the evolution of body mass index (BMI), blood-related risk factors (total cholesterol, high density lipid count (HDL)) and blood pressure).
  Mortality data sources: Based on mortality data and Statistics Canada projections of mortality for Canada. Estimates for mortality associated with risk factors were obtained from the published literature.

LifeLossMOD (Australia, 2013)
  Aims/objectives: Estimates the long-term economic impacts of premature mortality in Australia.
  Base data source: Australian mortality records from 2003.
  Sample size: 131,495 individual records.
  Mortality covariates: Age, sex.
  Mortality data sources: ABS cause of death data, synthetically matched with individual records from the APPSIM model.

DYNOPTASIM (Australia, 2011)
  Aims/objectives: Produces scenarios of health outcomes of the baby boomers and older cohorts in Australia and will evaluate the impact of modifying risk factors associated with different age-related disabilities.
  Base data source: Pooled dataset prepared by harmonising selected variables from the datasets of nine different Australian longitudinal studies of ageing.
  Sample size: 50,652 individuals.
  Mortality covariates: Age, sex, specific health conditions.
  Mortality data sources: Unclear from currently published documents.

Chronic Disease Prevention Model (CDP) (WHO European region, 2009)
  Aims/objectives: Assesses the cost-effectiveness and distributional impact of obesity-related interventions in selected OECD countries.
  Base data source: Unclear from currently published documents.
  Sample size: Unclear from currently published documents.
  Mortality covariates: By age/sex; also explicitly accounts for three groups of chronic diseases: stroke, ischemic heart diseases and cancer (including lung, colorectal and breast cancer).
  Mortality data sources: Modelled on the basis of the best existing epidemiological evidence for the relevant countries from a range of sources, including national health surveys, published studies, and datasets from WHO, the UN Food and Agriculture Organization, and the International Agency for Research on Cancer.

HealthAgeingMOD (Australia, 2007)
  Aims/objectives: Cost-benefit model system of chronic diseases to assess and rank prevention and treatment options. Designed to use standard cost-benefit and cost-effectiveness methods to assess the impact of a series of simulated policy options.
  Base data source: NHS05 person-level household data, plus SDAC03 for institutionalised persons.
  Sample size: 25,906 individuals.
  Mortality covariates: Age, sex, presence of disease (CVD or diabetes).
  Mortality data sources: Initial weights from NHS 2005, calibrated to reflect ABS published age/sex targets. Diabetes deaths from AUSDiab data; CVD deaths from ABS published CVD death targets.

Dynamic physical activity model (Canada, 2011)
  Aims/objectives: Evidence-based simulation tool to study the impact of physical activity on population health outcomes.
  Base data source: Canadian Community Health Survey.
  Sample size: >100,000 individuals.
  Mortality covariates: Age, sex, year (time), region, race/ethnicity, smoking status, drinking status, education, BMI, hypertension, diabetes, heart disease, cancer, level of physical activity.
  Mortality data sources: National Population Health Survey (NPHS), adjusted to agree with rates provided by the Statistics Canada Demography Division.
health policy interventions. All five models identified were population representative and designed to provide projections regarding the future distributional impact of policy change, particularly focussing on the implications of population ageing. The health status of individuals within the models was incorporated to varying extents, but was generally allocated either in terms of a disability status (as in APPSIM (Lymer & Brown, 2012), CAPP_DYN (Mazzaferro & Morciano, 2008) and DYNASIM III (Favreault & Smith, 2004)) or with health status dichotomised into two categories, 'good health' and 'ill health' (as in INAHSIM (Inagaki, 2005) and SAGE (Scott, 2003)) (see Table 14.3). Mortality rates by age and sex were the basic indicators used, as these data are relatively easy to obtain in the form of life tables from a country's vital statistics database. Depending on the availability of data, some of the models also incorporated health and socio-economic variables as additional determinants of mortality. Health variables were mainly related to the presence of disability, while socio-economic status included higher education level, labour force status or occupation. In imputing the risk of mortality, transition rates were applied as modelled by regression techniques (using hazard functions or logistic regression equations).

Mortality rates within the APPSIM, CAPP_DYN and SAGE models were not linked to their respective health modules, but functioned separately, based on population life tables and/or official projections. While the simulation of mortality within these models may implicitly account for variables such as changing health status and increased future life expectancy, there is no explicit means of modelling the heterogeneity of death rates associated with diseases or health behaviours.
The developers of APPSIM acknowledge this as a major limitation of the current modelling, one largely due to the lack of individual-level mortality data available in Australia (Lymer & Brown, 2012). The DYNASIM III model incorporated the greatest level of complexity in the modelling of mortality rates. It was able to overcome some of the limitations associated with individual-level data by applying a four-stage process in projecting mortality that relied on a combination of individual and aggregate level data, drawing on the respective strengths of each data source. In the first stage, the probability of death was estimated according to an individual's age and sex, as a function of individual fixed characteristics and some varying socio-economic attributes. In the second stage, data from the US vital statistics are used to calibrate the aggregated age-race parameters in the models and incorporate a time trend. For those receiving disability insurance, the third stage of the model assigns probabilities based on estimates from aggregate data derived from the Office of the Chief Actuary of the Social Security Administration. The final stage calibrates the expected probability of death to achieve the targets produced by the social security actuaries.
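The core of this multi-stage logic, estimating individual probabilities from a regression equation and then calibrating them to an aggregate target, can be sketched as follows. The logistic coefficients, records and target are invented, and scaling the odds by a common factor (found here by bisection) is just one common calibration device, not necessarily the exact method any particular model uses.

```python
import math

# Sketch: calibrate individual-level death probabilities so that the
# expected number of deaths matches an aggregate target (e.g. from
# official life tables). All numbers are hypothetical.

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Stage 1: individual probabilities from a (made-up) logistic equation.
people = [
    {"age": 65, "disabled": 0}, {"age": 70, "disabled": 1},
    {"age": 80, "disabled": 0}, {"age": 85, "disabled": 1},
]
probs = [logistic(-9.0 + 0.09 * p["age"] + 0.8 * p["disabled"]) for p in people]

# Stage 2: scale everyone's odds by a common factor so expected deaths
# hit the aggregate target.
target_deaths = 1.2   # hypothetical target implied by a life table

def expected(factor):
    # Expected deaths when each probability's odds are scaled by `factor`.
    return sum(p * factor / (1 - p + p * factor) for p in probs)

lo, hi = 1e-6, 1e6    # bisection on the odds multiplier (monotonic)
for _ in range(200):
    mid = (lo + hi) / 2
    if expected(mid) < target_deaths:
        lo = mid
    else:
        hi = mid

calibrated = [p * mid / (1 - p + p * mid) for p in probs]
print(round(sum(calibrated), 3))   # 1.2, matching the target
```

Odds-scaling preserves the relative ranking of individual risks from stage 1 while forcing agreement with the aggregate benchmark, which is the essential property such calibration stages rely on.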
As expected, the modelling of health is more complex in dedicated health system models, which range widely in their choice of health conditions, outcome measures and explanatory variables (Lymer, 2009). In turn, the mortality projections are also able to incorporate more complexity, and can be based on the presence of specific health conditions or individual risk factors. These models have varying aims, including modelling general health status (POHEM (Wolfson, 1994), DYNOPTASIM (Brown et al., 2011), LifeLossMOD (Carter, Schofield, & Shrestha, 2013)), specific diseases or groups of diseases (the Chronic Disease Prevention (CDP) model (Sassi, Cecchini, Lauer, & Chisholm, 2009), HealthAgeingMOD (Walker, Butler, & Colagiuri, 2011)) or the impacts of positive health behaviours (the dynamic physical activity model (Nadeau et al., 2011)) (see Table 14.4).

Perhaps the best known dedicated dynamic health microsimulation model is Canada's Population Health Model (POHEM). POHEM is a detailed model of morbidity and mortality for various diseases and is used to evaluate competing health care scenarios for specific diseases. It creates a synthetic longitudinal dataset representing the full life cycle of a birth cohort. Mortality is simulated in continuous time, whereby the time to death is simulated rather than the probability of a transition. This allows for the implementation of a competing risk framework, by which the event with the shortest time to a mortality transition is deemed to happen (Wills, Berthelot, Nobrega, Flanagan, & Evans, 2001). The mortality rates within POHEM were based on official projections by age and sex, and were then altered for individuals with specific risk factors (including smoking, the evolution of body mass index (BMI) and blood-related risk factors (total cholesterol, high density lipid count and blood pressure)) using estimates derived from the published literature.
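The continuous-time competing-risk scheme, drawing a waiting time for each cause and letting the shortest win, can be sketched as follows. The causes and hazard rates are invented, and constant hazards (hence exponential waiting times) are assumed purely for simplicity; POHEM itself works with far richer, time-varying risks.

```python
import random

# Competing-risk sketch: draw a time-to-event per cause; the event with
# the shortest waiting time happens. Hazard rates are hypothetical
# annual rates.

random.seed(42)

hazards = {"cvd_death": 0.010, "cancer_death": 0.008, "other_death": 0.005}

def next_event(hazards):
    """Draw an exponential waiting time per cause; the minimum wins."""
    draws = {cause: random.expovariate(rate) for cause, rate in hazards.items()}
    cause = min(draws, key=draws.get)
    return cause, draws[cause]

# Simulate many individuals and tabulate cause-specific shares.
counts = {cause: 0 for cause in hazards}
for _ in range(100_000):
    cause, _t = next_event(hazards)
    counts[cause] += 1

# Shares approximate rate_i / sum(rates): about 10/23, 8/23 and 5/23.
print({c: round(n / 100_000, 3) for c, n in counts.items()})
```

Simulating the time to each event, rather than applying annual transition probabilities, is what lets such models handle interacting diseases cleanly: adding a new cause only adds one more draw to the comparison.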
A similar approach was taken in HealthAgeingMOD, DYNOPTASIM and the CDP model, whereby base mortality rates were estimated by age and sex, and then adjusted to reflect the presence of specific diseases. LifeLossMOD aims to project the long-term economic impacts of premature mortality and thus takes a different approach to the modelling of mortality rates. The base population of the model consists of a complete record of Australian deaths in 2003, with variables describing age, sex, socio-economic status and cause of death. These records were then synthetically matched with individual records from the APPSIM microsimulation model containing identical descriptors of sex, age and socio-economic status in 2003. By tracking the matched individuals through the APPSIM model from 2003 to 2030, a counter-factual estimate of the economic outcomes forgone due to premature mortality is obtained. This in turn allows for estimates of the net costs or savings associated with interventions and policies to prevent disease. The final technique used to model mortality rates in the dedicated health models identified is applied in the Canadian dynamic physical
activity model. This model aims to estimate the impact of physical activity on population health outcomes. Mortality rates within the model were imputed based on data from the National Population Health Survey (NPHS), an individual-level longitudinal database. As such, a large number of mortality covariates could be applied, including age, sex, year, region, race/ethnicity, smoking status, drinking status, education, BMI, hypertension, presence of diabetes, heart disease or cancer, and level of physical activity. These rates were then calibrated against those provided by the Statistics Canada Demography Division.
14.5. Health workforce

Microsimulation is increasingly being used in purpose-built models to plan the health workforce (Table 14.5 summarises models developed since the year 2000). Increasingly, these models are profession specific and take account of the number of entrants to and exits from professions, the ageing of the population and the related changes in demand for care, and, indeed, the impacts of an ageing health workforce. In the US, the Care Span Model, which took account of the ageing of the population and the increasing prevalence of chronic disease and complex medical conditions, was developed to predict future demand for adult primary care services by specialty to the year 2025 (Dall et al., 2013). Similar models were developed for the physiotherapy (Schofield & Fletcher, 2007), medical, nursing and oncology workforces in Australia (Schofield, 2007; Schofield, Callander, Kimman, Scuteri, & Fodero, 2012), as well as in New Zealand, where a microsimulation model was developed to plan the future primary care workforce (Pearson et al., 2011).

There are now also a number of microsimulation models developed to project future demand for formal aged care as well as for informal care, the best established being the PSSRU Long-term Care Finance Model, although this model is based on aggregated data (Wittenberg et al., 2011). More recent models of aged care include a specific aged care module of APPSIM in Australia (Nepal et al., 2011) and DOPAMID, a model of demand for informal aged care in Iran and one of the few microsimulation models developed for the Middle East (Teymoori, Hansen, Franco, & Demongeot, 2010a, 2010b). As with the simulation of taxes and benefits, microsimulation is also used to assess policy impacts in relation to the health workforce. Sometimes these policies interact with other portfolios, and in some cases these interacting impacts are simulated.
For example, a US study simulated the economic cost of school closure under the Community Strategy for Pandemic Influenza Mitigation and its impact on the health care system. This study captured the impact of parents staying home to care for their children and the number of parents who would be absentee health care workers.
Table 14.5. Microsimulation models of health workforce, 2000–2013. Each entry gives: Model (country, year developed); Aims/objectives; Data sources; Sample size.

Static Models

Schofield, Shrestha and Callander (2012) (Australia, 2012). Aims/objectives: To estimate the extent of unmet demand for general practitioner services in disadvantaged areas. Data sources: Australian National Health Survey (NHS). Sample size: ∼35,000.

Howard et al. (2009) (US, 2009). Aims/objectives: Simulate the effects of the Community Strategy for Pandemic Influenza Mitigation and its impact on the health care system. Data sources: 2008 Current Population Survey (CPS). Sample size: 405,211.

Dynamic, Cohort and Projection Models

Bärnighausen and Bloom (2008) (Sub-Saharan Africa, 2008). Aims/objectives: To estimate the effect scholarships could have in increasing low numbers of health workers in sub-Saharan Africa. Data sources: World Bank International Migration and Development Program (doctor emigration rates) and published data for probabilities and other inputs. Sample size: Not available.

Care Span Model (Dall et al., 2013) (USA, 2013). Aims/objectives: Forecast adult primary care services by specialty to the year 2025. Data sources: Not stated. Sample size: Not available.

Schofield (Schofield, Callander, Kimman, Scuteri, & Fodero, 2012; Schofield & Fletcher, 2007) (Australia, 2007). Aims/objectives: Forecast shortages of physiotherapists and nurses. Data sources: Disaggregated Australian Bureau of Statistics Census data. Sample size: Disaggregated census data.

PACASO (Lay-Yee, Pearson, Davis, von Randow, & Pradhan, 2011) (New Zealand, 2011). Aims/objectives: Forecast future demand for care. Data sources: New Zealand Health Survey, National Primary Medical Care Survey and the Australian National Health Survey. Sample size: Not stated.

APPSIM (Pearson et al., 2011) (Australia, 2011). Aims/objectives: Projecting the need for formal and informal aged care in Australia. Data sources: 1% sample of the Australian Bureau of Statistics using records for persons aged 65 years and over only. Sample size: Not stated.

Dynamic Projection of Old Aged Disability in Iran: DOPAMID Microsimulation (Teymoori et al., 2010b) (Iran, 2011). Aims/objectives: Projecting the need for informal care in Iran. Data sources: Not stated. Sample size: Not stated.

Schofield, Callander, Kimman, Scuteri and Fodero (2012) (Australia, 2012). Aims/objectives: Forecast shortages or oversupply of the oncology health workforce to 2019. Data sources: Data collected from Australian universities and the relevant professional associations for radiation oncologists, radiation physicists and radiation therapists. Sample size: Profession census data.

Dynamic Projection of Old Aged Disability in Iran: DOPAMID Microsimulation (Teymoori et al., 2010a) (Iran, 2011). Aims/objectives: Projecting the need for informal care in Iran. Data sources: Not stated. Sample size: Not stated.

PSSRU Long-term Care Finance Model and CARESIM (Wittenberg et al., 2011) (UK, 2011). Aims/objectives: Projecting the need for informal care by linking two models. Data sources: 2008-based population and marital status projections; data from the 2001/2 General Household Survey; the 2005 PSSRU survey of older care home admissions; March 2009 data on residential care and home-based care; expenditure data for 2009/10 and unit costs adjusted to 2010/11 prices; 25,747 people aged 65+ living in England from the 2002/3, 2003/4 and 2004/5 UK Family Resources Survey. Sample size: Aggregate data.
Deborah Schofield, Hannah Carter and Kimberley Edwards
It also captured the extent to which this absenteeism counteracted the main benefit of school closures for infection control: alleviation of pressure on the health care system (Howard, Epstein, & Hammond, 2009). Another specific application of microsimulation to the health workforce is to address the issue of inequity of access to care, such as a recently reported model developed to determine the number of additional general practitioner services that disadvantaged individuals (those in rural areas, on low family incomes, in poor health or in low socioeconomic areas) would receive were they to have similar access to services as those who were less disadvantaged (Schofield, Shrestha, & Callander, 2012).
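The supply side of the workforce projections surveyed above can be illustrated with a stylised sketch: each year the workforce ages, a fraction exits through retirement or attrition, and new graduates enter. All age bands, rates and the simple ageing rule below are hypothetical, not parameters taken from any of the cited models.

```python
# Stylised health-workforce supply projection (all parameters illustrative).

def project_workforce(stock_by_age, entrants_per_year, exit_rate_by_age, years):
    """stock_by_age: {age_band: headcount}; returns total supply per year."""
    bands = ["<35", "35-49", "50-64"]
    stock = dict(stock_by_age)
    supply = []
    for _ in range(years):
        # Exits (retirement/attrition), applied per age band
        stock = {b: stock[b] * (1 - exit_rate_by_age[b]) for b in bands}
        # Crude ageing: a tenth of each band moves up to the next each year
        for older, younger in zip(bands[::-1][:-1], bands[::-1][1:]):
            moving = stock[younger] * 0.1
            stock[younger] -= moving
            stock[older] += moving
        # New graduates enter the youngest band
        stock["<35"] += entrants_per_year
        supply.append(sum(stock.values()))
    return supply

supply = project_workforce(
    {"<35": 400, "35-49": 350, "50-64": 250},
    entrants_per_year=60,
    exit_rate_by_age={"<35": 0.05, "35-49": 0.03, "50-64": 0.12},
    years=10,
)
```

In the models discussed above, a projection of this kind would be compared year by year against a demand projection driven by population ageing and disease prevalence to identify shortages or oversupply.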
14.6. Future developments

As health and health policy have become more common applications of microsimulation, the sophistication and breadth of purpose of these models have begun to increase. Indeed, purpose-built health models are beginning to be used as a platform to link other data and models of taxes and benefits. One such application is the recent development of Health&WealthMOD, an Australian microsimulation model which links with one or two other microsimulation models (depending on the application) to capture the economic impacts of health, and of changes in health status, across multiple government portfolios and the national economy. Health&WealthMOD has been used to estimate the impacts of health, and of interventions to improve health and the capacity to remain in the labour force, on family incomes, saving, incomes in retirement and poverty status, as well as on government tax revenues and welfare expenditure (Schofield et al., 2009, 2011). Another example of this type of cross-portfolio application is the model of the health workforce effects of proposed school closures to manage pandemics (Scott, Peter, Joyce, Schofield, & Davies, 2011).

There have also been moves towards the use of microsimulation methods to evaluate the cost-effectiveness of new interventions, generally drawing evidence of effectiveness from randomised clinical trials or other study designs. This has tended to take the form of Markov models that undertake cost-effectiveness studies using microsimulation methods. These are often hybrid models, which do not always make the same use of unit record data as typical microsimulation models, but rather rely largely on aggregate data or synthetic populations (e.g. Bärnighausen & Bloom, 2008). If the model is developed in conjunction with an intervention study, such as a clinical trial which collects individual patient data, then the model contains most of these data sources at the unit record level.
These models have a distinct advantage over aggregate Markov models in that they can simulate a virtually unlimited number of health states, limited only by the states captured in, and the number of records in, the primary individual-level data source. This potentially increases the reliability of the model as well as capturing distributional impacts (e.g. Hiligsmann, 2009). The benefits of using microsimulation for cost-effectiveness studies have been highlighted: the ability to handle large data sets and the potentially complex and very large number of permutations of conditions, side effects and treatments within a study, as well as the ability to extrapolate outcomes beyond the period of follow-up of patients during the trial (Vanness, Tosteson, Gabriel, & Melton, 2005; Weinstein, 2006). There are currently a small number of cost-effectiveness studies using microsimulation for this purpose, one example being an evaluation of the cost-effectiveness of memantine in Alzheimer's disease patients receiving donepezil (Derek et al., 2007).

The application of microsimulation modelling to cost-effectiveness studies associated with clinical trials would be a natural development of microsimulation models built to simulate the progression of disease and the related costs, such as the models used by the US National Cancer Institute (http://cisnet.cancer.gov/), the models used by NATSEM for type 2 diabetes and its control in Australian populations (http://www.natsem.canberra.edu.au/) (Brown et al., 2009), and HealthAgeingMod, which simulates obesity and progression to diabetes and cardiovascular disease (Walker et al., 2010). These models, if adapted to simulate the outcomes of clinical trials, would have the capacity to appraise the impact (clinical, social and economic) of recommended prevention strategies or treatments before expensive, time-consuming real-world implementation.
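The individual-level (first-order Monte Carlo) Markov approach discussed above can be illustrated with a minimal sketch. The states, transition probabilities, costs and utilities below are invented for illustration and do not come from any of the cited studies.

```python
# Minimal individual-level Markov microsimulation of the kind used in
# cost-effectiveness studies. All parameters are hypothetical.
import random

TRANSITIONS = {  # state -> [(next_state, annual probability), ...]
    "well": [("well", 0.90), ("sick", 0.08), ("dead", 0.02)],
    "sick": [("well", 0.10), ("sick", 0.75), ("dead", 0.15)],
    "dead": [("dead", 1.00)],
}
ANNUAL_COST = {"well": 100.0, "sick": 2500.0, "dead": 0.0}
UTILITY = {"well": 1.0, "sick": 0.6, "dead": 0.0}

def simulate_person(rng, cycles=20, discount=0.03):
    """Walk one individual through annual cycles, accumulating discounted
    costs and quality-adjusted life years (QALYs)."""
    state, cost, qalys = "well", 0.0, 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t
        cost += ANNUAL_COST[state] * d
        qalys += UTILITY[state] * d
        # Draw the next state for this individual
        u, cum = rng.random(), 0.0
        for nxt, p in TRANSITIONS[state]:
            cum += p
            if u < cum:
                state = nxt
                break
    return cost, qalys

rng = random.Random(42)
results = [simulate_person(rng) for _ in range(5000)]
mean_cost = sum(c for c, _ in results) / len(results)
mean_qalys = sum(q for _, q in results) / len(results)
```

Because each individual is simulated separately, the transition probabilities can be made to depend on any covariates held at the unit record level, which is precisely what lets these models handle far more states than an aggregate Markov cohort model, and to report distributional as well as mean results.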
This development would also be consistent with the work of Sassi and Hurst (2008), who emphasise the need to assess the impact of any proposed preventive interventions empirically before implementation, a process facilitated by the use of sophisticated tools to model the complex dynamics involved in disease aetiology.
14.7. Conclusions

From its infancy two decades ago, microsimulation applications to health, health care and health expenditure have proliferated. Starting as relatively simple models, often based on cell-based approaches without the benefits of unit record health data and the capacity for distributional analysis, and then as simple means-based imputations onto established tax-benefit models, they now represent some of the most sophisticated and innovative of microsimulation models. Microsimulation offers many benefits in its application to health because human health and individual characteristics are so diverse, with the extent of that diversity only now beginning to be understood as genetics and personalised medicine have developed. These new frontiers are likely to continue to propel the development of health applications of microsimulation for decades to come.
References

Anderson, B. (2008). Time to play: Combining time use surveys and census data to estimate small area distributions of potentially ICT mediated leisure. Paper presented at the AoIR 8, October 17, 2007, Simon Fraser University, Vancouver, BC, Canada.

Ballas, D., Clarke, G., Dorling, D., Rigby, J., & Wheeler, B. (2006). Using geographic information systems and spatial microsimulation for the analysis of health inequalities. Health Informatics Journal, 12(1), 65–79.

Ballas, D., Clarke, G. P., Dorling, D., Eyre, H., Rossiter, D., & Thomas, B. (2005). SimBritain: A spatial microsimulation approach to population dynamics. Population, Space and Place, 11(1), 13–34.

Ballas, D., Clarke, G., & Wiemers, E. (2005). Building a dynamic spatial microsimulation model for Ireland. Population, Space and Place, 11, 157–172.

Ballas, D., Rossiter, D., Thomas, B., Clarke, G., & Dorling, D. (2005). Geography matters: Simulating the local impacts of national social policies. York: The Joseph Rowntree Foundation. ISBN: 1-85935-266-9.

Bärnighausen, T., & Bloom, D. E. (2008). 'Conditional scholarships' for HIV/AIDS health workers: Educating and retaining the workforce to provide antiretroviral treatment in sub-Saharan Africa. Social Science & Medicine, 68(3), 544–551.

Birkin, M., & Clarke, G. (1988). SYNTHESIS: A synthetic spatial information system for urban and regional analyses: Methods and examples. Environment and Planning A, 20(12), 1645–1671.

Birkin, M., Turner, A., Wu, B., Townsend, P., Arshad, J., & Xu, J. (2009). MoSeS: A grid-enabled spatial decision support system. Social Science Computer Review, 27(4), 493–508. doi:10.1177/0894439309332295. Retrieved from http://ssc.sagepub.com/content/27/4/493.short. Accessed on February 19, 2014.

Brown, L., Abello, S., Phillips, B., & Harding, A. (2004). Moving towards an improved microsimulation model of the Australian pharmaceutical benefits scheme. Australian Economic Review, 31(1), 41–61.

Brown, L., Nepal, B., Booth, H., Pennec, S., Anstey, K., & Harding, A. (2009). 42nd National Conference of the Australian Association of Gerontology, Canberra, Australia. Retrieved from http://www.natsem.canberra.edu.au/publications/search-by-year/?publication=bridging-the-gap-in-meeting-clinical-targets-for-the-treatment-of-type-2-diabetes. Accessed on February 19, 2014.

Brown, L., Nepal, B., Booth, H., Pennec, S., Anstey, K., & Harding, A. (2011). Dynamic modelling of ageing and health: The DYNOPTA microsimulation model. National Centre for Social and Economic Modelling (NATSEM), Canberra, Australia.

Burke, D. S., Epstein, J. M., Cummings, D. A. T., Parker, J., Cline, K. C., Singa, R. M., & Chakravarty, S. (2006). Individual based computational modelling of smallpox epidemic control strategies. Academic Emergency Medicine, 13(11), 1142–1149.

Campbell, M. H. (2011). Exploring the social and spatial inequalities of ill health in Scotland: A spatial microsimulation approach. Ph.D. thesis, University of Sheffield. Retrieved from http://etheses.whiterose.ac.uk/1942/. Accessed on February 19, 2014.

Carter, H., Schofield, D., & Shrestha, R. (2013). LifeLossMOD: A microsimulation model of the economic burden of premature mortality in Australia (under review).

Chin, S. F., & Harding, A. (2006). Regional dimensions: Creating synthetic small area micro-data and spatial microsimulation models. NATSEM Technical Paper No. 33. University of Canberra, Canberra, Australia.

Chin, S. F., & Harding, A. (2007). SpatialMSM: NATSEM's small area household model of Australia. In A. Harding & A. Gupta (Eds.), Modelling our future: Population ageing, health and aged care. International Symposia in Economic Theory and Econometrics. Amsterdam: North Holland.

Cordova, A., Girosi, F., Nowak, S., Eibner, C., & Finegold, K. (2013). The COMPARE microsimulation model and the US Affordable Care Act. International Journal of Microsimulation, 6(3), 78–117.

Dall, T., Gallo, P., Chakrabarti, R., West, T., Semilla, A., & Storm, M. (2013). The care span: An aging population and growing disease burden will require a large and specialized health care workforce by 2025. Health Affairs, 32(11), 2013–2020.

Derek, W., Taneja, C., Edelsberg, J., Haim Erder, M., Schmitt, F., Setyawan, J., & Oster, G. (2007). Cost-effectiveness of memantine in moderate-to-severe Alzheimer's disease patients receiving donepezil. Current Medical Research and Opinion, 23, 1187–1197.

Dorling, D., Mitchell, R., & Pearce, J. (2007). The global impact of income inequality on health by age: An observational study. British Medical Journal, 335, 873–875.

Edwards, K. L., & Clarke, G. P. (2009). The design and validation of a spatial microsimulation model of obesogenic environments for children in Leeds, UK: SimObesity. Social Science and Medicine, 69(7), 1127–1134.

Edwards, K. L., Clarke, G. P., Ransley, J. K., & Cade, J. E. (2010). The neighbourhood matters: Studying exposures relevant to childhood obesity and the policy implications in Leeds, UK. Journal of Epidemiology and Community Health, 64(3), 194–201. doi:10.1136/jech.2009.088906

Edwards, K. L., Clarke, G. P., Thomas, J., & Forman, D. (2010). Internal and external validation of spatial microsimulation models: Small area estimates of adult obesity. Applied Spatial Analysis and Policy. doi:10.1007/s12061-010-9056-2

Eriksson, H., Morin, M., Jenvald, J., Gursky, E., Holm, E., & Timpka, T. (2007). Ontology based modelling of pandemic simulation scenarios. MedInfo 2007: Proceedings of the 12th World Congress on Health (Medical) Informatics; Building Sustainable Health Systems.

Eubank, S., Guclu, H., Kumar, V. S. A., Marathe, M. V., Srinivasan, A., Toroczkai, Z., & Wang, N. (2004). Modelling disease outbreaks in realistic urban social networks. Nature, 429(6988), 180–184. doi:10.1038/nature02541

Favreault, M., & Smith, K. (2004). A primer on the dynamic simulation of income model (DYNASIM3). The Urban Institute.

Fukawa, T. (2007). Health and long-term care expenditures of the elderly in Japan using a micro-simulation model. The Japanese Journal of Social Security Policy, 6(2), 199–206.

Fukawa, T. (2012). Household projection and its application to health/long-term care expenditures in Japan using INAHSIM-II. Social Science Computer Review, 29(1), 52–66.

Girosi, F., Cordova, A., Eibner, C., Roan Gresenz, C., Keeler, E., Ringel, J., … Vardavas, R. (2009). Overview of the COMPARE microsimulation model. RAND Working Paper No. WR-650.

Gray, A. M. (1982). Inequalities in health. The Black Report: A summary and comment. International Journal of Health Services, 12(3), 349–380.

Gruber, J. (2000). Microsimulation estimates of the effects of tax subsidies for insurance. Part 1. National Tax Journal, 53(3), 329–342.

Hiligsmann, M., Ethgen, O., Bruyère, O., Richy, F., Gathon, H. J., & Reginster, J. Y. (2009). Development and validation of a Markov microsimulation model for the economic evaluation of treatments in osteoporosis. Value in Health, 12(5), 687–696.

Howard, L., Epstein, J., & Hammond, R. (2009). Economic cost and health care workforce effects of school closures in the U.S. PLoS Currents, October(1), RRN1051. doi:10.1371/currents.RRN1051

Inagaki, S. (2005). Projections of the Japanese socio-economic structure using a microsimulation model (INAHSIM). IPSS Discussion Paper Series No. 2005-03. National Institute of Population and Social Security Research.

Klevmarken, A., & Lindgren, B. (Eds.). (2008). Simulating an ageing population: A microsimulation approach applied to Sweden. Bingley, UK: Emerald Group Publishing Limited.

Landt, J., Percival, R., Schofield, D., & Wilson, D. (1995, March). Income inequality in Australia: The impact of non-cash subsidies for health and housing. NATSEM Discussion Paper No. 5. Canberra.

Lay-Yee, R., Pearson, J., Davis, P., von Randow, M., & Pradhan, S. (2011). Primary care in an ageing society: Developing the PCASO microsimulation model. Technical Report. University of Auckland.

Lymer, S. (2009). APPSIM: Modelling health. National Centre for Social and Economic Modelling (NATSEM). Canberra: University of Canberra.

Lymer, S., & Brown, L. (2012). Developing a dynamic microsimulation model of the Australian health system: A means to explore impacts of obesity over the next 50 years. Epidemiology Research International, 2012, 1–13. Article 132392. Available at http://dx.doi.org/10.1155/2012/132392

Lymer, S., Brown, L., Harding, A., & Yap, M. (2009). Predicting the need for aged care services at the small area level: The CAREMOD spatial microsimulation model. International Journal of Microsimulation, 2(2), 27–42.

Lymer, S., Brown, L., Payne, A., & Harding, A. (2006). Development of 'HealthMod': A model of the use and costs of medical services in Australia. Paper presented at the 8th Nordic Seminar on Microsimulation Models, Oslo, Norway.

Lymer, S., Brown, L., Yap, M., & Harding, A. (2008). Regional disability estimates for New South Wales in 2001 using spatial microsimulation. Applied Spatial Analysis and Policy, 1(2), 99–116.

Mazzaferro, C., & Morciano, M. (2008). CAPP_DYN: A dynamic microsimulation model for the Italian social security system. CAPPaper No. 48. Centro di Analisi delle Politiche Pubbliche (CAPP).

Mohan, J., Twigg, L., Barnard, S., & Jones, K. (2005). Social capital, geography and health: A small area analysis for England. Social Science and Medicine, 60(6), 1267–1283.

Morrissey, K., Clarke, G., Ballas, D., Hynes, S., & O'Donoghue, C. (2008). Examining access to GP services in rural Ireland using microsimulation analysis. Area, 40(3), 354–364.

Morrissey, K., Clarke, G. P., & O'Donoghue, C. (2013). Linking static spatial microsimulation modelling to meso-scale models: The relationship between access to GP services and long term illness. In R. Tanton & K. L. Edwards (Eds.), Spatial microsimulation: A reference guide for users. Understanding Population Trends and Processes (Vol. 6). Dordrecht: Springer. doi:10.1007/978-94-007-4623-7_3

Morrissey, K., Hynes, S., Clarke, G. P., & O'Donoghue, C. (2010). Examining the factors associated with depression at the small area level in Ireland using spatial microsimulation techniques. Irish Geography, 45(1), 1–22.

Nadeau, C., Flanagan, W., Oderkirk, J., Rowe, G., Edge, V., Gillis, D., & Waddell, J. T. (2011). Population health model: Physical activity dynamic model. Statistics Canada.

Nepal, B., Brown, L., Kelly, S., Percival, R., Anderson, P., Hancock, R., & Ranmuthugala, G. (2011). Projecting the need for formal and informal aged care in Australia: A dynamic microsimulation approach. NATSEM Working Paper No. 11/07, Canberra, Australia.

O'Donoghue, C. (2001). Dynamic microsimulation: A methodological survey. Brazilian Electronic Journal of Economics, 4(2). Retrieved from http://econpapers.repec.org/article/bejissued/. Accessed on February 19, 2014.

Pearson, J., Davis, P., O'Sullivan, D., von Randow, M., Kerse, N., & Pradhan, S. (2011). Primary care in an aging society: Building and testing a microsimulation model for policy purposes. Social Science Computer Review, 29(1), 21–36.

Pennec, S., & Bacon, B. (2007). APPSIM: Modelling fertility and mortality. Canberra: National Centre for Social and Economic Modelling (NATSEM), University of Canberra.

Procter, K. L., Clarke, G. P., Ransley, J. K., & Cade, J. E. (2008). Micro-level analysis of childhood obesity, diet, physical activity, residential socio-economic and social capital variables: Where are the obesogenic environments in Leeds? Area, 40(3), 323–340. Retrieved from http://onlinelibrary.wiley.com/doi/10.1111/j.1475-4762.2008.00822.x/full. Accessed on February 19, 2014.

Safaei, J. (2007). Income and health inequality across Canadian provinces. Health and Place, 13, 629–638.

Sassi, F., Cecchini, M., Lauer, J., & Chisholm, D. (2009). Improving lifestyles, tackling obesity: The health and economic impact of prevention strategies. OECD Health Working Paper No. 48. OECD, Paris.

Sassi, F., & Hurst, J. (2008). The prevention of lifestyle-related chronic diseases: An economic framework. OECD Health Working Paper No. 32. Retrieved from www.oecd.org/dataoecd/57/14/40324263.pdf. Accessed on February 17, 2014.

Schofield, D. (1998a). Public expenditure on hospitals: Measuring the distributional impact. NATSEM Discussion Paper No. 36. National Centre for Social and Economic Modelling, University of Canberra, Canberra, Australia.

Schofield, D. (1998b). Re-examining the distribution of pharmaceutical benefits in Australia: Who benefits from the pharmaceutical benefits scheme? NATSEM Discussion Paper No. 36. National Centre for Social and Economic Modelling, University of Canberra, Canberra, Australia.

Schofield, D. (2007). Replacing the projected retiring baby boomer nursing cohort 2001–2026. BMC Health Services Research, 7(1), 87.

Schofield, D., Callander, E., Kimman, M., Scuteri, J., & Fodero, L. (2012). Projecting the radiation oncology workforce in Australia. Asian Pacific Journal of Cancer Prevention, 13(4), 1159–1166.

Schofield, D., & Fletcher, S. (2007). The physiotherapy workforce is ageing, becoming more masculinised, and is working longer hours: A demographic study. Australian Journal of Physiotherapy, 53(2), 121–126.

Schofield, D., Passey, M., Earnest, A., Percival, R., Kelly, S., Shrestha, R., & Fletcher, S. (2009). Health&WealthMOD: A microsimulation model of the economic impacts of diseases on older workers. International Journal of Microsimulation, 2(2), 58–63.

Schofield, D., & Rothman, G. (2007). Projections of commonwealth health expenditure in Australia's first intergenerational report. In A. Gupta & A. Harding (Eds.), Modelling our future: Population ageing, health and aged care (Vol. 16, pp. 149–168). International Symposia in Economic Theory and Econometrics. Amsterdam: Elsevier B.V.

Schofield, D., Shrestha, R., & Callander, E. (2012). Access to general practitioner services amongst underserved Australians: A microsimulation study. BMC Human Resources for Health, 10(1). Available at http://www.human-resources-health.com/content/10.1/1

Schofield, D., Shrestha, R., Percival, R., Kelly, S., Passey, M., Callander, E., & Fletcher, S. (2011). Modelling the cost of ill health in Health&WealthMOD (Version II): Lost labour force participation, income and taxation, and the impact of disease prevention. International Journal of Microsimulation, 4(3), 33–37.

Scott, A. (2003). Implementation of demographic transitions in the SAGE model. Simulating Social Policy in an Ageing Society (SAGE), Technical Note. London School of Economics, ESRC-SAGE Research Group, London.

Scott, T., Peter, S., Joyce, C., Schofield, D., & Davies, P. (2011). Alternative approaches to health workforce planning. Final report, April 2011. National Health Workforce Planning and Research Collaboration, Adelaide.

Smith, D., Clarke, G. P., Ransley, J., & Cade, J. (2006). Food access and health: A microsimulation framework for analyses. Studies in Regional Science, 35(4), 909–927.

Smith, D. M., Harland, K., & Clarke, G. (2007). SimHealth: Estimating small area populations using deterministic spatial microsimulation in Leeds and Bradford. Leeds: University of Leeds.

Smith, D., Pearce, J. R., & Harland, K. (2011). Can a deterministic spatial microsimulation model provide reliable small area estimates of health behaviours? An example of smoking prevalence in New Zealand. Health and Place, 17, 618–624.

Teymoori, F., Hansen, O., Franco, A., & Demongeot, J. (2010a). Conference on Complex, Intelligent and Software Intensive Systems (CISIS), International, Tehran, Iran.

Teymoori, F., Hansen, O., Franco, A., & Demongeot, J. (2010b). Conference on Complex, Intelligent and Software Intensive Systems (CISIS), International, Tehran, Iran.

Timpka, T., Morin, M., Jenvald, J., Eriksson, H., & Gursky, E. A. (2005). Towards a simulation environment for modelling of local influenza outbreaks. AMIA Annual Symposium Proceedings, 729–733.

Tomintz, M. N., Clarke, G. P., & Rigby, J. E. (2008). The geography of smoking in Leeds: Estimating individual smoking rates and the implications for the location of stop smoking services. Area, 40(3), 341–353.

Vanness, D., Tosteson, A., Gabriel, S., & Melton, J. (2005). The need for microsimulation to evaluate osteoporosis interventions. Osteoporosis International, 16, 353–358. doi:10.1007/s00198-004-1826-8

Walker, A., Butler, J., & Colagiuri, S. (2010). Economic model system of chronic diseases in Australia: A novel approach initially focusing on diabetes and cardiovascular disease. International Journal of Simulation and Process Modelling, 6(2), 137–151.

Walker, A., Butler, J., & Colagiuri, S. (2011). Cost-benefit model system of chronic diseases to assess and rank prevention and treatment options: HealthAgeingMod. Report No. 10. Australian Centre for Economic Research on Health, Australian National University, Canberra, Australia.

Weinstein, M. (2006). Recent developments in decision-analytic modelling for economic evaluation. Pharmacoeconomics, 24(11), 1043–1053.

Wilkinson, R. G., & Pickett, K. E. (2006). Income inequality and population health: A review and explanation of the evidence. Social Science and Medicine, 62, 1768–1784.

Will, B. P., Berthelot, J.-M., Nobrega, K. M., Flanagan, W., & Evans, W. K. (2001). Canada's Population Health Model (POHEM): A tool for performing economic evaluations of cancer control interventions. European Journal of Cancer, 37(14), 1797–1804.

Williamson, P., Birkin, M., & Rees, P. H. (1998). The estimation of population microdata by using data from small area statistics and samples of anonymised records. Environment and Planning A, 30, 785–816.

Wittenberg, R., Hu, B., Hancock, R., Morciano, M., Comas-Herrera, A., Malley, J., & King, D. (2011). Projections of demand for and costs of social care for older people in England, 2010 to 2030, under current and alternative funding systems. PSSRU Discussion Paper No. 2811. PSSRU.

Wolfson, M. C. (1994). POHEM: A framework for understanding and modelling the health of human populations. World Health Statistics Quarterly, 47, 157–176.

World Health Organisation. (2000). World health report 2000: Health systems: Improving performance. Washington, DC: WHO.

Wu, B., & Birkin, M. (2013). MoSeS: A dynamic spatial microsimulation model for demographic planning. In R. Tanton & K. L. Edwards (Eds.), Spatial microsimulation: A reference guide for users (Vol. 6). Understanding Population Trends and Processes. New York, NY: Springer. doi:10.1007/978-94-007-4623-7_3

Zayatz, T. (1999). Social security disability insurance program worker experience. Actuarial Study No. 114. Social Security Administration, Baltimore.
CHAPTER 15
Environmental Models
Stephen Hynes and Cathal O’Donoghue
15.1. Introduction

The environment has increased dramatically as a policy issue over the past four decades. Research in this area extends from global challenges such as climate change (CC), access to water and soils, ozone emissions and biodiversity loss, to issues with a smaller geographical scope such as water quality and congestion (Wong & Chandra, 2012), to the impact of the environment on health (Orcutt, Franklin, Mendelsohn, & Smith, 1977). As the extent of these issues, or at least the realisation of their extent, has increased, so too has the scope of public policy in areas such as environmental regulation, carbon taxes, emissions trading and the valuation of natural capital. In addition, strategies to increase both the market and non-market values derived from environmental and natural resources have also increased.

These policy concerns have brought with them the need for more evidence and analysis, particularly ex-ante policy analyses and impact assessments, and the use of non-market valuation methods. Microsimulation methods are highly appropriate for these types of analyses and have as a result been increasingly used in environmental and natural resource economics. The increased use has resulted both from increased demand and from the rapid advancement in computing power and technology that has facilitated the use of complex simulation procedures in applied environmental economic research.

The use of microsimulation modelling in the realm of the environment overlaps with many traditional areas of microsimulation modelling, such as the distributional incidence of public policies (Serret & Johnstone, 2006) or the impact on behaviour in relation to the incidence of these policies (Symons, Proops, & Gay, 1994).
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293014
© 2014 BY EMERALD GROUP PUBLISHING LIMITED ALL RIGHTS RESERVED
In an in-depth review of this area, Serret and Johnstone (2006) describe a variety of distributional analyses of different environmental policies. Within the environmental and natural resource economics literature, the interaction between human activity and the environment has also been shown to be strongly influenced by spatial location, and spatial microsimulation models have proven a useful tool for modelling socio-economic-environmental interactions and policies in this regard. Svoray and Benenson (2009), in a special issue of the journal Ecological Complexity focusing on environmental microsimulation models, highlight the increasing use of microsimulation models within the environmental field, emphasising the availability of spatial environmental data and models in relation to the interaction between ecological and socio-economic systems. Given the impact of the environment on agriculture and vice versa, there has also been an increased focus on the modelling of agriculture in this context, in particular in relation to environmental regulation (Hynes, Hanley, & O'Donoghue, 2009; Kruseman, Blokland, Bouma et al., 2008; Kruseman, Blokland, Luesink, & Vrolijk, 2008).

Another area where simulation has been increasingly employed within the environmental and natural resource economics literature is the quantification of the non-market value of environmental and natural resources, via the estimation of valuation models (in particular discrete choice models) and the subsequent calculation of related welfare measures for environmental goods that are not generally traded in an established market. Using these models, the welfare impact of moving from a status quo situation to a number of alternative environmental states is often simulated (Hynes, Hanley, & Scarpa, 2008).

In this chapter we review the literature on the use of microsimulation models for environmental analysis.
In Section 15.2, we explore in more detail the policy context associated with the use of microsimulation for environmental analysis. Section 15.3 analyses the uses and applications of these models. Section 15.4 considers methodological choices made in the field. Section 15.5 concludes and provides some directions for future research.
15.2. Policy context

Considering first the management of negative externalities associated with the environment, much of environmental policy attempts to adjust behaviour so that the social costs of economic activity are incorporated into the decision space of private actors; polluters will act, in terms of their product mix, technology use and production process, on the basis of their private costs and benefits and not on the basis of the costs faced by society.
In order to reduce the external cost of pollution, control will be necessary, either through regulation or through some market mechanism such as taxation. Both approaches require monitoring and administrative systems to be effective, and regulations can be designed to achieve the same level of pollution reduction as market measures. Regulations have the advantage that, if they are adhered to, environmental standards are actually achieved. However, they are not dynamically efficient in the sense that, once these standards are achieved, there is no further incentive to improve on them. In addition, regulations are statically inefficient as they make no allowance for the fact that the cost of compliance can vary across sectors of the economy, which means that the total cost to the economy may be higher when regulations are used. Market-based instruments such as taxation can, by exploiting these cost differentials, lead to lower total compliance costs. They can also induce continuous behavioural change. An optimal tax would be set so as to reduce pollution to the point where the marginal damage cost of pollution (MDC) and the marginal abatement cost (MAC) are equal. However, it is difficult to determine the value of the external costs, that is, the cost to society of pollution not taken into account by the polluter. Incentive taxes are therefore used to achieve a certain target instead. Some market-based policies can achieve a double dividend (Cramton & Kerr, 1999), both reducing the polluting activity and generating revenue that can reduce the need for revenue raising from other distorting taxation, such as taxation on labour. Barrios, Pycroft, and Saveyn (2013) analysed the marginal cost of public funds in the EU, comparing labour and green taxes. Their analysis suggested that the economic distortions provoked by labour taxes are significantly larger than those of green taxes.
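The optimality condition just described — set the tax where MDC equals MAC — can be illustrated with a stylised numerical sketch. The linear curve forms and all parameter values below are hypothetical, chosen only to make the algebra visible; they are not drawn from any study cited in this chapter:

```python
# Stylised Pigouvian tax: with a linear marginal damage cost MDC(e) = a * e
# (damage rises with emissions e) and a linear marginal abatement cost
# MAC(e) = b * (e_max - e) (abatement gets costlier as emissions shrink),
# the optimum equates the two curves.
a, b, e_max = 2.0, 3.0, 100.0    # hypothetical parameters

# Solve a * e = b * (e_max - e)  =>  e* = b * e_max / (a + b)
e_star = b * e_max / (a + b)
tax = a * e_star                  # the optimal tax equals marginal damage at e*

print(e_star, tax)                # e* = 60.0, tax = 120.0
```

At the optimum the tax per unit of emissions equals the marginal damage, so no polluter with an abatement cost below the tax has an incentive to keep polluting — the static efficiency property the text attributes to market-based instruments.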
The authors conclude that this result implies that a green-tax-oriented fiscal consolidation would be preferred to a labour-tax-oriented one (assuming that both tax increases would yield the same tax revenues). However, it should be noted that there are a number of disadvantages in using taxation to regulate the environment (Pearce, 1991; Pearson & Smith, 1991). Short-run energy elasticities are often lower than long-run elasticities due to the time taken to switch to new technologies, which may slow down the achievement of targets. Simple environmental taxes may also not be appropriate where pollution is concentrated over time or in a certain location; more complicated measures or regulation would be more effective here. In addition to efficiency issues, environmental taxation has important distributional implications: taxes placed on essential goods such as domestic fuels will hit those at the bottom of the income distribution hardest. However, fiscal measures such as increased transfer payments can be introduced to reduce the distributional impact (see Symons et al., 1994).
Another problem with charging polluters is measuring how much they pollute. It is impractical to measure how much greenhouse gas is emitted by each pollution source, as this would require the placing of measuring devices on every car exhaust and every chimney. Instead, a tax can be levied at source. Carbon dioxide emissions are related to the volume of fuel used, which means that emissions can easily be taxed by levying a tax proportional to the carbon component of the fuel. An environmental tax has an income effect, raising the price of energy, and also a substitution effect, shifting expenditure away from fuels with a high carbon component, such as coal or peat, and towards fuels with lower carbon components, such as natural gas. From a policy perspective, understanding the value (market or non-market) of a resource can be influential in terms of the weight public policy places on a policy intervention. Environmental valuation can be considered a set of analytical tools designed to measure the net contribution that an environmental public policy makes to the economic wellbeing of members of society (Freeman, 2003). It includes the rules and procedures used to conduct a technical economic calculation to reduce all costs and beneficial effects that each policy alternative has on individuals to a common monetary measure. The primary purpose of undertaking an assessment or comparison of these costs and benefits is to determine whether a proposed intervention is worthwhile in economic terms. This is known generally as cost-benefit analysis. Indeed, according to Haab and McConnell (2002) the practice of measuring benefits and costs in monetary terms extends back at least five decades, while Hanemann (1992) claims that the use of cost-benefit analysis as a practical tool of government decision making in the United States can be traced back to the start of the 19th century.
Economic valuation is therefore generally concerned with estimating economic values for use in policy or management decisions, and when these values relate to environmental resources the process is known as environmental valuation. In fact, Hanemann (1992) states that 'environmental valuation grew out of cost-benefit analysis'. The idea that many environmental goods and services have economic value, though they are not traded in a market, introduces two crucially important issues, however. The first concerns how to conceptualise these values in a theoretical sense, while the second relates to how these values can then be measured empirically. A large literature has grown around these two issues, and good reviews of these and other important concerns in this area include Portney (1994), Scarpa and Alberini (2005) and Hanley and Barbier (2009).

15.3. Uses and applications

Environmental policy measures, including environmental regulations, taxes and emission trading schemes, have been proposed to reduce
pollution. In this section we focus on environmental taxation, as such taxes are the most amenable to simulation using a microsimulation model, which allows both the behavioural response to the policy and its distributional impact to be measured.
15.3.1. Distributional incidence analysis of environmental policy

Taxes on polluting goods can be similar to the indirect taxes described in Chapter 6, such as VAT or excise duties on fuels. However, pure environmental taxes are slightly different, as they tend to be levied not in proportion to value or volume but in relation to the amount of pollution that is produced. There is a relatively extensive literature on modelling the distributive impact of environmental taxes. O'Donoghue (1997) and Callan, Lyons, Scott, Tol, and Verde (2009) have analysed the distributional implications of carbon taxes in Ireland; Hamilton and Cameron (1994) in Canada; Labandeira and Labeaga (1999) and Labandeira, Labeaga, and Rodríguez (2007) in Spain; Bureau (2010) in France; Casler and Rafiqui (1993) in the United States; Symons et al. (1994) in the United Kingdom; Yusuf and Resosudarmo (2007) and Yusuf (2008) in Indonesia; Kerkhof, Moll, Drissen, and Wilting (2008) in the Netherlands; Bach, Kohlhaas, Meyer, Praetorius, and Welsch (2002) and Bork (2006) in Germany; Poltimäe and Võrk (2009) in Estonia; and Newbery, O'Donoghue, Pratten, and Santos (2002), Cornwell and Creedy (1996) and Buddelmeyer, Hérault, Kalb, and van Zijll de Jong (2012) in Australia. Microsimulation analyses have also been used to undertake distributional assessments of other environmental policies such as tradable emissions permits (Wadud, Noland, & Graham, 2008), taxes on methane emissions from cattle (Hynes, Morrissey, O'Donoghue, & Clarke, 2009) and taxes on nitrogen emissions (Berntsen, Petersen, Jacobsen, Olesen, & Hutchings, 2003). Doole, Marsh, and Ramilan (2013) examined the distributional impact of a cap-and-trade strategy on dairy farms. Elsewhere, Cervigni, Dvorak, and Rogers (2013) have analysed the distributional impact of wider low-carbon economic development policies.
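The core calculation in the incidence studies above can be sketched on hypothetical household micro-data: compute the carbon tax each household would pay and express it as a share of income. The incomes, fuel budgets, pre-tax price and emission factor below are all assumed for illustration; none come from the cited models:

```python
# Stylised static incidence calculation of a carbon tax on hypothetical
# household micro-data: tax paid as a share of disposable income.
households = [
    (15_000, 1_800),   # (disposable income, annual fuel expenditure)
    (25_000, 2_000),
    (40_000, 2_300),
    (70_000, 2_600),
]
PRICE_PER_LITRE = 1.50    # assumed pre-tax fuel price
KG_CO2_PER_LITRE = 2.4    # assumed emission factor
TAX_PER_TONNE = 50.0      # simulated carbon tax

for income, fuel_spend in households:
    litres = fuel_spend / PRICE_PER_LITRE            # value -> volume
    tonnes_co2 = litres * KG_CO2_PER_LITRE / 1000    # volume -> emissions
    tax_paid = tonnes_co2 * TAX_PER_TONNE
    burden = 100 * tax_paid / income
    print(f"income {income}: tax {tax_paid:.0f}, {burden:.2f}% of income")
```

Because fuel spending rises much more slowly than income in this (assumed) data, the burden falls as income rises — the regressive pattern that studies of domestic fuel taxes typically report.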
Other types of distributional impact analysis include the simulation of 'what if' scenarios, such as the impact on emissions if consumption patterns are changed. For example, Alfredsson (2004) utilised a microsimulation model to undertake a life cycle analysis of alternative 'greener' consumption patterns, incorporating the energy use and CO2 emissions connected with the whole production process up to the point of purchase, and found the impact to be marginal. While much of the literature has focused upon the design and effectiveness of public policies to mitigate pressures on global warming and climate change (CC), a small literature has developed considering the distributional incidence of potential changes in climate. These studies relate both to the impact of
weather-related change on outcomes such as food security (Breisinger et al., 2011; Bussolo, de Hoyos, Medvedev, & van der Mensbrugghe, 2008; Kyophilavong & Takamatsu, 2011) and to the impact of developed country CC policies on developing countries (Boccanfuso, Savard, & Estache, 2013).
15.3.2. Spatial incidence environmental models

The interaction between human activity and the environment is strongly influenced by spatial location, and spatial microsimulation models can be useful for modelling socio-economic-environmental interactions and policies. Svoray and Benenson (2009), in a special issue of the journal Ecological Complexity focusing on environmental microsimulation models, highlight the increasing use of microsimulation models within the environmental field. The special issue emphasised the availability of spatial environmental data and models in relation to the interaction between ecological and socio-economic systems. One of the consequences of CC is an increased incidence of extreme weather events and their consequential impact on the locations in which the events occur. Microsimulation models are increasingly being used for disaster simulation (Barton, Eidson, Schoenwald, Stamber, & Reinert, 2000; Sander et al., 2009). For example, Brouwers (2005) developed a spatial microsimulation model to consider the spatial impact of flooding-related disasters (Brouwers & Linnerooth-Bayer, 2003). In an alternative application, Potter, Atwood, Lemunyon, and Kellogg (2009) developed an environmentally focused model, linking scientific data with farm management data, that was used to model carbon sequestration from cropland in the United States. Elsewhere, by combining both policy incidence and spatial analysis, Leahy, Lyons, Morgenroth, and Tol (2009) modelled the spatial incidence of a carbon tax.

15.3.3. Agriculture and the environment

With regard to agriculture, Hynes, Morrissey et al. (2009) and Hynes, Farrelly, Murphy, and O'Donoghue (2013) developed a model of spatial farm incomes as part of the SMILE model.
Lindgren and Elmquist (2005) linked natural sciences and economics in their Systems AnaLysis for Sustainable Agricultural production (SALSA) model to evaluate the economic and environmental impact of alternative farm management practices on a site specific arable farm in Sweden. While much of the literature focuses on issues associated with relatively intensive agriculture in developed countries, Smith and Strauss (1986) utilised the spatial microsimulation methodology in a subsistence environment in Sierra Leone. Given the potential impact of agricultural practices on water, a number of microsimulation models have also been developed to model at farm
level processes that can impact emissions. For example, Dijk, Leneman, and van der Veen (1996) developed a microsimulation model to analyse nutrient flows in Dutch agriculture. Ramilan, Scrimgeour, Levy, and Romera (2007) and Ramilan, Scrimgeour, and Marsh (2011) modelled the environmental and economic efficiency of dairy farms using a farm population microsimulation model. Agriculture also makes a large contribution to greenhouse gas (GHG) emissions, and farm-based microsimulation models have been used to simulate GHG emissions (Hynes, Morrissey, & O'Donoghue, 2013; Hynes, Morrissey et al., 2009). Also, Lal and Follett (2009) applied the methodology to model carbon sequestration in the soil on cropland. Renewable energy has been seen as a potential mitigation strategy in relation to GHG emissions; in this regard, Clancy et al. (2012) utilised a microsimulation model in Ireland to assess the optimal spatial locations for the growth of willow and miscanthus for biomass production. In terms of risk management choice under CC, Kimura, Anton, and Cattaneo (2012) applied a microsimulation model to simulate the effects on the Saskatchewan crop sector. A variant of the Agri-SMILE model (Hynes, Hanley et al., 2009) focuses on recreational activity in forests within a single city (Cullinan, Hynes, & O'Donoghue, 2008). Also with a small-area focus (a number of municipalities), van Leeuwen, Dekkers, and Rietveld (2008) have developed a model exploring the linkages between on- and off-farm employment, which is becoming an increasing part of farmers' incomes in the EU.
In terms of biodiversity-related issues, microsimulation models have also been used to look at a range of topics including wildlife-recreation interaction (Bennett et al., 2009), the non-market value of wild bird conservation (Hynes, Hanley, & O'Donoghue, 2010) and participation in agri-environmental protection schemes (Hynes, Farrelly, Murphy, & O'Donoghue, 2008; Hynes, Farrelly, Murphy, & O'Donoghue, 2013; Hynes & Garvey, 2009; Kelley, Rensburg, & Yadav, 2013).

15.3.4. Resource demand

Continuing the global warming and energy consumption theme, Perez, Vautey, and Kämpf (2012) developed a microsimulation model of energy usage and efficiency in an urban setting and use the framework to assess the energy efficiency improvements of alternative investments. Chingcuanco and Miller (2012) similarly model energy demand in an urban setting. Water is another limited natural resource. Mitchell (1999) developed a model that integrates microsimulation and econometric models to produce small-area forecasts of water demand. Clarke, Kashti, McDonald, and Williamson (1997), Williamson (2001) and Williamson, Mitchell, and McDonald (2002) developed models of small-area water demand and water demand forecasting. Peters, Brassel, and Spörri (2002) model
scenarios relating to wastewater technology in relation to human waste within a Swiss region. Petersen, Gernaey, Henze, and Vanrolleghem (2002) modelled a municipal-industrial wastewater treatment plant.
15.3.5. Transport and land use

Given the importance of GHG emissions from transport, there is an extensive environment-related literature within the field of transportation economics. Microsimulation modelling has long been a key tool in this field for undertaking ex-ante analyses of alternative transport choice options (Lee & Miller, 2001; Mavoa, 2007; Zhu & Ferreira, 2012). Congestion has both efficiency and environmental implications via the use of additional fuel. Policy interventions such as road tolling (Martinez, 2008) can be utilised to manage transportation systems to improve both congestion and environmental emissions. Gomes, May, and Horowitz (2004) developed a model of a motorway containing alternative congestion-reducing options such as high-occupancy vehicle lanes and metered on-ramps. Elsewhere, Stevanovic, Stevanovic, and Kergaye (2012) used a microsimulation model to consider the impact of an adaptive traffic control system, finding a moderate environmental benefit accompanied by improved travel times. Huang, Bird, and Bell (2009), looking at emissions using a life cycle assessment method in a microsimulation framework, found that road maintenance works can significantly add to emissions. From a transport-related energy efficiency perspective, de Haan, Mueller, and Scholz (2009) apply a simulation perspective to consumer demand for alternative energy-efficient cars, while Kazimi (1997a, 1997b) evaluates the environmental impact of alternative-fuel vehicles. The pattern of land use is a complementary area of study, having an impact on transportation flows and the associated carbon footprint, but also on the environmental impact of land use change (Hooimeijer, 1996; Moeckel, Spiekermann, Schürmann, & Wegener, 2003).
Model frameworks like CitySim (Robinson et al., 2009) have been developed to model characteristics at the micro level that impact on environmental sustainability, such as energy use, transportation and waste management, while Tirumalachetty, Kockelman, and Nichols (2013) used a microsimulation model to estimate household and firm GHG emissions resulting from land use and transport patterns in Austin, Texas. Similarly, Noth, Borning, and Waddell (2003) developed UrbanSim to simulate, over periods of time, the development of urban areas, including land use, transportation and environmental impacts.

15.3.6. Non-market valuation studies

Much of the literature thus far has focused on either modelling the distributional incidence of environmental impact or the incidence or behavioural impact of environmental policy. Another area where micro-based
simulation methods have been applied in relation to the environment is the estimation of valuation models (and in particular discrete choice models) and the subsequent calculation of related welfare measures for environmental goods that are not generally traded in an established market. In particular, simulation has been utilised with estimates of the non-market value of environmental benefits in the field of benefit transfer (BT) (Johnston & Rosenberger, 2010). While much of the literature focuses on the macro level, BT is being utilised at the micro level to account for individual preference heterogeneity (Cullinan, 2011). The BT method, which is based on the estimation of the value of an environmental good at a target 'policy' site using an analysis undertaken for another 'study' site, is gaining interest in the implementation of policies such as the EU Water Framework Directive (Hanley, Colombo, Tinch, Black, & Aftab, 2006; Norton et al., 2012) and in relation to health risks associated with water quality (Kask & Shogren, 1994), air quality (Rozan, 2004) and forest management (Bateman, Day, Georgiou, & Lake, 2006). Research in this area has also focused on the development of models that use simulated maximum likelihood techniques and techniques that ensure that measures of willingness to pay and other relevant welfare statistics are 'behaviourally plausible in the presence of unobserved heterogeneity' (Alberini & Scarpa, 2005). Indeed, Scarpa and Alberini (2005) brought together a number of empirical research papers in environmental valuation that made use of microsimulation methods; the result was a volume entitled 'Applications of Simulation Methods in Environmental and Resource Economics'.

15.4. Methodological characteristics and choices

In this section we discuss the methodological characteristics and choices faced by builders and users of environmental microsimulation models. These choices are quite broad, given the breadth of issues and policy foci.
In essence, the same methodologies and choices span much of the microsimulation literature, as most types of microsimulation models, bar perhaps dynamic microsimulation models, are also utilised in environmental microsimulation modelling. In this section we therefore focus on a limited number of modelling choices particularly relevant to environmental analysis, leaving readers to review other more standard methodologies elsewhere in the Handbook.

15.4.1. Distributional incidence of environmental outcomes or policy

Much of the environmental policy related microsimulation modelling focuses on simulating the incidence of an environmental externality or related policy, or the behavioural response to an environmental policy.
As a result, there is significant overlap with the wider microsimulation literature: for example, with the distributional-incidence-focused models of the static microsimulation literature and with models of consumer behavioural response, both of which feature in the indirect tax and consumer behaviour literature. There is, however, a dichotomy between models that incorporate behavioural response, so that the emission response to a change in policy can be measured (Symons et al., 1994), and those that focus solely on the static distributional incidence without behavioural response (Casler & Rafiqui, 1993). Amongst the models incorporating behavioural response, modelling choices vary from statistically estimating demand systems (Symons et al., 1994) to utilising an imputation based upon a Frisch parameter and the Linear Expenditure System (Cornwell & Creedy, 1996), 'borrowing' parameters from elsewhere (O'Donoghue, 1997) or utilising scenario-based elasticities (Newbery et al., 2002).

15.4.2. Modelling pollution and environmental pollution

The main additional component used by pollution-focused environmental microsimulation models is the modelling of the environmental pollution itself. As these models are based on micro-data where the polluting behaviour is often expressed in value terms, for example the amount of fuel or energy consumed, the first task, where the data is not present, is to generate volumes of use (O'Donoghue, 1997). The next step is to model the amount of pollution generated by specific volumes. Much of the literature focuses on GHG emissions, which are typically modelled using carbon accounting parameters (Chakraborty, Bhattacharya, & Li, 2006; McQuinn & Binfield, 2002). Although much emissions-reduction policy has focused on carbon and greenhouse gases, Kruseman, Blokland, Bouma et al. (2008) and Kruseman, Blokland, Luesink et al.
(2008) developed the MAMBO model of livestock and agriculture in the Netherlands to model the impact of tightening environmental policy on phosphate emissions, while Villot (1998) modelled a tax related to sulphur emissions and Kruseman, Blokland, Bouma et al. (2008) focused on ammonia emissions. In addition to modelling the distributional cost or incidence of behavioural change, a strand of the literature is interested in the marginal cost of abatement, and in particular in modelling the marginal cost of different mitigating measures. MAC curves are frequently created for whole economies or for sectors; however, in order to understand incentive structures and induce behavioural change, one needs to know the structure of these curves at the level of the decision maker. Doole (2012), for example, modelled nitrogen MAC curves at farm level, while Dieckhoener and Hecking (2012) modelled greenhouse gas abatement cost curves in relation to residential heating markets.
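The two-step procedure described in this section — impute volumes of fuel use from expenditure, then apply carbon accounting parameters — can be sketched as follows. The fuel prices and emission factors are illustrative assumptions, not official coefficients:

```python
# Step 1: convert recorded household expenditure on each fuel into volumes
# using assumed unit prices; Step 2: apply per-unit carbon accounting
# parameters. All prices and emission factors here are illustrative.
PRICES = {"coal": 0.40, "natural_gas": 0.08, "peat": 0.30}            # per unit
EMISSION_FACTORS = {"coal": 2.42, "natural_gas": 0.20, "peat": 1.15}  # kg CO2 per unit

def household_co2(expenditure):
    """Total kg of CO2 implied by a household's fuel expenditure (by fuel)."""
    total = 0.0
    for fuel, spend in expenditure.items():
        volume = spend / PRICES[fuel]             # step 1: value -> volume
        total += volume * EMISSION_FACTORS[fuel]  # step 2: volume -> emissions
    return total

print(household_co2({"coal": 200.0, "natural_gas": 400.0}))
```

Substituting spending from coal towards natural gas in the dictionary immediately lowers the total, which is the substitution effect a carbon tax is designed to exploit.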
15.4.3. Macro-micro model linkages

Given the multi-sectoral nature of pollution incidence and interactions, much of the literature has focused on macro-micro interactions. The most common macro-micro linkage is that of an input-output model to a microsimulation model. While household-level expenditure data in relation to, for example, fuels and energy can provide distributional estimates of direct pollution generation at the household level, the indirect effect related to the consumption of other goods and services requires economy-wide data. The input-output methodology due to Leontief (1951) is appropriate here. Analyses of this kind conducted internationally include O'Donoghue (1997) for Ireland, Gay and Proops (1993) in the United Kingdom and Casler and Rafiqui (1993) in the United States. An input-output table contains information about the sectors of an economy, mapping the flows of inputs from one sector to another or to final demand (that consumed by households or exported, etc.). Output in each sector has two possible uses: it can be used for final demand or as an intermediate input for other sectors. In an n-sector economy, final demand for sector i's production is denoted by d_i and the output of sector i by x_i. Intermediate input from sector i into sector j is defined as a_ij x_j, where the input coefficients a_ij are fixed in value. In other words, a_ij is the quantity of commodity i that is required as an input to produce a unit of output j. Output can therefore be seen as a sum of intermediate inputs and final demand, as follows:

x_i = Σ_j a_ij x_j + d_i    (15.1)

or in matrix terminology:

x = A·x + d    (15.2)

Rearranging to form the (I − A) technology matrix and inverting it, the Leontief inverse matrix (I − A)^(−1) is produced, which gives the direct and indirect inter-industry requirements for the economy:

x = (I − A)^(−1)·d    (15.3)

Eq. (15.3) can be expanded to produce the following:

x = (I + A + A² + A³ + ⋯)·d    (15.4)

As A is a non-negative matrix with all elements less than 1, A^m approaches the null matrix as m gets larger, so a truncated series gives a good approximation to the inverse matrix. Eq. (15.4) thus expands output per sector into its components: final demand d; Ad, the inputs needed to produce d; A²d, the inputs required to produce Ad; and so on, giving the number of units of each output used in the production of a unit of final demand for each good. If a tax t is applied and is passed on in its entirety to consumers, then the tax on goods consumed in final demand is td, the tax on the inputs to these goods is tAd, the tax on the inputs to these is tA²d and so on. Combining, the total tax is

t·(I + A + A² + A³ + ⋯)·d = t·(I − A)^(−1)·d    (15.5)
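A minimal numerical sketch of Eqs. (15.1)–(15.5), using an assumed three-sector coefficient matrix (the numbers are illustrative, not drawn from any published input-output table):

```python
import numpy as np

# Hypothetical 3-sector input coefficient matrix A: A[i, j] is the amount of
# sector i's output needed to produce one unit of sector j's output.
A = np.array([
    [0.10, 0.30, 0.05],
    [0.20, 0.05, 0.10],
    [0.05, 0.15, 0.20],
])
d = np.array([100.0, 50.0, 80.0])    # final demand by sector

# Gross output via the Leontief inverse, Eq. (15.3): x = (I - A)^-1 d.
L = np.linalg.inv(np.eye(3) - A)
x = L @ d

# The series I + A + A^2 + ... of Eq. (15.4) converges to the same inverse,
# since all of A's column sums are below 1 here.
series = sum(np.linalg.matrix_power(A, m) for m in range(50))

# A uniform tax t fully passed on to consumers: total tax is t (I - A)^-1 d
# summed over sectors, Eq. (15.5).
t = 0.05
total_tax = t * x.sum()
```

The gap between `total_tax` and the tax on final demand alone, `t * d.sum()`, is exactly the indirect component that household expenditure data by itself would miss.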
Input-output models assume that prices are passed directly on to the consumer without any behavioural change at firm level. However, environmental taxes are also likely to impact the prices of other goods consumed, and so they are likely to have general equilibrium effects, as firms will also change their production structure. It may thus be suitable to conduct the analysis via a Computable General Equilibrium (CGE) model (Boccanfuso, Estache, & Savard, 2011; Yusuf & Resosudarmo, 2007). Boccanfuso et al. (2011) carried out a survey of distributional impact analyses of environmental policies, with emphasis on taxes envisaged or implemented to reduce GHG emissions, and concluded that the CGE microsimulation approach has not been fully exploited in the analysis of such impacts. In more recent work, Boccanfuso et al. (2013) present a distributional impact analysis of CC policies envisaged or implemented to reduce GHG emissions in Senegal, in which the authors simulate the diminishing productivity of agricultural land as a potential result of CC. Their results show slight increases in poverty when the world price of fossil fuels increases, with the negative impact further amplified by decreases in land productivity. However, their research also reveals that subsidising electricity consumption to protect consumers from world price increases in fossil fuels provides only a weak cushion against poverty increases. Elsewhere, Buddelmeyer et al. (2012) linked a dynamic CGE model and a microsimulation model, while Jochem (2009) linked a macroeconomic model of the energy and transport sectors to a microsimulation model in order to capture individual-level responses to a carbon trading scheme.
15.4.4. Unit of analysis

While much of the pollution distributional incidence modelling focuses on the household as the unit of analysis (Serret & Johnstone, 2006), and there is increasing interest at the farm level (Berntsen et al., 2003), there is a relatively limited literature at the level of the firm (Oltra & Jean, 2005; Zhao, Liu, & Chen, 2009). Given the relatively high proportion of the economy accounted for by the industrial and service sectors and their consequential
impact on the environment and the importance of environmental policy and regulation, this is a relatively clear gap in the literature.
15.4.5. Scope and spatial disaggregation

Another decision in building an environmental microsimulation model concerns the spatial dimensions of the model. These can be divided into the spatial scope and the spatial disaggregation of the model. The spatial scope may vary from a village (Kuiper & Ruben, 2006) or farm (Lindgren & Elmquist, 2005) to a city (Ballas & Clarke, 2001), a country (Harding, Warren, & Lloyd, 2006) or a continent, as in the case of the EU comparative tax-benefit microsimulation model EUROMOD (Immervoll, O'Donoghue, & Sutherland, 1999). The relevant scope will depend upon the user and use of the model and may be constrained by the complexity associated with the methodology. Spatial disaggregation refers to the nature of the spatial levels within the model. This can vary, in decreasing size, from country in EUROMOD, to region in an interregional analysis (Lloyd, 2014), to spatial districts (Ballas & Clarke, 2001; Chin et al., 2005), to spatial coordinates (Cullinan et al., 2008). The greater the degree of spatial disaggregation, the greater the degree of inter-area analysis that may be conducted (Minot & Baulch, 2005). To date, the literature indicates a divide in the spatial scope of models across different countries. Those coming from a social policy origin, such as the NATSEM models in Australia or the SMILE model in Ireland, have a national scope, reflecting their (primarily) national government stakeholders. In contrast, the University of Leeds-based group has focused on city-based models, reflecting the history of planning and their local government stakeholders. In general, the spatial disaggregation has tended to be at district or zone level; for planning purposes, there is little added value, and very little data availability, in going to a building-level analysis. A similar divide in spatial scope exists with regard to policy applications.
Models of retail, transport and regional development tend to utilise a city focus. In contrast, some agri-environmental models utilise a national-level unit of analysis (Hynes, Hanley et al., 2009; Hynes, Morrissey et al., 2009). Indeed, research has indicated that the choice of spatial resolution or level of disaggregation can be important. Benenson (2007) showed that using a finer resolution than the 'true' resolution in environmental analysis can result in different conclusions. For example, where the impact of pollutants is non-linearly related to concentration, particularly spatial concentration, a lower resolution may understate the significance. Sometimes the scope and resolution are very small, where one is interested in site-specific phenomena such as the modelling of the impact of walking trails on biodiversity in parkland (Bennett et al., 2009). Finally, whilst a division has been observed, it is important to note that it may exist because the desired spatial disaggregation is not available
Downloaded by University of Newcastle At 01:20 11 December 2016 (PT)
462
Stephen Hynes and Cathal O’Donoghue
within the data. For example, Felsenstein, Axhausen, and Waddell (2010), in using UrbanSim to create a grid-based model for Tel Aviv, only had zonal data and assumed a uniform distribution across each zone to populate the grids. While this may approximate reality in an urban area, it does not in rural analyses, as identified in Cullinan et al. (2008).

Issues of scope and spatial disaggregation are also very relevant with regard to the valuation of environmental goods and services. Bateman et al. (2006) point out that the extent of the market for an environmental good may be a more significant determinant of the aggregate value placed on it than the average value that an individual holds for a change in that good or service. The authors also point to the 'distance decay' effect: the further the population is from where the environmental benefit occurs, the lower their willingness to pay (WTP) will be. The authors also showed that the 'economic jurisdiction' (an area that incorporates all those who gain economic value from a project) is often smaller than the political jurisdiction (the area within some administrative boundary), the latter often being used in aggregating environmental value estimates. As a result of these issues, the BT approach to estimating aggregate economic values (usually for an unsurveyed recreation site or amenity) has become a popular choice, using both spatial microsimulation and geographical information system (GIS) techniques. In BT, the functionality of a GIS allows comparable measures of travel distance and time, substitute availability, the socio-economic characteristics of the population of likely users, as well as other spatial data to be estimated for both study and policy (target) sites.
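The distance-decay and jurisdiction points above can be illustrated with a minimal sketch. All parameters, zone distances and household counts are hypothetical, not estimates from the studies cited:

```python
import math

# Hypothetical distance-decay WTP function: wtp(d) = A * exp(-B * d),
# with A = mean on-site WTP (EUR/household) and B a decay rate per km.
A, B = 40.0, 0.05

def wtp(dist_km):
    return A * math.exp(-B * dist_km)

# Population zones: (distance to the site in km, number of households).
zones = [(2, 1000), (10, 5000), (25, 8000), (60, 12000), (120, 20000)]

# Aggregate over the 'economic jurisdiction': only zones where WTP
# remains non-negligible (here, above 1 EUR/household).
econ_total = sum(n * wtp(d) for d, n in zones if wtp(d) > 1.0)

# Aggregate over the 'political jurisdiction': every zone, each assigned
# the unadjusted mean on-site WTP -- the naive approach criticised above.
naive_total = sum(n * A for d, n in zones)

print(round(econ_total), round(naive_total))
```

With these illustrative numbers the naive political-jurisdiction total is several times the economic-jurisdiction total, which is the aggregation bias Bateman et al. (2006) warn about.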
Thus, according to Bateman, Jones, Lovett, Lake, and Day (2002), 'in this way GIS techniques provide the ideal medium for conducting benefit [environmental value] transfer exercises'. In fact, a number of previous BT studies have utilised a GIS; examples include Wilson and Liu (2008), Bateman et al. (2004) and Hynes, Norton, and Hanley (2013). This technique is reviewed in greater detail in the next section.

15.4.6. Environmental valuation

As mentioned previously, a number of methods have been developed to estimate the economic value of environmental resources. Among the most widely used are stated preference techniques. These derive estimates of consumer surplus via simulated markets through which individuals are asked to express their WTP for environmental goods and services, such as a change in water quality at a popular bathing spot or the conservation of a wetland. The stated preference technique known as the contingent valuation method (CVM) can be used to directly seek information from survey respondents regarding their maximum WTP (or minimum compensation demanded) for a change in environmental quality or for some specified change in a recreation experience,
Environmental Models
463
all within the confines of a simulated market. In CVM, the level of, or change in, environmental quality under consideration is usually determined by a third party (e.g. through policy or management changes), as opposed to the individual being surveyed, and thus the estimated welfare measures generally correspond to either Hicksian equivalent surplus or compensating surplus. Examples of the use of CVM to estimate the value of a simulated change in environmental goods include Choe et al. (1996) for the valuation of water quality improvements, Nunes (2002) for the estimation of the value of a simulated change to natural parks and Hynes et al. (2010) for the estimation of the value that the public attribute to the conservation of a rare bird species.

Stated preference techniques such as CVM have, however, been criticised with respect to the reliability (or consistency) of statements of WTP (Hanley, Schläpfer, & Spurgeon, 2003; Loomis & Walsh, 1997) as well as the validity (or accuracy) of the estimates (Smith, 1993). Problems relate to part-whole bias, embedding effects and the sensitivity of estimates to the elicitation format used. Indeed, Kahneman and Knetsch (1992) claim that contingent valuation reflects the WTP for the moral satisfaction of contributing to public goods, not their economic value. Although CVM is the most widely used stated preference technique in the literature, a variant of the approach is the discrete choice experiment (CE). CEs involve the generation and analysis of choice data through the construction of a simulated market using a survey. They consist of a number of choice sets, each containing a set of hypothetical alternatives between which respondents are asked to choose their preferred one. Alternatives are defined by a set of attributes, each attribute taking one or more levels. Individuals' choices imply implicit trade-offs between the levels of the attributes in the different alternatives included in a choice set.
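In a CE, marginal WTP for an attribute is typically recovered as the negative ratio of the attribute coefficient to the cost coefficient, and welfare measures follow from the change in the expected maximum utility (the 'log-sum'). A minimal sketch, with purely illustrative conditional-logit coefficients and attribute levels not drawn from any study cited here:

```python
import math

# Hypothetical conditional-logit coefficients: V = b_qual*quality + b_cost*cost.
b_qual, b_cost = 0.8, -0.04

# Marginal WTP for one unit of quality: negative ratio of coefficients.
mwtp = -b_qual / b_cost

def logsum(alts):
    """Expected maximum utility over a choice set of (quality, cost) pairs."""
    return math.log(sum(math.exp(b_qual * q + b_cost * c) for q, c in alts))

baseline = [(2, 0.0), (3, 10.0)]   # (quality, cost) of each alternative
policy = [(4, 0.0), (3, 10.0)]     # first alternative's quality improved

# Compensating variation for the simulated policy change: the change in
# the log-sum divided by the marginal utility of income (-b_cost).
cv = (logsum(policy) - logsum(baseline)) / (-b_cost)
print(mwtp, round(cv, 2))
```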
When the cost or price of the environmental policy or good is included as an attribute, the researcher can convert marginal utility estimates into WTP estimates for changes in the attribute levels, and welfare measures may also be estimated for simulated changes to the policy or environmental good under consideration. Examples of the use of CEs for the valuation of simulated changes to environmental policies include Hynes, Norton et al. (2013) for assessing the value of benefits due to changes to the EU Bathing Water Directive, Christie, Hanley, and Hynes (2007) for forestry amenity provision and Birol, Karousakis, and Koundouri (2006) for assessing preference heterogeneity towards wetland attributes in Greece.

15.4.7. Benefit transfer

An alternative to the primary non-market valuation methods, such as revealed (e.g. travel cost and hedonic valuation methods) and stated (e.g. contingent valuation and CEs) preference approaches, is BT. Each
primary economic valuation methodology has its own strengths and limitations, thereby restricting its use to a select range of goods and services associated with a coastal zone. The policy tool of BT can take the results of these studies to form the bedrock of practical environmental analysis. Primary valuation research, while being a 'first best' strategy, is also very expensive and time consuming. Thus, secondary analysis of the valuation literature is a 'second best' strategy that can yield very important information in many scientific and management contexts (Brouwer, 2000; Ledoux & Turner, 2002; Rosenberger & Loomis, 2001). When analysed carefully, information from past studies published in the literature can form a meaningful basis for use in environmental policy. Transfer errors and the applicability of transferring certain values are of the greatest concern in the transfer valuation literature, as these issues are the most important for providing confidence in the final valuation of the policy site (Colombo & Hanley, 2008). BT is a maturing area and, with more studies and more understanding of the valuation of ecosystems, more confidence will be attained in the methodology. Another important issue in BT, as mentioned above, is defining the extent of the market at the policy site.

In general, a number of steps are followed when estimating the total non-market value of ecosystem services using BT. These include selecting the ecosystem services to be valued; defining, using GIS, the geographical area of the site; defining the land and/or marine cover typologies; searching and analysing the valuation literature; simulating and transferring value estimates and estimating ecosystem values on a per-hectare basis; and calculating the total non-market value of ecosystem services at the policy site. For an in-depth discussion of the BT methodology the interested reader should consult Norton et al. (2012).
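The final aggregation steps listed above reduce to a sum of transferred per-hectare values over GIS-derived typology areas. A sketch with entirely hypothetical values and areas:

```python
# Unit values transferred from study sites (EUR per hectare per year);
# all values here are invented for illustration.
value_per_ha = {"saltmarsh": 1200.0, "seagrass": 900.0, "sandy beach": 300.0}

# Area of each land/marine cover typology at the policy site
# (hectares, e.g. measured from GIS layers).
area_ha = {"saltmarsh": 150, "seagrass": 420, "sandy beach": 60}

# Total non-market value of ecosystem services at the policy site.
total = sum(value_per_ha[t] * area_ha[t] for t in area_ha)
print(total)
```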
There are a number of methods of transferring values between sites. The simplest, and most commonly used, is to take the unadjusted WTP estimates from one or more study sites and apply their average value to the policy site. This method is referred to as 'unit value transfer'. An extension of the unit value transfer method is where the WTP values are adjusted for one or more factors (e.g. differences in income between study and policy sites, or differences in price levels over time or between sites) before the values are transferred between the sites. The next step up in the complexity of BT (and the more popular one), where microsimulation techniques start to be applied, is the 'function transfer' method. This may involve taking the original WTP function from the study site and using input values from the policy site to generate a distribution, as well as the mean WTP, for the site. Meta-analysis is a more complex form of value function transfer which uses a value function estimated from multiple study results, together with information on parameter values for the policy site, to simulate policy site values (Wilson & Liu, 2008). In a typical example of the method,
Ghermandi and Nunes (2013) compiled a global database of primary valuation studies that focus on the recreational benefits of coastal ecosystems. The profile of each of the individual observations was then enlarged with characteristics of the built and natural coastal environment at each of the study sites, geo-climatic factors and the socio-political context. The authors then employed a meta-analytical framework built upon GIS. By doing so, they could explore the role of spatial heterogeneity in the selected meta-regression variables as well as the spatial profile of the transferred values.

The use of spatial microsimulation techniques for BT is another form of value function transfer, suggested by Hynes, Hanley, and O'Donoghue (2007) and Hynes et al. (2010). In an example of this type of transfer, Hynes et al. (2010) used the SMILE model framework to statistically match population census data to a contingent valuation survey. The matched survey and census information were then used to produce regional and national total WTP figures. These figures were then compared to figures derived using more standard approaches to calculating aggregate environmental values. The authors found that the choice of aggregation approach had a major impact upon estimates of total environmental value at a regional level, especially when the target population displays considerable heterogeneity across space.
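The transfer methods described above (unit value, adjusted unit value and function transfer) can be contrasted in a short sketch. Every number, including the income elasticity and the WTP-function coefficients, is hypothetical:

```python
# Mean WTP observed at the study site and mean household incomes at the
# study and policy sites (all figures invented for illustration).
wtp_study = 25.0
income_study, income_policy = 30000.0, 24000.0

# 1. Unit value transfer: apply the study-site mean unchanged.
unit = wtp_study

# 2. Adjusted unit transfer: scale by relative income, here with an
# assumed income elasticity of WTP of 0.7.
elasticity = 0.7
adjusted = wtp_study * (income_policy / income_study) ** elasticity

# 3. Function transfer: re-evaluate the study site's (hypothetical) WTP
# function at the policy site's characteristics.
def wtp_fn(income, distance_km):
    # Coefficients are illustrative, not estimated from any cited study.
    return 5.0 + 0.0008 * income - 0.2 * distance_km

function = wtp_fn(income_policy, distance_km=12.0)

print(round(unit, 2), round(adjusted, 2), round(function, 2))
```

The three methods give different policy-site values from the same study-site evidence, which is why transfer error is the central concern noted above.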
15.5. Summary and future directions

This chapter has explored the use of microsimulation models for environmental analysis. It has demonstrated the wide use of microsimulation within the environmental and natural resource economics literature, particularly with regard to modelling the distributive impact of environmental taxes, modelling environmental pollution itself, the use of spatial microsimulation models for modelling socio-economic-environmental interactions and policies, and the quantification of the non-market value of environmental and natural resources via the estimation of valuation models. Applications of alternative microsimulation approaches were presented for CC analysis, land use, transport, pollution abatement and the valuation of the benefits derived from environmental ecosystem services.

While a large body of research has been carried out on the use of microsimulation models for environmental analysis, there are a number of avenues to be explored in terms of future work. With the explosion in the availability of GIS-based environmental data, the opportunities to incorporate environmental indicators and variables in spatial simulation models have greatly expanded. The use of such data should also allow researchers to predict more accurately the outcomes of policy choices. A good example of the use of this type of data is Morinière, Taylor, and Hamza (2009) and what the authors refer to as the Global Footprint
project. The project aims to compile the best-available evidence through space and time to explore patterns of human migration of the recent and less recent past. Once a full data set is compiled and major geo-temporal trends are identified, the authors intend to carry out a GIS-based microsimulation exercise to explore the importance of various drivers acting on migration patterns. Marine environmental GIS data are also now becoming more available, and this should allow researchers to simulate anthropogenic disturbances to the marine environment, for example the displacement of commercial fishing effort following the designation of marine protected areas.

Another area deserving more attention is the simulation of adaptation to CC. Much of the previous research on CC has concentrated on developing baseline trajectories of emissions and on mitigation scenario analyses. Much less effort has gone into the representation of climate impacts and adaptation responses. Given their implications for distributional effects, differences in adaptive capacity, or in the ability of regions to adapt to CC, should be important elements to capture in model analyses but are typically not represented in existing models. Another area in CC analysis where empirical work to inform models is lacking, according to Fisher-Vanden, Sue Wing, Lanzi, and Popp (2013), is the dynamics of recovery from CC impacts. The authors note that most models represent climate damages as a reduction in economic output which is assumed to recover over time, but further empirical work on thresholds and timing could help inform models on the type of dynamics that should be captured in impact and adaptation analyses.
In terms of the use of simulation approaches in environmental valuation, non-market valuation surveys are often region-specific, and spatial microsimulation BT techniques could be used to improve the aggregation of estimates from this local level to the national level or to another region. This would involve taking WTP data collected in one region and matching it to census data, based perhaps on variables such as age, sex or ethnicity, to generate a simulated national data set which includes simulated values for WTP. A combinatorial optimisation algorithm, such as that used by Hynes et al. (2010), would then reweight the regional sample to produce a nationally representative population and a nationally representative estimate of aggregate environmental benefit. While spatial microsimulation BT techniques have been used to go from national-level WTP surveys to the analysis of environmental values at a regional level, to date no study has used the technique to go in the opposite direction. The requirements for a regional-seas-level analysis and a water-management-unit-level analysis of environmental benefit values, under the Marine Strategy Framework Directive and the Water Framework Directive respectively, also suggest a future role for the application of spatial microsimulation BT techniques.
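A minimal sketch of such a combinatorial optimisation step: repeatedly swap survey records in and out of a synthetic population until its totals match known census margins, then aggregate WTP over the fitted population. The survey records, census margins and WTP values are all invented for illustration; real applications such as Hynes et al. (2010) use far richer constraint sets.

```python
import random

random.seed(1)

# Hypothetical survey respondents: (has_children, is_rural, stated WTP).
survey = [(1, 0, 30.0), (0, 0, 12.0), (1, 1, 45.0), (0, 1, 20.0)]
target = {"children": 620, "rural": 410}   # census margins for N households
N = 1000

# Start from a random synthetic population of survey-record indices.
pop = [random.randrange(len(survey)) for _ in range(N)]
kids = sum(survey[i][0] for i in pop)
rural = sum(survey[i][1] for i in pop)

def err(k, r):
    """Total absolute deviation from the census margins."""
    return abs(k - target["children"]) + abs(r - target["rural"])

# Hill climbing: propose replacing one record; keep swaps that do not worsen fit.
for _ in range(20000):
    j = random.randrange(N)
    cand = random.randrange(len(survey))
    k2 = kids - survey[pop[j]][0] + survey[cand][0]
    r2 = rural - survey[pop[j]][1] + survey[cand][1]
    if err(k2, r2) <= err(kids, rural):
        pop[j], kids, rural = cand, k2, r2

# Aggregate WTP over the census-consistent synthetic population.
aggregate_wtp = sum(survey[i][2] for i in pop)
print(err(kids, rural), round(aggregate_wtp))
```

Because swaps are only accepted when they do not worsen the fit, the margin error is non-increasing and, with enough proposals, the population converges to the census constraints while the WTP heterogeneity of the survey is preserved.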
Finally, while most of the literature has focused on individual preferences or individual impacts, the design and implementation of environmental policy result from public preferences. In the same way as one can use a microsimulation model to scale up non-market values for environmental public goods, one could model the political economy process by which environmental regulations and priorities are set, along the lines that Fourati and O'Donoghue (2009) undertook for pensions policy. This may help to inform how environmental policy priority setting responds to socio-economic change.
References

Alberini, A., & Scarpa, R. (2005). Introduction. In R. Scarpa & A. Alberini (Eds.), Applications of simulation methods in environmental and resource economics (pp. 29–36). Dordrecht, the Netherlands: Springer.
Alfredsson, E. C. (2004). Green consumption – No solution for climate change. Energy, 29(4), 513–524.
Bach, S., Kohlhaas, M., Meyer, B., Praetorius, B., & Welsch, H. (2002). The effects of environmental fiscal reform in Germany: A simulation study. Energy Policy, 30(9), 803–811.
Ballas, D., & Clarke, G. P. (2001). Modelling the local impacts of national social policies: A spatial microsimulation approach. Environment and Planning C, 19(4), 587–606.
Barrios, S., Pycroft, J., & Saveyn, B. (2013). The marginal cost of public funds in the EU: The case of labour versus green taxes. Directorate General Taxation and Customs Union, European Commission Taxation Paper No. 35.
Barton, D. C., Eidson, E. D., Schoenwald, D. A., Stamber, K. L., & Reinert, R. K. (2000). Aspen-EE: An agent-based model of infrastructure interdependency. Albuquerque, NM: Sandia National Laboratories.
Bateman, I., Day, B., Georgiou, S., & Lake, I. (2006). The aggregation of environmental benefit values: Welfare measures, distance decay and total WTP. Ecological Economics, 60(2), 450–460.
Bateman, I. J., Cole, M., Cooper, P., Georgiou, S., Hadley, D., & Poe, G. L. (2004). On visible choice sets and scope sensitivity. Journal of Environmental Economics and Management, 47(1), 71–93.
Bateman, I. J., Jones, A. P., Lovett, A. A., Lake, I. R., & Day, B. H. (2002). Applying geographical information systems (GIS) to environmental and resource economics. Environmental and Resource Economics, 22(1–2), 219–269.
Benenson, I. (2007). Warning! The scale of land-use CA is changing! Computers, Environment and Urban Systems, 31(2), 107–113.
Bennett, V. J., Beard, M., Zollner, P. A., Fernandez-Juricic, E., Westphal, L., & LeBlanc, C. L. (2009). Understanding wildlife responses to human disturbance through simulation modelling: A management tool. Ecological Complexity, 6, 113–134.
Berntsen, J., Petersen, B., Jacobsen, B., Olesen, J., & Hutchings, N. (2003). Evaluating nitrogen taxation scenarios using the dynamic whole farm simulation model FASSET. Agricultural Systems, 76(3), 817–839.
Birol, E., Karousakis, K., & Koundouri, P. (2006). Using a choice experiment to account for preference heterogeneity in wetland attributes: The case of Cheimaditida Wetland in Greece. Ecological Economics, 60, 145–156.
Boccanfuso, D., Estache, A., & Savard, L. (2011). The intra-country distributional impact of policies to fight climate change: A survey. The Journal of Development Studies, 47(1), 97–117.
Boccanfuso, D., Savard, L., & Estache, A. (2013). The distributional impact of developed countries' climate change policies on Senegal: A macro-micro CGE application. Sustainability, 5(6), 2727–2750.
Bork, C. (2006). Distributional effects of the ecological tax reform in Germany: An evaluation with a microsimulation model. In Y. Serret & N. Johnstone (Eds.), The distributional effect of environmental policy (pp. 139–170). Cheltenham, UK: Edward Elgar/OECD.
Breisinger, C., Ecker, O., Al-Riffai, P., Robertson, R., Thiele, R., & Wiebelt, M. (2011). Climate change, agricultural production and food security: Evidence from Yemen. Kiel Working Paper No. 1747.
Brouwer, R. (2000). Environmental value transfer: State of the art and future prospects. Ecological Economics, 32, 137–152.
Brouwers, L. (2005). Microsimulation models for disaster policy making. Doctoral dissertation, Stockholm.
Brouwers, L., & Linnerooth-Bayer, J. (2003). Spatial and dynamic modelling of flood management policies in the Upper Tisza. Interim Report IR-03-02. Int. Inst. for Applied Systems Analysis (IIASA), Laxenburg, Austria.
Buddelmeyer, H., Hérault, N., Kalb, G., & van Zijll de Jong, M. (2012). Linking a microsimulation model to a dynamic CGE model: Climate change mitigation policies and income distribution in Australia. International Journal of Microsimulation, 5(2), 40–58.
Bureau, B. (2010). Distributional effects of a carbon tax on car fuels in France. Energy Economics, 33(1), 121–130.
Bussolo, M., de Hoyos, R., Medvedev, D., & van der Mensbrugghe, D. (2008). Global climate change and its distributional impacts. Washington, DC: World Bank.
Callan, T., Lyons, S., Scott, S., Tol, R. S. J., & Verde, S. (2009). The distributional implications of a carbon tax in Ireland. Energy Policy, 37(2), 407–412.
Casler, S. D., & Rafiqui, A. (1993). Evaluating fuel tax equity: Direct and indirect distributional effects. National Tax Journal, 50, 197–205.
Cervigni, R., Dvorak, I., & Rogers, J. A. (Eds.). (2013). Assessing low-carbon development in Nigeria: An analysis of four sectors. World Bank Publications. Retrieved from http://dx.doi.org/10.1596%2F9780-8213-9973-6
Chakraborty, A., Bhattacharya, D., & Li, B. (2006). Spatiotemporal dynamics of methane emission from rice fields at global scale. Ecological Complexity, 3, 231–240.
Chin, S. F., Harding, A., Lloyd, R., McNamara, J., Phillips, B., & Vu, Q. N. (2005). Spatial microsimulation using synthetic small-area estimates of income, tax and social security benefits. Australasian Journal of Regional Studies, 11(3), 303–336.
Chingcuanco, F., & Miller, E. J. (2012). A microsimulation model of urban energy use: Modelling residential space heating demand in ILUTE. Computers, Environment and Urban Systems, 36(2), 186–194.
Christie, M., Hanley, N., & Hynes, S. (2007). Valuing enhancements to forest recreation using choice experiments and contingent behaviour methods. Journal of Forest Economics, 13(2), 75–102.
Clancy, D., Breen, J., Butler, A. M., Morrissey, K., O'Donoghue, C., & Thorne, F. (2012). The location economics of biomass production for electricity generation. In C. O'Donoghue, S. Hynes, K. Morrissey, D. Ballas, & G. Clarke (Eds.), Spatial microsimulation for rural policy analysis (pp. 159–176). Berlin Heidelberg: Springer.
Clarke, G. P., Kashti, A., McDonald, A., & Williamson, P. (1997). Estimating small area demand for water: A new methodology. Water and Environment Journal, 11(3), 186–192.
Colombo, S., & Hanley, N. (2008). How can we reduce the errors from benefits transfer? An investigation using the choice experiment method. Land Economics, 84(1), 128–147.
Cornwell, A., & Creedy, J. (1996). Carbon taxation, prices and inequality in Australia. Fiscal Studies, 17, 21–38.
Cramton, P., & Kerr, S. (1999).
The distributional effects of carbon regulation: Why auctioned carbon permits are attractive and feasible. In T. Sterner (Ed.), The market and the environment. Cheltenham, United Kingdom: Edward Elgar.
Cullinan, J. (2011). A spatial microsimulation approach to estimating the total number and economic value of site visits in travel cost modelling. Environmental and Resource Economics, 50(1), 27–47.
Cullinan, J., Hynes, S., & O'Donoghue, C. (2008). Estimating catchment area population indicators using network analysis: An application to two small-scale forests in County Galway. Irish Geography, 41(3), 279–294.
de Haan, P., Mueller, M. G., & Scholz, R. W. (2009). How much do incentives affect car purchase? Agent-based microsimulation of
consumer choice of new cars – Part II: Forecasting effects of feebates based on energy-efficiency. Energy Policy, 37(3), 1083–1094.
Dieckhoener, C., & Hecking, H. (2012). Greenhouse gas abatement cost curves of the residential heating market – A microeconomic approach. EWI Working Paper No. 12/16.
Dijk, J., Leneman, H., & van der Veen, M. (1996). The nutrient flow model for Dutch agriculture: A tool for environmental policy evaluation. Journal of Environmental Management, 46(1), 43–55.
Doole, G. J. (2012). Cost-effective policies for improving water quality by reducing nitrate emissions from diverse dairy farms: An abatement cost perspective. Agricultural Water Management, 104, 10–20.
Doole, G. J., Marsh, D., & Ramilan, T. (2013). Evaluation of agri-environmental policies for reducing nitrate pollution from New Zealand dairy farms accounting for firm heterogeneity. Land Use Policy, 30(1), 57–66.
Felsenstein, D., Axhausen, K., & Waddell, P. (2010). Land use – transportation modelling with UrbanSim: Experiences and progress. The Journal of Transport and Land Use, 3(2), 13.
Fisher-Vanden, K., Sue Wing, I., Lanzi, E., & Popp, D. C. (2013). Modelling climate change feedbacks and adaptation responses: Recent approaches and shortcomings. Climatic Change, 117, 481–495.
Fourati, Y. A., & O'Donoghue, C. (2009). Eliciting individual preferences for pension reform. School of Economics Working Paper No. 0150, National University of Ireland, Galway.
Freeman, A. M. (2003). The measurement of environmental and resource values: Theory and methods (2nd ed.). Washington, DC: Resources for the Future.
Gay, P. W., & Proops, J. L. (1993). Carbon-dioxide production by the UK economy: An input-output assessment. Applied Energy, 44(2), 113–130.
Ghermandi, A., & Nunes, P. A. (2013). A global map of coastal recreation values: Results from a spatially explicit meta-analysis. Ecological Economics, 86, 1–15.
Gomes, G., May, A., & Horowitz, R. (2004). Congested freeway microsimulation model using VISSIM.
Transportation Research Record: Journal of the Transportation Research Board, 1876(1), 71–81.
Haab, T. C., & McConnell, K. E. (2002). Valuing environmental and natural resources: The econometrics of non-market valuation. Edward Elgar.
Hamilton, K., & Cameron, G. (1994). Simulating the distributional effects of a Canadian carbon tax. Canadian Public Policy/Analyse de Politiques, 20(4), 385–399.
Hanemann, W. M. (1992). Preface (notes on the history of environmental valuation in the USA). In S. Navrud (Ed.), Pricing the environment. Oslo: Scandinavian University Press.
Hanley, N., & Barbier, E. (2009). Pricing nature: Cost-benefit analysis and environmental policy. Edward Elgar.
Hanley, N., Colombo, S., Tinch, D., Black, A., & Aftab, A. (2006). Estimating the benefits of water quality improvements under the Water Framework Directive: Are benefits transferable? European Review of Agricultural Economics, 33(3), 391–413.
Hanley, N., Schläpfer, F., & Spurgeon, J. (2003). Aggregating the benefits of environmental improvements: Distance-decay functions for use and non-use values. Journal of Environmental Management, 68, 297–304.
Harding, A., Warren, N. A., & Lloyd, R. (2006). Moving beyond traditional cash measures of economic well-being: Including indirect benefits and indirect taxes. Canberra: National Centre for Social and Economic Modelling.
Hooimeijer, P. (1996). A life-course approach to urban dynamics: State of the art in and research design for the Netherlands. In G. P. Clarke (Ed.), Microsimulation for urban and regional policy analysis (pp. 28–63). London: Pion.
Huang, Y., Bird, R., & Bell, M. (2009). A comparative study of the emissions by road maintenance works and the disrupted traffic using life cycle assessment and micro-simulation. Transportation Research Part D: Transport and Environment, 14(3), 197–204.
Hynes, S., Farrelly, N., Murphy, E., & O'Donoghue, C. (2008). Modelling habitat conservation and participation in agri-environmental schemes: A spatial microsimulation approach. Ecological Economics, 66(2), 258–269.
Hynes, S., Farrelly, N., Murphy, E., & O'Donoghue, C. (2013). Conservation and rural environmental protection schemes. In C. O'Donoghue, D. Ballas, G. Clarke, S. Hynes, & K. Morrissey (Eds.), Spatial microsimulation for rural policy analysis (pp. 123–141). Advances in Spatial Science. Berlin Heidelberg: Springer.
Hynes, S., & Garvey, E. (2009). Modelling farmers' participation in an agri-environmental scheme using panel data: An application to the Rural Environment Protection Scheme in Ireland.
Journal of Agricultural Economics, 60(3), 546–562.
Hynes, S., Hanley, N., & O'Donoghue, C. (2007, March 23). Using spatial microsimulation techniques in the aggregation of environmental benefit values: An application to Corncrake conservation on Irish farmland. Paper presented at Envecon 2007: Applied Environmental Economics conference, organised by the UK Network of Environmental Economists (UKNEE), London.
Hynes, S., Hanley, N., & O'Donoghue, C. (2009). Alternative treatments of the cost of time in recreational demand models: An application to white water rafting in Ireland. Journal of Environmental Management, 90(2), 1014–1021.
Hynes, S., Hanley, N., & O'Donoghue, C. (2010). A combinatorial optimization approach to non-market environmental benefit aggregation via simulated populations. Land Economics, 86(2), 345–362.
Hynes, S., Hanley, N., & Scarpa, R. (2008). Effects on welfare measures of alternative means of accounting for preference heterogeneity in recreational demand models. American Journal of Agricultural Economics, 90(4), 1011–1027.
Hynes, S., Morrissey, K., & O'Donoghue, C. (2013). Modelling greenhouse gas emissions from agriculture. In C. O'Donoghue, D. Ballas, G. Clarke, S. Hynes, & K. Morrissey (Eds.), Spatial microsimulation for rural policy analysis (pp. 143–157). Berlin Heidelberg: Springer.
Hynes, S., Morrissey, K., O'Donoghue, C., & Clarke, G. (2009). A spatial microsimulation analysis of methane emissions from Irish agriculture. Ecological Complexity, 6, 135–146.
Hynes, S., Norton, D., & Hanley, N. (2013). Accounting for cultural dimensions in international benefit transfer. Environmental and Resource Economics, 56, 499–519.
Immervoll, H., O'Donoghue, C., & Sutherland, H. (1999). An introduction to EUROMOD. Microsimulation Unit, Department of Applied Economics, University of Cambridge.
Jochem, P. (2009). Impacts of a carbon dioxide emissions trading scheme in German road transportation. Transportation Research Record: Journal of the Transportation Research Board, 2139(1), 153–160.
Johnston, R. J., & Rosenberger, R. S. (2010). Methods, trends and controversies in contemporary benefit transfer. Journal of Economic Surveys, 24(3), 479–510.
Kahneman, D., & Knetsch, J. (1992). Valuing public goods: The purchase of moral satisfaction. Journal of Environmental Economics and Management, 22, 57–70.
Kask, S., & Shogren, J. (1994). Benefit transfer protocol for long-term health risk valuation: A case of surface water contamination. Water Resources Research, 30, 2813–2823.
Kazimi, C. (1997a).
A microsimulation model for evaluating the environmental impact of alternative-fuel vehicles. Transportation Research Part A, 31(1), 56–57.
Kazimi, C. (1997b). Evaluating the environmental impact of alternative-fuel vehicles. Journal of Environmental Economics and Management, 33(2), 163–185.
Kelley, H., Rensburg, T. M. V., & Yadav, L. (2013). A micro-simulation evaluation of the effectiveness of an Irish grass roots agri-environmental scheme. Land Use Policy, 31, 182–195.
Kerkhof, A. C., Moll, H. C., Drissen, E., & Wilting, H. C. (2008). Taxation of multiple greenhouse gases and the effects on income distribution: A case study of the Netherlands. Ecological Economics, 67(2), 318–326.
Kimura, S., Anton, J., & Cattaneo, A. (2012, August 18–24). Effective risk management policy choices under climate change: An application to the Saskatchewan crop sector. 2012 conference, Foz do Iguacu, Brazil (No. 126736). International Association of Agricultural Economists.
Kruseman, G., Blokland, P. W., Bouma, F., Luesink, H., Mokveld, L., & Vrolijk, H. (2008). Micro-simulation as a tool to assess policy concerning non-point source pollution: The case of ammonia in Dutch agriculture. Presentation at the 107th EAAE seminar 'Modelling of agricultural and rural development policies' (Vol. 29). Sevilla. The Hague: LEI Wageningen UR.
Kruseman, G., Blokland, P. W., Luesink, H., & Vrolijk, H. (2008, August 26–29). Ex-ante evaluation of tightening environmental policy: The case of mineral use in Dutch agriculture. XII EAAE Congress, Ghent, Belgium.
Kuiper, M., & Ruben, R. (2006, August 12–18). Poverty targeting, resource degradation and heterogeneous endowments – A microsimulation analysis of a less favored Ethiopian village. Contributed paper prepared for presentation at the International Association of Agricultural Economists conference, Gold Coast, Australia.
Kyophilavong, P., & Takamatsu, S. (2011, May). Impact of climate change on poverty in Laos. Selected poster presented at the Agricultural and Applied Economics Association annual meeting, Pittsburgh, PA.
Labandeira, X., & Labeaga, J. (1999). Combining input-output analysis and microsimulation to assess the effects of carbon taxation on Spanish households. Fiscal Studies, 20(3), 305–320.
Labandeira, X., Labeaga, J. M., & Rodríguez, M. (2007). Microsimulation in the analysis of environmental tax reforms: An application for Spain. In Microsimulation as a tool for the evaluation of public policies: Methods and applications. Madrid: Fundación BBVA.
Lal, R., & Follett, R. F. (2009). A national assessment of soil carbon sequestration on cropland: A microsimulation modeling approach. In R. Lal & R. F.
Follett (Eds.), Soil carbon sequestration and the greenhouse effect (2nd ed., pp. 113). Soil Science Society of America Special Publication No. 57. Madison, WI: Soil Science Society of America. Leahy, E., Lyons, S., Morgenroth, E. L. W., & Tol, R. S. J. (2009). The spatial incidence of a carbon tax in Ireland. Working Paper No. FNU-174. Research unit Sustainability and Global Change, Hamburg University and Centre for Marine and Atmospheric Science, Hamburg. Ledoux, L., & Turner, R. K. (2002). Valuing ocean and coastal resources: A review of practical examples and issues for further action. Ocean & Coastal Management, 45(9), 583616.
Downloaded by University of Newcastle At 01:20 11 December 2016 (PT)
474
Stephen Hynes and Cathal O’Donoghue
Lee, C., & Miller, E. J. (2001, January). A microsimulation model of CO2 emissions from passenger cars Model framework and applications. Proceedings of the 80th Transportation Research Board annual meeting. Washington, DC. Leontief, W. W. (1951). Input-output economics. Scientific American, 185(4), 1521. Lindgren, U., & Elmquist, H. (2005). Environmental and economic impacts of decision-making at an arable farm: An integrative modelling approach. AMBIO: A Journal of the Human Environment, 34(4), 393401. Lloyd, C. (2014). Exploring spatial scale in geography. Chichester: Wiley. Loomis, J., & Walsh, R. (1997). Recreation economic decisions: Comparing benefits and costs (2nd ed.). State College, PA: Venture Publishing. Martinez, E. (2008). Modelling road tolling in microsimulation. Creative Sciences, 11, 3137. Mavoa, S. (2007). Estimating the social impact of reduced CO2 emissions from household travel using GIS modelling. In J. Morris & G. Rose (Eds.), Proceedings of the 30th Australasian Transport Research Forum (ATRF). Melbourne. McQuinn, K., & Binfield, J. (2002). Estimating the marginal cost to Irish agriculture of reductions in greenhouse gases. Rural Economy Research Centre Working Paper No. 1. Teagasc, Dublin. Minot, N., & Baulch, B. (2005). Poverty mapping with aggregate census data: What is the loss in precision? Review of Development Economics, 9(1), 525. Mitchell, G. (1999). Demand forecasting as a tool for sustainable water resource management. International Journal of Sustainable Development & World Ecology, 6(4), 231241. Moeckel, R., Spiekermann, K., Schu¨rmann, C., & Wegener, M. (2003). Microsimulation of land use. International Journal of Urban Sciences, 7(1), 1431. Morinie`re, L. C. E., Taylor, R., & Hamza, M. (2009). Global footprint mapping and micro-simulation: A tool for risk management. Retrieved from http://www.earthzine.org/2009/04/20/global-footprint-mappingmicro-simulation-tool-risk-management/. Accessed on January 18, 2014. 
Newbery, D., O’Donoghue, C., Pratten, C., & Santos, G. (2002). Sri Lanka fuel study. Report prepared for the world bank. Washington, DC: World Band. Norton, D., Hynes, S., Doherty, E., Buckley, C., Campbell, D., & Stithou, M. (2012). Using benefit transfer techniques to estimate the value of achieving ‘Good Ecological’ status in Irish water bodies. EPA STRIVE Report No. 94. Retrieved from http://www.epa.ie/pubs/ reports/research/water/strivereport94.html
Downloaded by University of Newcastle At 01:20 11 December 2016 (PT)
Environmental Models
475
Noth, M., Borning, A., & Waddell, P. (2003). An extensible, modular architecture for simulating urban development, transportation, and environmental impacts. Computers, Environment and Urban Systems, 27(2), 181203. Nunes, P. (2002). The contingent valuation of natural parks: Assessing the warmglow propensity factor. New Horizons in Environmental Economics Series. UK: Edward Elgar. O’Donoghue, C. (1997). Carbon Dioxide, Energy taxes and household income. Economic and Social Research Institute (ESRI) Working Paper, No. 90. Dublin. Oltra, V., & Jean, M. S. (2005). The dynamics of environmental innovations: Three stylised trajectories of clean technology. Economics of Innovation and New Technology, 14(3), 189212. Orcutt, G. H., Franklin, S. D., Mendelsohn, R., & Smith, J. D. (1977). Does your probability of death depend on your environment? A microanalytic study. The American Economic Review, 67(1), 260264. Pearce, D. (1991). The role of carbon taxes in adjusting to global warming. The Economic Journal, 101(407), 938948. Pearson, M., & Smith, S. (1991). The European carbon tax: An assessment of the European commission’s proposals. London: Institute for Fiscal Studies. Perez, D., Vautey, C., & Ka¨mpf, J. (2012). Urban energy flow microsimulation in a heating dominated continental climate. In Proceedings of SIMUL 2012, the fourth international conference on advances in system simulation, November 1823, Lisbon, Portugal (pp. 1823). Peters, I., Brassel, K. H., & Spo¨rri, C. (Eds.). (2002). A microsimulation model for assessing urine flows in urban wastewater management: Integrated assessment and decision support. In A. E. Rizzoli & A. J. Jakeman (Eds.), Proceedings of the first biennial meeting of the international environmental modelling and software society. Petersen, B., Gernaey, K., Henze, M., & Vanrolleghem, P. (2002). Evaluation of an ASM1 model calibration procedure on a municipal-industrial wastewater treatment plant. Journal of Hydroinformation, 4, 1538. 
Poltima¨e, H., & Vo˜rk, A. (2009). Distributional effects of environmental taxes in Estonia. Discussions on Estonian economic policy: Theory and practice of economic policy (Vol. 17, pp. 196211). Berlin: Berliner Wissenschaftsverlag GmbH. Portney, P. R. (1994). The contingent valuation debate: Why economists should care. The Journal of Economic Perspectives, 8, 318. Potter, S. R., Atwood, J. D., Lemunyon, J., & Kellogg, R. L. (2009). A national assessment of soil carbon sequestration on cropland: A microsimulation modeling approach. Soil carbon sequestration and the greenhouse effect. Soil Science Society of America Special Publication No. 57.
Downloaded by University of Newcastle At 01:20 11 December 2016 (PT)
476
Stephen Hynes and Cathal O’Donoghue
Ramilan, T., Scrimgeour, F., & Marsh, D. (2011). Analysis of environmental and economic efficiency using a farm population microsimulation model. Mathematics and Computers in Simulation, 81(7), 13441352. Ramilan, T., Scrimgeour, F. G., Levy, G., & Romera, A. J. (2007). Modelling economic impact of agri-environmental policy on dairy farms A catchment perspective. In MODSIM 2007 International Congress on Modelling and Simulation (pp. 10891095). Modelling and Simulation Society of Australia and New Zealand. Robinson, D., Haldi, F., Ka¨mpf, J., Leroux, P., Perez, D., Rasheed, A., & Wilke, U. (2009, July). CitySim: Comprehensive micro-simulation of resource flows for sustainable urban planning. In Proceedings of building simulation 2009, Eleventh International IBPSA Conference, Glasgow, Scotland. Rosenberger, R. S., & Loomis, J. B. (2001). Benefit transfer of outdoor recreation use values: A technical document supporting the Forest Service Strategic Plan (2000 revision). General Technical Report No. RMRS-GTR-72. Rocky Mountain Research Station, USDA Forest Service. Rozan, A. (2004). Benefit transfer: A comparison of WTP for air quality between France and Germany. Environmental and Resource Economics, 29, 295306. Sander, B., Nizam, A., Garrison, L. P., Postma, M. J., Halloran, M. E., & Longini, I. M. (2009). Economic evaluation of influenza pandemic mitigation strategies in the United States using a stochastic microsimulation transmission model. Value in Health, 12(2), 226233. Serret, Y., & Johnstone, N. (2006). Distributional effects of environmental policy. Paris: OECD. Scarpa, R., & Alberini, A. (2005). Applications of simulation methods in environmental and resource economics. Springer. Smith, V. E., & Strauss, J. (1986). Simulating the rural economy in a subsistence environment: Sierra Leone. In I. Singh, L. Squire, & J. Strauss (Eds.), Agricultural household models: Extensions, applications, and policy (pp. 206232). Johns Hopkins University Press. Smith, V. K. 
(1993). Nonmarket valuation of environmental resources: An interpretive appraisal. Land Economics, 69(1), 126. Stevanovic, A., Stevanovic, J., & Kergaye, C. (2012). Environmental benefits of adaptive traffic control system: assessment of fuel consumption and vehicular emissions. Transportation Research Board 91st annual meeting (No. 12-0749). Svoray, T., & Benenson, I. (2009). Scale and adequacy of environmental microsimulation. Ecological Complexity, 6(2), 7779. Symons, E., Proops, J., & Gay, P. (1994). Carbon taxes, consumer demand and carbon dioxide emissions: A simulation analysis for the UK. Fiscal Studies, 15(2), 1943.
Downloaded by University of Newcastle At 01:20 11 December 2016 (PT)
Environmental Models
477
Tirumalachetty, S., Kockelman, K., & Nichols, B. (2013). Forecasting greenhouse gas emissions from urban regions: Microsimulation of land use and transport patterns in Austin, Texas. Journal of Transport Geography, 33, 220229. Van Leeuwen, E., Dekkers, J., & Rietveld, P. (2008). The development of a static farm-level spatial microsimulation model to analyse on and off farm activities of Dutch farmers presenting the research framework. Paper presented to the 3rd IsraeliDutch regional science workshop, Jerusalem. Villot, X. L. (1998). The effects of a sulphur tax levied on the Spanish electricity industry. In V Encuentro de Economı´a Pu´blica (p. 39). La Realidad de la Solidaridad en la Financiacio´n Autono´mica. Waduda, Z., Noland, R. B., & Graham, D. J. (2008). Equity analysis of personal tradable carbon permits for the road transport sector. Environmental Science & Policy, 11(6), 533544. Williamson, P. (2001). An applied microsimulation model: Exploring alternative domestic water consumption scenarios. In Regional science in business (pp. 243268). Berlin Heidelberg: Springer. Williamson, P., Mitchell, G., & McDonald, A. T. (2002). Domestic water demand forecasting: A static microsimulation approach. Water and Environment Journal, 16(4), 243248. Wilson, M., & Liu, S. (2008). Evaluating the non-market value of ecosystem goods and services provided by coastal and nearshore marine systems. In M. Patterson & B. Glavovic (Eds.), Ecological economics of the oceans and coasts (pp. 119139). Northampton, MA: Edward Elgar. Wong, F., & Chandra, E. (2012). Using micro-simulation to understand traffic flow and congestion. In Central region engineering conference proceedings (pp. 14). Central Region Engineering Conference 2012 on Regional Engineering Excellence, Queensland, Australia. Yusuf, A. A. (2008). The distributional impact of environmental policy: The case of carbon tax and energy pricing reform in Indonesia. Research Report No. 2008-RR1. 
Environment and Economy Program for Southeast Asia, Singapore. Yusuf, A. A., & Resosudarmo, B. P. (2007). On the distributional effect of carbon tax in developing countries: The case of Indonesia. Working Papers in Economics and Development Studies (WoPEDS) 200705. Department of Economics, Padjadjaran University. Zhao, N., Liu, Y., & Chen, J. N. (2009). Micro-simulation of firms’ heterogeneity on pollution intensity and regional characteristics. Huan Jing Ke Xue, 30(11), 3190. Zhu, S., & Ferreira, L. (2012). Evaluation of vehicle emissions models for micro-simulation modelling: Using CO2 as a case study. Road and Transport Research, 21(3), 318.
CHAPTER 16
Firm Level Models$

Hermann Buslei, Stefan Bach and Martin Simmler
16.1. Introduction

From the very beginning of microsimulation modelling, the approach was regarded as useful for modelling a broad range of types of agents as 'elemental decision-making entities', among them individuals, households and firms (see Orcutt, 1957, p. 118, 1960, p. 898). However, the literature on firms is rather scarce compared to the large literature on household modelling.1 If one focuses on microsimulation models which are based on large data sets with individual firms as the unit of observation, as we do in this chapter, the range of models to be considered starts with 'classic' government models. These were, and still are, developed by government bodies or on their behalf to analyse fiscal policy, in particular to forecast tax revenues and to assess the revenue impact and distributional consequences of tax reforms (see for an overview Ahmed, 2006) or firm demographics (Wissen, 2000). A second type of model could be deemed 'advanced models for forecasting and policy analysis'. These tackle several shortcomings of the basic models, which are related to the information content of the database, the regional entities considered and the consideration of
$ Specifically Firm Models based upon large data sets.
1 This finding could in principle be caused by less interest in firm models compared to household models, higher costs of firm models and more severe restrictions in data availability for firms. We will consider the second and the third factor below.
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293015
© 2014 BY EMERALD GROUP PUBLISHING LIMITED ALL RIGHTS RESERVED
behavioural responses of firms to changes in their environment, especially tax changes. Somewhat different uses of microsimulation modelling are presumably found solely in academic work. We will consider simulated marginal tax rates as introduced by Graham (1996), and the use of microsimulation models to obtain instruments for endogenous variables in econometric models. Focusing on these models, we obviously have to leave out some related work. Firstly, we will not consider simulation models for entrepreneurs, which could in principle also be based on household data (see for an example Fossen, 2009). Due to the strict requirement of (large) microdata, we also do not treat related simulation models with 'artificial data' for certain firm types (e.g. the European Tax Analyzer, see Spengel, 1995) or applied general equilibrium models with a few types of corporations (see e.g. de Mooij & Devereux, 2011). Furthermore, we will not treat simulation models for specific markets, which were developed in the field of Industrial Organization. They capture in depth the dynamic interactions between firms in the single firm's decision setting (dynamic stochastic games). In a typical oligopolistic setting, not only investment, but also entry, exit and pricing are explicitly modelled (see for an overview Doraszelski & Pakes, 2007). Contrary to the other models considered in this chapter, they focus on single markets, and the practical application of these models still seems to be in its infancy (see Doraszelski & Pakes, 2007, p. 1961).
16.2. Context

Besides private households, firms are the second central unit where economic decisions are made. These decisions have many dimensions, most notably decisions on location, factor demand (including investment in fixed assets), the production process, the supply of goods at certain prices and on different markets, portfolio investment, and the financing of corporations. Where firms are run as sole proprietorships, the decision-making unit coincides with part of the private household sector, and household surveys often provide information on this type of firm. For larger firms which are run as partnerships or corporations, the firm itself makes economic decisions, based on the framework prescribed by the owners. For this type of firm, the primary observation units in microdata are the firms themselves and not individuals as partners or shareholders.2 Ideally, models for these firms would capture the firm's behaviour, the behaviour of the shareholders and the relation between the two (a principal-agent relation).
2 Often, information on partners and shareholders is not provided at all.
In principle, all stakeholders of firms (among them the government, unions or single employees, other firms in an industry as competitors in factor or output markets, and creditors) might be interested in the results of microsimulation models of firms, as these may provide information on firms' or industries' future development. Moreover, there is also a purely academic interest in microsimulation, as only this type of model is able to assess ex ante the full distributional effects of policy measures for heterogeneous populations. Academic models, for example, treat micro-macro economic relations (van Tongeren, 1995) or assess the impact of (potential) tax reforms (Bach, Buslei, Dwenger, & Fossen, 2008; Finke, Heckemeyer, Reister, & Spengel, 2013; Heckemeyer, 2012; Reister, 2009). The government seems to be the stakeholder with the greatest interest in the modelling of firms and in the investigation of specific questions of firm behaviour on the basis of microsimulation models. This is at least indicated by the number of published descriptions of microsimulation models run by governments or on their behalf. However, it is plausible that large firms or associations (e.g. unions) make some use of microsimulation models (in the case of large firms, for example, for the firms in their sector) without disclosing this work. What is most likely an important reason for the dominance of governments in this field is their privileged access to microdata. Another reason for the dominant role of the state is the high cost of setting up microsimulation models. Finally, the state is regularly interested in analyses of the complete firm population, because all firms are subject to taxes and are affected by other policy regulation, for example anti-trust policy. By contrast, for other stakeholders an analysis of a small part of the firm population, such as the competitors in a product oligopoly, might be sufficient.
Microsimulation models provide a description of firm behaviour and the firms' environment, taking into account differences between firms. While aggregated or semi-aggregated models are easier to build and maintain, they are not able to address possibly important differences between firms or in the legal rules applying to them. Distributional analyses of policy measures thus require a microsimulation approach. However, microsimulation models may be advantageous even if one is ultimately interested in aggregate figures such as a forecast of total tax revenue. This is the case if there are important non-linearities in corporation tax rules, stemming for example from tax-exempt amounts, and firms differ substantially, for example in size and in the development of income. Policy changes, for example a preferential treatment of R&D expenses, lead to a change in firms' taxable income in the first place. The exact impact on taxes due is, however, determined in a second stage, depending on non-linearities in the 'core' tax function. Microsimulation models are able to capture both steps. An alternative assessment based on 'average tax rates' is prone to missing important effects of the policy on each firm.
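The role of the non-linearities can be made concrete with a minimal sketch. The exempt amount, tax rate and firm incomes below are purely hypothetical (not any actual tax code); the point is only that, once the schedule has a kink, applying it firm by firm gives a different aggregate than applying it to 'average' income:

```python
def tax_due(taxable_income, exempt_amount=10_000, rate=0.30):
    # Stylised non-linear schedule: a tax-exempt amount and no refund for losses
    return max(taxable_income - exempt_amount, 0.0) * rate

# Hypothetical taxable incomes: one firm below the exemption, one loss-maker,
# one large profitable firm
incomes = [8_000, -50_000, 500_000]

# Microsimulation: apply the schedule firm by firm, then aggregate
micro_total = sum(tax_due(y) for y in incomes)

# 'Average' shortcut: apply the schedule to mean income and scale up
avg_total = tax_due(sum(incomes) / len(incomes)) * len(incomes)

print(micro_total, avg_total)  # the two aggregates differ markedly
```

Here only the large firm actually pays tax, so the firm-by-firm aggregate exceeds the 'average' calculation, which wrongly spreads the exemption and the loss offset across all firms.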
Microsimulation models are also necessary if governments aim to treat different firms differently, for example to exclude small firms from a certain tax or to grant research subsidies to SMEs only. Microsimulation can be helpful to analyse the direct fiscal implications of such policies. Moreover, microsimulation models are able to capture the fact that different types of firms might react differently to changes in their environment, for instance to tax rate changes (see van Tongeren, 1995, p. 1). Different reactions might further be expected for firms with different financing structures or, at least in the short run, for firms with large loss carry-forwards. Another dimension in which firms may differ is their ability to shift taxable profits over time. This is especially important for forecasting tax revenues (see Shahnazarian, 2004, p. 1). For this application, it is also important that microsimulation models are, at least in principle, able to consider differences in profit volatility between firm types. These examples illustrate that microsimulation models are particularly advantageous if an applied general equilibrium model with a few types of firms is not able to capture the full range of relevant differences between firms. Even in the absence of non-linearities in the tax function, this might imply that microsimulation models promise more accurate estimates of revenue changes caused by changes in the tax function. On the other hand, for certain important policy changes it might be adequate to pursue both methods. For example, the introduction of the Common Consolidated Corporate Tax Base (CCCTB) was studied in several microsimulation models (see below) as well as in applied general equilibrium models (Bettendorf, van der Horst, de Mooij, Devereux, & Loretz, 2009). The latter approach had in particular the advantage of giving insights into the expected long-run growth effects, a question which microsimulation models so far cannot answer.
The microsimulation of firms and the microsimulation of households are usually done in separate frameworks, although one might argue that firms are ultimately owned by households and should thus be included in household models. However, at least larger firms are typically organized as a partnership or a corporation and may have a large number of partners or shareholders. These firms form separate legal entities, and the managers of the firms have a certain scope for action that is independent of the shareholders. It thus seems natural to model firms separately, at least if a unified model including both households and corporations is not regarded as feasible. One practical reason for the independent modelling of households and firms is that, in many countries, the income tax code for corporations differs from the income tax code for households or proprietorships. Moreover, there are decision dimensions for firms which differ from those for single persons. For example, profit shifting to low-tax countries via transfer pricing is usually not an option for single persons, but it is for multinational companies.
16.3. Methodological characteristics and choices

What is common to all models addressed in this chapter is that they consider single firms as the basic unit of analysis. The (starting) information on each firm stems from a microdata set. Most of the models covered here are based on a (large) microdata set, which implies that there is considerable heterogeneity among the entities covered. This distinguishes microsimulation models from alternative modelling approaches such as applied general equilibrium models and models for 'average firms' in subgroups of the whole firm population.3 All models described in this chapter that are used for policy analysis replicate, in a first step, the current state of certain variables which the model should be able to 'explain'. An example of such a replication is the calculation of the actual tax liability of a firm (as contained in the data), starting with the taxable income of the firm or even with the basic determinants of taxable income, for example ordinary income before depreciation and depreciation allowances. Given that the model is able to replicate the current state (at a fixed level of precision), simulations are usually run to assess how changes in the environment affect 'endogenous' variables. Some of these simulations are done to check the sensitivity of the model. However, the central simulations are 'policy simulations', where the change in the environment is a policy reform, for instance a reduction in the corporate tax rate. The values of the endogenous variables in the policy simulation are compared for each firm to the respective values in the baseline simulation. The differences are often aggregated to obtain summary measures such as the change in tax revenue, or can be used to calculate changes in indicators of the distribution of variables of interest.
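The replicate-then-simulate workflow just described can be sketched in a few lines. The firm records, the flat 'core' tax function and the rate cut below are all invented for illustration; real models replicate far richer tax code detail from tax return or accounting microdata:

```python
# Hypothetical firm records (a real model would start from microdata)
firms = [
    {"id": 1, "taxable_income": 120_000},
    {"id": 2, "taxable_income": 40_000},
    {"id": 3, "taxable_income": -15_000},  # loss-maker: no tax due
]

def liability(firm, rate):
    # 'Core' tax function, here just a flat rate on positive income
    return max(firm["taxable_income"], 0) * rate

# Step 1: replicate the current state (baseline under current law)
baseline = {f["id"]: liability(f, rate=0.30) for f in firms}

# Step 2: policy simulation (a hypothetical rate cut to 25%)
reform = {f["id"]: liability(f, rate=0.25) for f in firms}

# Step 3: firm-level differences, aggregated to a revenue effect
diffs = {fid: reform[fid] - baseline[fid] for fid in baseline}
revenue_change = sum(diffs.values())
print(revenue_change)  # -8000.0
```

The firm-level dictionary `diffs` is what distributional analyses work with; `revenue_change` is the kind of aggregate a revenue forecast reports.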
An important caveat to the simulation results is that relations between firms (corporations) and households are not modelled, at least not on a micro (individual) basis. For a complete picture of the distributional impact of policy changes (e.g. a change in the corporate tax rate), these relations would have to be modelled, because different households can be affected differently by such a measure, for example through changes in payout policy. Although modelling the relation between firms and households would thus certainly be desirable, it does not seem feasible at present.
16.3.1. What choices are available?

All the simulation models covered in this chapter describe certain characteristics of firms in a microdata set and the changes of these
3 An example for this type of model is the 'European Tax Analyzer' developed by the University of Mannheim and ZEW (see Spengel, 1995).
characteristics due to exogenous variations in the firms' environment. To distinguish further between different models, the following modelling choices are useful: (1) static vs. dynamic modelling (short run vs. long run), (2) without vs. with behavioural adjustment, (3) narrow vs. broad set of decision types, (4) closed vs. open (national vs. international) and (5) with vs. without modelling of macro repercussions.4 A last choice concerns the database, as different types of data sets with specific advantages and disadvantages might be available. We will come to this topic later in the chapter.
16.3.1.1. Static versus dynamic modelling

An important fundamental choice is whether the model is restricted to a single period or whether it considers the development of firms over time. Most of the basic government models for reform evaluation and revenue forecasting described below, perhaps the most important class of models with respect to their impact on public policy, are static or assume static aging. Static aging means that the weights of the firms observed in a certain period (sampling weights in the case of a sample) are adjusted for future periods according to expected changes in the composition of firms (with respect to several dimensions such as size or industry) in those periods. The expected changes are exogenous to the model, taken from suitable projections. Dynamic models, on the contrary, do not merely re-weight firms but explicitly model the changes in the characteristics of (the same) firms over time without (necessarily) changing their weights. Dynamic adjustments are usually modelled using transition probabilities between states in two subsequent periods in the case of discrete choice variables, and a relation between the current and lagged dependent variable (autoregressive part) and other explanatory variables. The resulting dynamics can further be adjusted to exogenously provided aggregate information5 (without vs. with alignment). In principle, the entry and exit of firms over time should be included in dynamic models. However, we are not aware of such models, with the exception of the models for specific markets in the Industrial Organization literature mentioned in the introduction.
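Static aging as just described can be sketched in a few lines. The weights and the projected industry shares below are hypothetical, and the sketch rescales along a single dimension only; real models re-weight along several dimensions at once, typically with calibration techniques:

```python
# Hypothetical sample of firms with sampling weights
firms = [
    {"industry": "manufacturing", "weight": 10.0},
    {"industry": "manufacturing", "weight": 10.0},
    {"industry": "services",      "weight": 20.0},
]

# Exogenous projection of the industry composition in a future year
projected_shares = {"manufacturing": 0.40, "services": 0.60}

total = sum(f["weight"] for f in firms)
current = {}
for f in firms:
    current[f["industry"]] = current.get(f["industry"], 0.0) + f["weight"]

# Rescale each firm's weight so the weighted shares match the projection
for f in firms:
    f["weight"] *= projected_shares[f["industry"]] * total / current[f["industry"]]

# Total weight is preserved; the industry mix now matches the projection
```

The firms' individual characteristics stay fixed; only the weights change, which is exactly the contrast with dynamic models drawn in the text.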
4 A basic feature of microsimulation models is interactions between the agents modelled (see Orcutt, 1957, p. 119). However, as most firm models (see for an exception van Tongeren, 1995) consider the case without interactions, we will not discuss this topic.

5 If, for example, the share of firms with a positive investment according to the simulation is higher (say 60%) than the expected share in, for example, macroeconomic forecast data (say 50%), the model can be 'aligned' in the sense that the firms in the model are ranked according to their probability of a positive investment, and a positive investment is assumed only for firms whose probability lies above the median.
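The alignment procedure sketched in footnote 5 amounts to a rank-and-cut rule. A minimal illustration with invented probabilities and an invented target share:

```python
# Simulated probabilities of a positive investment for six hypothetical firms
probabilities = {"A": 0.9, "B": 0.7, "C": 0.6, "D": 0.4, "E": 0.2, "F": 0.1}

# Exogenous target share, e.g. from a macroeconomic forecast
target_share = 0.5

# Rank firms by probability and assign a positive investment only to the
# top fraction implied by the target share
ranked = sorted(probabilities, key=probabilities.get, reverse=True)
n_positive = round(target_share * len(ranked))
invests = {firm: firm in ranked[:n_positive] for firm in probabilities}

print(invests)  # {'A': True, 'B': True, 'C': True, 'D': False, 'E': False, 'F': False}
```

The aligned outcome hits the exogenous aggregate share exactly while preserving the model's ordering of firms.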
Dynamic models are closely related to the modelling choice on behavioural adjustments. In principle, behavioural adjustments can be included in static models as well. This is appropriate for adjustments which can be implemented within a short period of time, and is certainly restrictive if adjustments take considerable time and the transition process, as well as the new long-run state, is of interest. Most policy analyses with microsimulation models are done without considering behavioural adjustments and are thus restricted to 'first-round effects'. The simple reason is that even these models require considerable effort and cost. Moreover, they might provide a good approximation, at least in the short run, and are less contentious than 'second-round effects'.6 This point will be reconsidered below in the subsection on 'basic government models'.

16.3.1.2. Behavioural adjustments

Behavioural adjustments may be considered in microsimulation models in different forms. Obviously, the demands on data and effort may be high even for approaches which tackle a quite restricted set of firm decisions. Thus, it is not surprising that in some analyses of the impact of tax changes on firms' actual tax liabilities, the 'behavioural impact', as a part of the total impact of the reform on the firm's tax base (taxable income), is set by assumption. The assumption can be based on the assessments of experts. The total change in the tax base comprises this behavioural effect and the mechanical impact of the reform on the tax base. In the next step, the microsimulation model is used to determine the tax due, given the total changes in the tax base caused by the reform. An example of this approach are the simulations with the BIZ tax model used to assess the impact of the German corporate tax reform in Bach et al. (2008) (see below).
Given that the estimates of the change in taxable income are 'reliable', this approach is at least able to capture the impact of all non-linearities in the tax code which determine the tax liability, given a specific amount of taxable income. This approach might be justified especially in cases where the tax law refers to characteristics of firms which are not included in tax or accounting data, because, for example, the specific type of income was tax-exempt in the past and no alternative data sources are available. An obvious disadvantage of this rough approach is that the way the estimates of the behavioural impact were arrived at is not transparent and, in the case of simple expert guesses, cannot be made transparent.
6 The time dimension of the model (short run vs. long run) might also be regarded as a matter of modelling choice.
16.3.1.3. Narrow versus broad set of decision types

More advanced approaches have to consider which dimensions of firm decisions to model, and in what detail.7 Probably the most important decision of firms, or of the owners of firms, is the basic investment decision. This decision is especially important if the medium- and long-term impact of public policy on firms is considered. Besides the choice of investment, further important decisions concern financing, location, legal form, production technologies and labour demand. With respect to the impact of tax changes, profit shifting to countries with lower tax rates is a further important choice variable. A promising and tractable approach to incorporating the most important decision variables is the use of elasticities, which relate (changes in) tax rates to (changes in) capital accumulation, financing and other dimensions of firm behaviour. These elasticities might stem from econometric studies based on firm responses to earlier reforms or on other types of data providing variation in the variables of interest. To illustrate the procedure, we choose a specific example from a huge number of investigations: Chetty and Saez (2005) have studied how dividend taxation affects firms' payout policy. Based on a specific tax reform in the United States, they find 'an elasticity of regular dividend payments with respect to the marginal tax rate on dividend income of -0.5' (Chetty & Saez, 2005, p. 793). In principle, this elasticity could be used to assess firms' reactions to changes in dividend taxation under 'sufficiently' similar circumstances. The 'elasticity approach' is followed in a systematic way by Heckemeyer (2012) (see also Finke et al., 2013). The choice variables considered are the debt ratio, investment, profit shifting,8 location and organizational form. For the first three, responses to tax changes are captured using elasticities from the literature (condensed by applying a meta-analysis).
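As a sketch of how such a literature elasticity might be applied in a microsimulation context (a first-order log-linear approximation; the reform rates and the dividend level below are invented for illustration, and only the -0.5 comes from the study quoted above):

```python
# Elasticity of regular dividend payments with respect to the marginal
# dividend tax rate, taken from the literature (Chetty & Saez, 2005)
elasticity = -0.5

# Hypothetical reform: the marginal dividend tax rate falls from 38% to 15%
old_rate, new_rate = 0.38, 0.15
pct_rate_change = (new_rate - old_rate) / old_rate   # about -60.5%

# First-order approximation of the behavioural payout response
pct_payout_change = elasticity * pct_rate_change     # about +30.3%

old_dividends = 1_000_000.0
new_dividends = old_dividends * (1 + pct_payout_change)
print(round(new_dividends))  # roughly 1.30 million: payouts rise as the tax falls
```

In a full model this adjusted payout (or, analogously, adjusted investment or debt) would feed into taxable income, with the microsimulation model then determining the tax due, as in the Heckemeyer (2012) approach described in the text.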
The latter two are considered only at an aggregate level (see Heckemeyer, 2012, p. 26). The elasticities for the debt ratio, investment and profit shifting are applied to changes in, respectively, the dichotomous tax rate (incremental interest expense that is effectively tax-deductible times the statutory tax rate on profits), the effective marginal tax rate on profits and the statutory tax rate on profits. The elasticities are differentiated for a few types of firms. Further, a time path for the adjustments is considered. Applying the elasticities changes taxable income. A microsimulation model based on accounting data is then used to determine the level of taxes due under the new law. A more structural modelling approach resembles the basic modelling of firms in applied general equilibrium models. That means that firms'
7 These choices are obviously similar to the choices which have to be made in Applied General Equilibrium Models.
8 As far as we know, this is the only model which considers this decision.
Firm Level Models
487
decision rules are explicitly stated and actual behaviour is determined by applying the decision rules, assuming specific values for the structural parameters. In principle, these models could rely on standard behavioural assumptions about the firm, that is, the maximization of discounted profits or of the firm value over time, given conditions on technology, demand and market form.9 However, we are not aware of any microsimulation model that follows this approach. One reason might be that any modelling of forward-looking behaviour that relates to several periods is costly in terms of computing effort. Perhaps more importantly, it is hard to model the expectations of single firms about all the data that a firm considers as exogenous. The model that comes closest to this approach is van Tongeren (1995). This model does not consider the maximization of a certain reward function, but explicitly models firm decisions such as investment and financing over time. The focus of van Tongeren (1995) is on the impact of individual firm decisions on industry and macroeconomic variables and the respective repercussions (a circular flow (multi-layer) model, see van Tongeren, 1995, pp. 46, 70). In line with behavioural models of the firm, decisions are based on rules of thumb. They can be broadly divided into two categories: routine (short-run) decisions on prices, output, employment demand and the allocation of cash flow, and strategic choices, foremost investment and financing. Firms realize the outcome of their decisions and update them each year. The planning is based on detailed information represented in the income statement, the balance sheet and a cash-flow statement. The model determines household income as the sum of wage income and firms' dividend payments. Taxes on household income and corporate profits determine the state budget. Net income and taxes determine private and public consumption demand.
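The circular-flow accounting just described (wage income plus dividends form household income; household and corporate taxes form the state budget; net income and taxes drive private and public demand) can be sketched as follows; the function name, parameters and figures are illustrative, not taken from van Tongeren (1995).

```python
def household_and_state_accounts(wages, dividends, hh_tax_rate, corp_tax):
    """Stylized accounting identities of the circular-flow layer."""
    # Household income = wage income + firms' dividend payments.
    hh_income = wages + dividends
    hh_tax = hh_tax_rate * hh_income
    # State budget collects household and corporate taxes.
    state_budget = hh_tax + corp_tax
    # Net income and taxes determine private and public consumption demand.
    private_demand = hh_income - hh_tax
    public_demand = state_budget
    return private_demand, public_demand
```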
Sectoral demand is attributed to individual firms according to their market share, which depends on their sales price. Deviations between supply and demand for a firm's output in the short run are cleared by a rationing scheme (see van Tongeren, 1995, p. 62). Investment decisions are determined in a process which encompasses an 'assessment of market trends, available technologies, project profitability and financial solvency' (van Tongeren, 1995, p. 71). In the model application, it is assumed that expected sector demand is given by an external forecast, from which firms in a sector infer their demand changes (see van Tongeren, 1995, p. 139). An advantage of this approach is the rather flexible modelling, which is able to take into account the specific circumstances of economies. Moreover, the model captures repercussions at the aggregate level, which makes it stand out among the models considered in this chapter. On the other
9 See Hart (2011) for a review of theoretical approaches to firm modelling.
488
Hermann Buslei, Stefan Bach and Martin Simmler
hand, there are possible disadvantages: (1) Rules of thumb may seem plausible, but it is not guaranteed that they adequately describe actual behaviour. (2) Model results might to some extent depend on the rationing scheme chosen, and empirical evidence on alternative schemes seems to be scarce. (3) The input information on expectations of future demand, which dominates the development of the model, is exogenous. This feature could be restrictive if the policies studied might themselves have an impact on firms' expectations of future output demand. A second approach that (potentially) goes beyond the application of a few point elasticities is an 'econometric' simulation model which explicitly considers consecutive time periods, such as calendar years. The equations estimated govern transitions between different states in subsequent time periods and changes of continuous variables over time. The estimation requires panel data. The behavioural equations to be estimated, especially the demand for investment goods, can be derived from an optimizing approach.10 Expectations of future developments (prices, demand, etc.) have to be approximated. It has to be decided whether decisions are assumed to be taken simultaneously or in a specific order. The more interdependencies between decisions are considered, the richer, but also the more complex, the model gets. The only model we are aware of which in principle follows this approach is Shahnazarian (2004, 2011). This model assumes (forward-looking) optimizing behaviour, but uses some ad hoc assumptions to derive a recursive system of equations in which some important dependent variables, such as investment in machines, depend on their own values in the previous period. The model is based on pooled accounting and tax data for 1997 to 1999 (stock companies only) and considers stock and flow variables (financial statement modelling, see Clarke & Tobias, 1995).
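A minimal sketch of such a recursive system, with a dependent variable depending on its own lagged value, could look as follows; the law of motion and its coefficients are hypothetical, not Shahnazarian's estimates.

```python
def simulate_investment(inv_initial, years, drift=10.0, persistence=0.5):
    """Recursive law of motion: inv_t = drift + persistence * inv_(t-1).
    Coefficients are invented; in an econometric simulation model they
    would be estimated from panel data."""
    path = [inv_initial]
    for _ in range(years):
        path.append(drift + persistence * path[-1])
    return path
```

In a full model, several such equations (investment, funds, income) would be solved period by period, with taxes recomputed each year.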
It further includes a detailed representation of the Swedish tax system and is thus able to capture the impact of changes in this system on the dependent variables.

16.3.1.4. Comparison to household models

Above, it was mentioned that microsimulation models for firms are quite rare compared to models for households. Moreover, our survey of applications below will show that most firm models are of the static type with no behavioural adjustments and are thus less 'elaborate' than many household models, which take into account dynamics and behavioural adjustments. Several reasons for this finding have been discussed briefly in
10 Important choices are whether to consider adjustment costs for investment, which is especially important for assessing the impact of taxes in the short run, and how to model the firm's financing, which is generally important for assessing the impact of taxes.
the literature. A first type of argument relates to differences in the tax bases of households and firms. While for households 'economic income' is usually equal to taxable income, the two might differ for firms (accounting sphere and tax sphere). Further, relations between firms may be important, for example if firms form tax groups. Moreover, contrary to the rules for firms, the tax code for households does not provide major inter-temporal offsetting opportunities. A third reason may be given by interdependencies between different firm taxes and the fact that tax rules may differ across legal forms of firms (see the preface by Spengel in Reister, 2009, pp. 5, 9-10). Firm models are thus more complex mainly because tax regulations for firms are more complex. Moreover, access to firm data, especially tax data, is certainly more restricted than access to household data (see Reister, Spengel, Heckemeyer, & Finke, 2008, p. 1). It has also been stated that firm behaviour is more complex than household behaviour (see Reister, 2009, p. 10 and the references therein). However, it is not obvious that household behaviour, which also regularly includes inter-temporal aspects (e.g. investment in education, saving) and complex personal decisions related to the economic sphere (e.g. household formation), is less complex than firm behaviour. The time dimension is in principle similarly important in household and firm models (for a survey of dynamic household models, see Li & O'Donoghue, 2013). What seems to make behavioural modelling of households more tractable is the fact that probably all households can adequately be modelled as price takers, while this assumption is not adequate for at least some firms. If firms are aware of their impact on prices, and thus on the well-being of other firms, their expectations about the reactions of their competitors have to be explicitly modelled.
A general approach for capturing this type of interdependency is the modelling of dynamic stochastic games. A related increase in complexity would occur on the household side if one considered interdependencies in the utilities of individuals, especially in networks.

16.3.2. What are the data requirements?

Firm microsimulation models require microdata on the key variables for the specific analysis for which they are designed. For example, models which intend to assess the fiscal effects of tax reforms require representative data on the tax base of the taxes considered, for the specific unit of a firm which is liable to the tax. This is complicated by the fact that the legal entity and the taxable entity might differ due to the formation of tax groups. Many firm simulation models which include a large number of firms rely on tax data (income or corporation tax) and financial accounting data. However, if taxes are not the focus of the simulation model, other data sources might be preferable or even indispensable. This might be the case
for example for models which consider specific production technologies in detail and consider the impact of changes in market conditions for firms. These models require, among other information, data on technological relations, budget sets and prices. An example of this type of modelling is the FLIPSIM model.11 This model is (recursively) dynamic and was designed for the analysis of specific public policies on farms. Probably due to the considerable information requirements and the dynamic approach, simulations with this model are usually run for a few representative firms. Because we focus in this chapter on simulation models based on large data sets, we refrain from describing this model in detail. An important characteristic of firms in our context is whether they are incorporated (including partnerships) or are sufficiently large sole proprietorships, such that they are legally required to disclose accounting data and are thus represented in the respective data sets. For these firms, tax files from the corporation tax and accounting data provide alternative, or where both are present complementary, data sources for simulation models. For small unincorporated businesses, general household surveys and income tax data are the best available databases for simulation models. As already mentioned in the introduction, we only consider simulation models for incorporated and large firms in this chapter. Tax files are a natural candidate for all simulation models which intend to investigate the impact of taxes on tax revenue and the earnings distribution.
This is due to the fact that tax codes often contain complex and detailed rules for the determination of the tax base as well as for the calculation of the tax due given the tax base, and usually the necessary information is not covered in this detail in other types of firm data sets.12 However, tax data are only well suited for the assessment of policy reforms if these reforms relate to information already relevant under the current law, as tax data often only contain information which is essential for the tax assessment. In all other cases, the analysis is restricted to regulation changes that apply to a given profit level. Of minor importance is the fact that tax data usually do not contain information which could be of interest for assessing the results of the simulation exercises, such as the number of employees. Although in many cases cross-section tax data are sufficient, panel tax data might allow more reliable results. First, tax systems may allow some form of inter-temporal income shifting, for example through loss carry-back or carry-forward regulations. Knowledge of possible loss offsetting is
11 See http://www.afpc.tamu.edu/models/flipsim/ for a brief description of the current version of the model and references to applications.
12 The use of tax data may be limited if the data set made available by tax authorities or a Statistical Office does not contain all information from the assessment.
especially important for revenue forecasting. A second argument for panel data is that behavioural adjustments of firms are not implemented instantaneously but may take several periods. Moreover, panel data are used to estimate the strength of firms' reactions to certain changes in their environment. We will come back to this later, after considering accounting data as the alternative database. The use of accounting data for microsimulation models has been introduced in particular by academic research. Compared to tax data, these data are not subject to confidentiality restrictions, as they are based on legal publication requirements in the respective countries. Furthermore, the data are often available for several years and may also include information on firms' shareholders, stock prices, etc. However, there are two potential shortcomings related to the use of accounting data: sample selection and book-tax differences. Since the data sources for accounting data are published financial statements, differences in publication requirements between types of firms are reflected in the information available for different types of firms. In Germany, for example, only medium-sized and large companies (based on size, number of employees and turnover) have to publish their income statements. Thus, studies on firm behaviour relying on flow variables can typically only be done for larger firms. Further, international differences in publication requirements restrict comparability between countries. Although Germany and France are two countries of similar size, the international firm database ORBIS covered around 75,000 French firms in 2003, compared to 15,000 German firms (Devereux & Loretz, 2008). This sample selectivity is in particular a problem for the quantification of the distributional and revenue effects of tax reforms.
Academic research, in contrast, is less affected, as the focus on a particular set of firms still allows one to gain insights on firm behaviour (although these might be limited to the particular set of firms). Sample selection in studies analysing the distributional impact of tax reforms is often addressed by extrapolation, for instance using information from tax statistics on the whole firm universe (Finke et al., 2013). However, this works only if the sample on which the microsimulation is applied is non-selective; otherwise the extrapolation hinges on the comparability of the observed and non-observed firms. Given the differences often found with respect to firm size or between public and private firms, this assumption is questionable.13
13 Most of the firm research in the United States is based on COMPUSTAT data, which contains information for listed companies in the United States. These firms differ from non-listed firms, as shown for example by Bargeron, Schlingemann, Stulz, and Zutter (2008) for mergers and acquisitions, by Brav (2009) for capital structure choice, and by Asker, Farre-Mensa, and Ljungqvist (2012) for investment behaviour.
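The extrapolation described above, which scales sample results to the firm universe using information from tax statistics, can be sketched as a simple per-stratum grossing-up. All strata and figures are invented, and the caveat from the text applies: the scaling is only valid if observed and unobserved firms are comparable within each stratum.

```python
def gross_up(sample_totals, sample_counts, universe_counts):
    """Scale per-stratum sample totals to universe level by the ratio of
    universe to sample firm counts (assumes within-stratum comparability)."""
    grossed = {}
    for stratum, total in sample_totals.items():
        factor = universe_counts[stratum] / sample_counts[stratum]
        grossed[stratum] = total * factor
    return grossed

# Hypothetical: tax revenue observed for 200 small and 50 large sample firms.
revenue = gross_up({"small": 1000.0, "large": 4000.0},
                   {"small": 200, "large": 50},
                   {"small": 2000, "large": 100})
```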
The relevance of the second shortcoming of financial statements, that is, the missing information on the determinants of firms' tax payments, also depends on the institutional framework in the country of the analysis. If tax determination follows accounting rules, differences are negligible. For the United States, for example, the difference between tax and accounting data does not seem to be overwhelming, as Graham and Mills (2008) find that marginal tax rates calculated from accounting data are a very good approximation of marginal tax rates based on tax data.14 If, however, the tax code includes several options (rights to opt), the potential differences, and thus the relevance of book-tax differences, increase. To gain at least aggregate information on how these options are exercised in practice, the insights of tax accountants might be used. Reister (2009), for example, used the results of a survey among tax consultants to determine how firms use tax-related options. Merging both data sources is the best way to include key variables of firm behaviour while retaining the precision of calculating firms' tax burdens using tax data. Devereux, Liu, and Loretz (2014) combined UK tax return data with financial statements data and are able to match 90% of all firms. Another example is Shahnazarian (2011), who uses combined accounting and tax data in a pooled data set for a three-year period (1997-1999). The data were prepared by the Swedish Ministry of Finance and Statistics Sweden. In dynamic models, a second source of data is needed: either the underlying parameters needed to determine the behavioural responses (Shahnazarian, 2011; van Tongeren, 1995) or the parameters (elasticities) for the behavioural response directly (Heckemeyer, 2012). van Tongeren (1995) uses accounting data for a quite small subset of Dutch companies for the period 1978-1987, which he revises in order to achieve consistency.
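The kind of data merge mentioned above, combining tax returns with financial statements on a firm identifier as in Devereux, Liu, and Loretz (2014), can be sketched as an inner join; the record layout and field names are hypothetical.

```python
def merge_firm_data(tax_records, accounts):
    """Inner join of tax and accounting records on a firm identifier.
    Returns the merged records and the match rate relative to the tax file."""
    accounts_by_id = {a["firm_id"]: a for a in accounts}
    merged = []
    for rec in tax_records:
        acc = accounts_by_id.get(rec["firm_id"])
        if acc is not None:
            merged.append({**rec, **acc})
    match_rate = len(merged) / len(tax_records)
    return merged, match_rate

# Hypothetical example: only firm 1 appears in both data sources.
tax_file = [{"firm_id": 1, "tax_due": 5.0}, {"firm_id": 2, "tax_due": 7.0}]
accounts = [{"firm_id": 1, "employees": 30}]
matched, match_rate = merge_firm_data(tax_file, accounts)
```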
The parameters of the model are 'calibrated' on these data in a trial-and-error procedure making substantial simplifying assumptions. Goodness of fit to observations is assessed for only a few endogenous variables, among them investment (see van Tongeren, 1995, p. 149). Shahnazarian (2011) estimates recursive equations for the endogenous variables of the model (most notably investment, funds and income). The data set contains only three observation years (with some retrospective information). This might be restrictive, as it hampers, for example, the estimation of firm fixed effects or specific lag structures. The last approach we consider is the systematic use of elasticities. The choice of the values of the elasticities for the behavioural responses faces a trade-off between the accuracy of the estimates and accounting for firm heterogeneity. To meet the aim of accuracy, a meta-analysis condensing the
14 See, however, Desai (2005) for a discussion of the development of book-tax values in the United States.
empirical findings of the literature can be applied (e.g. Heckemeyer, 2012). Although the derived parameter values have high external validity, they neglect potential differences between firms. To account for firm heterogeneity, estimates for subgroups can be derived, which are, however, more likely to be sensitive to the studies included in the meta-analysis. For the study of specific topics, such as the impact of the business cycle on tax revenue, even existing panel data might not provide the necessary information. An option in this case is to generate synthetic data sets. Creedy and Gemmell (2009, 2010) draw synthetic data from observed firm distributions and use parameters on behavioural adjustments from the literature in order to simulate the development of UK corporate tax revenue.

16.4. Uses and applications

The analysis undertaken with firm microsimulation models depends on the model type. Basic government models are used to forecast tax revenue as well as to quantify the revenue and distributional effects of (potential) tax reforms without considering behavioural adjustments. More advanced models for forecasting and policy analysis which include behavioural adjustments are mainly developed by academics. The same is the case for the application of microsimulation models as an intermediate step in the analysis of firm behaviour.

16.4.1. Basic government models

A widespread application of microsimulation modelling of firms is in the field of corporate taxation. In most OECD countries, corporate tax revenue accounts for a remarkable share of total tax revenue (OECD, 2013). Corporate taxation mainly focuses on business income. In addition, some countries charge taxes on businesses at the regional or local level, which are sometimes levied on business properties, payroll or even financing expenses (as is the case in Germany or Italy).
Moreover, firms are involved in financing social security through employers' contributions on behalf of their employees or through payroll taxes. Governments and policy makers have a strong interest in assessing the fiscal implications of corporate taxation. Since firms are central units in economic decision-making, information on the distributional and economic impact of business taxation is highly relevant for economic policy. The traditional tasks of microsimulation models of corporate taxation are revenue forecasting and tax reform evaluation (Ahmed, 2006). Basically, corporate tax rates are broadly proportional with respect to taxable income. However, effective tax rates related to actual corporate income, as observed in financial accounting or national accounts, could widely
differ due to specific tax treatments. This is the case due to special depreciation allowances, tax incentives, non-deductible expenses (e.g. for undue financing expenses to abroad), specific allowances (e.g. for SMEs), or the offset of losses carried forward from past tax years. As these issues markedly affect tax liabilities, detailed information on their distribution is required for a proper evaluation of the revenue and distributional effects of tax reforms. Even under current legislation, these issues could affect tax revenue over the business cycle or across industries. Compared to the aggregated tabulations provided by financial authorities or statistical agencies, microdata from firm accounts provide complete information on all tax-relevant items, such that interactions between different items are fully taken into account. Microsimulation models are able to exploit this detailed information. Ideally, the models would moreover make reliable predictions of firms' behavioural responses to taxation. Many governments of the large OECD countries apply microsimulation models of corporate taxation based on representative data sets from tax files (Ahmed, 2006); examples are the United States, Canada, the United Kingdom, France, Ireland, Italy15 and Sweden. For Germany, Bach et al. (2008) built up and applied a corporate taxation microsimulation model on behalf of the federal government. In Italy, a microsimulation model was built up by the Statistical Office and academics, based on a survey of large companies and published financial statements (DIECOFIS, Parisi, 2003). A tax calculator is programmed to recalculate the tax liability according to different legislation. Relevant items, for instance gross income before and after adjustments for tax assessment, the offset of losses from other tax years, taxable income, the assessed tax liability, tax credits, and owed tax payments, could be grossed up to aggregate levels.
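A stylized version of such a tax calculator, here reduced to a proportional rate and a single unlimited loss carry-forward rule (both simplifying assumptions, not any country's actual law), might look as follows:

```python
def corporate_tax_path(profits, rate=0.25):
    """Apply a proportional rate to positive taxable income after
    offsetting losses carried forward from earlier years."""
    loss_carry_forward = 0.0
    taxes = []
    for profit in profits:
        taxable = profit - loss_carry_forward
        if taxable > 0:
            taxes.append(rate * taxable)
            loss_carry_forward = 0.0
        else:
            taxes.append(0.0)
            loss_carry_forward = -taxable  # unused losses carried forward
    return taxes
```

Because of the loss offset, the effective rate on accounting profit over the cycle falls below the statutory rate, which is precisely why detailed micro-level information on such items matters for revenue estimates.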
Distributional analysis breaks down these outcomes across firm characteristics such as industries, regions, firm size, or specific tax-relevant items. Tax reform scenarios can be analysed as far as the changes in tax law refer to items included in the dataset. Otherwise, information from additional sources has to be imputed, or ad hoc assumptions have to be made. Scenario analysis provides information on the revenue and distributional effects of the tax reform by comparing the results from the policy scenario with those under current legislation. Detailed information from firms' financial statements, that is, the balance sheet and the income statement, would be interesting in order to analyse the impact of income determination rules regarding the accounting of specific assets and liabilities, for example depreciation allowances, provisions, non-deductible expenses, etc. As far as data sets from tax files
15 Balzano, Oropallo, and Parisi (2011).
either do not include such information or only capture the main items from the financial accounts, the relevant information might be imputed from published financial statements or business surveys. Whether imputations at the individual firm level are feasible depends on data availability and data access, including restrictions due to data protection laws. However, different income determination rules between financial accounting and tax accounting might impair such imputations. Moreover, in the case of multinationals, most countries only tax the domestic subsidiaries, which restricts the use of consolidated financial statements. Since the collection and editing of corporate microdata bases takes some time, the available databases are often several years old. Revenue projections and reform evaluations, however, refer to future years. To project the model database into the future, traditionally static aging procedures are implemented (see Ahmed, 2006). The dataset is re-weighted to reflect structural changes in the corporate sector, for instance with respect to firm size, industry or legal form, using suitable data from current business surveys, more recent tax files (e.g. from VAT assessment) and projections of these items to future years. However, such simple static aging procedures cannot model well the profit fluctuations and the volatility of taxable income over the business cycle, which strongly affect tax revenue. The impact may be caused by running profits and losses, by inter-temporal loss offset, or by the utilization of deductions and other behavioural responses. Therefore, some models utilize more sophisticated methods, dynamically simulating endogenous adjustments for selected key variables (see the case of the UK Inland Revenue tax model, Eason, 2000, or the studies by Creedy & Gemmell, 2009, 2010). Data availability often restricts such analysis.
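Static aging by reweighting can be sketched as rescaling firm weights so that weighted counts match projected margins. The following one-dimensional post-stratification with invented numbers is a minimal illustration; real procedures calibrate to several margins at once.

```python
def age_weights(weights, strata, target_counts):
    """Rescale weights within each stratum so that weighted firm counts
    match projected totals (one-dimensional static aging)."""
    current = {}
    for w, s in zip(weights, strata):
        current[s] = current.get(s, 0.0) + w
    return [w * target_counts[s] / current[s] for w, s in zip(weights, strata)]

# Hypothetical: manufacturing is projected to double, services to triple.
new_weights = age_weights([1.0, 1.0, 2.0],
                          ["manu", "manu", "serv"],
                          {"manu": 4.0, "serv": 6.0})
```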
Panel data would allow more reliable estimation of the relevant transition probabilities, but are not available from tax records in many countries. The more accurate and recent the database, the greater the reliability of corporate tax modelling, not only for policy evaluation but also for revenue forecasting (see Ahmed, 2006; Eason, 2000). Modelling adjustments over the business cycle or in response to tax reforms could provide better insights into fluctuations in corporate income taxation, which are often very high. Relevant elasticities estimated in econometric models (possibly including dependent variables generated by using microsimulation, see below) could be used in macro forecasting models. In many cases, government models of corporate taxation are basically 'static', as they do not take into account behavioural responses with regard to the main firm decisions, such as those on location, investment, financing, labour demand, legal form, production technologies, etc. As estimates of these 'second round' effects are more uncertain or even contentious, governments are rather reluctant to utilize point estimates in their official statements. However, tax reforms are often motivated by efficiency reasons or by giving incentives for investment, thus aiming at making taxpayers
react less or more to taxation. In the course of globalization, the impact on location choice and international tax avoidance has become more important. Thus, it is increasingly relevant for governments and policy makers to work with at least rough estimates of likely behavioural responses. Therefore, elasticities taken from the literature, at best differentiated across business characteristics such as firm size or industry, could be plugged into the microsimulation model on an ad hoc basis. However, this has not yet been done (at least systematically) by governments, while first attempts have been made by academic researchers (see below).
16.4.2. Advanced approaches for forecasting and policy analysis

If basic government models rely solely on tax data, they often do not contain information on the determinants of corporate profits. Moreover, they often lack further information of interest, such as the number of employees. This shortcoming is especially important if reform proposals relate to these firm characteristics (see below).16 Traditional government models also reach their limits when the impact of cross-border or international economic relations is important, as in the case of the introduction of a common consolidated corporate tax base (CCCTB) in Europe. To contain profit-shifting activities in Europe and to reduce the compliance costs of multinational companies, the European Commission has promoted the CCCTB since the early 2000s (Commission of the European Communities, 2001). Microsimulation has been seen as an important step towards its introduction, as member states are not willing to introduce a concept with unclear consequences for national tax revenue. Since the concept links tax bases across borders, tax data and ownership information for firms belonging to a multinational group in every member state would be needed to draw a clear picture. As this kind of data is not available, financial statements data have been used for a first quantification (Devereux & Loretz, 2008; Fuest, Hemmelgarn, & Ramb, 2007; Oestreicher & Koch, 2011). Although all microsimulation studies assumed that firms do not adjust their behaviour, they differ strongly in their estimates of the tax revenue effects. The main reasons are sample selectivity and book-tax differences. Nevertheless, the studies added important insights to the debate, for example on the role of the apportionment factors and of a mandatory versus optional introduction.
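The mechanics of apportioning a consolidated group tax base across member states can be sketched as follows. The equal weighting of three factors (assets, labour, sales) loosely mirrors the spirit of the Commission's proposal, but the weights, factor values and state labels are illustrative assumptions.

```python
def apportion_tax_base(consolidated_base, factors, weights=(1/3, 1/3, 1/3)):
    """Share a consolidated group tax base across states by weighted factor
    shares; `factors` maps each state to a (assets, labour, sales) tuple."""
    totals = [sum(f[i] for f in factors.values()) for i in range(3)]
    shares = {}
    for state, f in factors.items():
        share = sum(w * f[i] / totals[i] for i, w in enumerate(weights))
        shares[state] = consolidated_base * share
    return shares

# Hypothetical two-state group with identical factors: the base splits evenly.
shares = apportion_tax_base(300.0, {"DE": (10.0, 10.0, 10.0),
                                    "FR": (10.0, 10.0, 10.0)})
```

Each member state would then apply its own rate to its apportioned share, which is why the apportionment factors matter so much for national revenue.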
16 In some circumstances, imputation might provide a satisfactory solution (see above). Another option is to rely on accounting data as the basic data set and adjust them for the simulation of taxes. An example of this approach is Reister (2009), who developed a firm simulation model for Germany based on the Dafne data set. He applied the model to the German tax reform of 2008 and to a switch to a dual income tax, without considering behavioural adjustments. This was done in later work relying on this model (Finke et al., 2013; Heckemeyer, 2012).
A further limitation of the basic government-type model is the static modelling, that is, the fact that behavioural responses to policy changes are not considered. Several approaches promise to improve the accuracy of policy analyses. We start with a model which highlights the importance of the business cycle for tax revenue and which meets our requirement of the use of large microdata sets only in a broader sense. Creedy and Gemmell (2009) construct a synthetic dynamic microsimulation model to analyse the revenue elasticity of the UK corporation tax system over the business cycle. They use stylized distributions of profits and their changes over time. The corporate model accounts for group relief, capital allowances and losses. They find high volatility in revenue elasticities to be especially associated with economic downturns. In mild economic downturns, corporation tax revenue elasticities may rise because tax growth falls less than profit growth. In more severe downturns, large but temporary decreases in revenue elasticities, and even negative elasticities, can be expected. Creedy and Gemmell (2010) extend the model to firms' behavioural responses to changes in tax rates, which are influenced by the utilization of deductions over the business cycle. A second approach is to take into account behavioural adjustments using elasticities for several firm choice variables. This has been done in recent work by Heckemeyer (2012) and Finke et al. (2013). The authors assess the distributional impact of the German 2008 tax reform, accounting for firms' behavioural responses with respect to their financing structure, investment, profit shifting, location and organizational choice. To account for the sample selectivity of the underlying financial statements data, the data and results are extrapolated using tax revenue statistics.
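The behavioural adjustment step just described, applying elasticities to several choice margins before recomputing taxable income, can be sketched as follows. The elasticity values and the additive linearization are invented for illustration and are not Heckemeyer's meta-estimates.

```python
def behavioural_taxable_income(base_income, d_emtr, d_statutory,
                               e_invest=-0.4, e_shift=-0.8):
    """Adjust taxable income for behavioural responses: investment reacts to
    the relative change in the effective marginal tax rate, profit shifting
    to the relative change in the statutory rate. All values are invented."""
    invest_effect = e_invest * d_emtr        # relative change via investment
    shift_effect = e_shift * d_statutory     # relative change via shifting
    return base_income * (1.0 + invest_effect + shift_effect)

# Hypothetical reform: both rates fall by 10% and 5% respectively,
# so investment and inward profit shifting raise taxable income.
adjusted_income = behavioural_taxable_income(1000.0, -0.10, -0.05)
```

The adjusted income would then be fed back into the tax calculator to obtain post-behavioural revenue.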
The model was used to assess the various elements of the German corporate tax reform of 2008, which followed the international trend of broadening the tax base while reducing the tax rate.

Another promising approach was developed by van Tongeren (1995), who builds a 'calibrated' simulation model in which firms follow certain decision rules (see above). The model is used to assess the impact of changing sales expectations and, as its only policy analysis, the abolition of an investment subsidy then present in the Netherlands. Note that a model without behavioural responses would 'simply' calculate the revenue gains due to the abolition, given a fixed level of investment, while more aggregated models could not consider the institutional details of the policy (see van Tongeren, 1995, p. 183). In van Tongeren's model, firms' investment and other decision margins, among them labour demand, may react to the policy. Overall, van Tongeren (1995, p. 190) finds only small effects for the investment subsidy analysed in the model.

As mentioned above, the model developed by Shahnazarian (2004, 2011) allows for the analysis of adjustments to changes in the environment of the firm, for example reactions of a firm to tax changes. Shahnazarian (2011) used the model in order to assess a change in the corporate tax
Hermann Buslei, Stefan Bach and Martin Simmler
rate. First, the reference path under current law is simulated for five consecutive years (2000–2004). A comparison of simulated with actual tax payments over this period shows a good approximation for the total, although the distributions of tax payments across firms differ considerably (see Shahnazarian, 2011, p. 9). In the first policy simulation, the corporate tax rate is reduced by 3 percentage points (10.7%). One important result is that the amount of taxes paid falls somewhat more strongly (by around 12%) in the first three years after the policy change.
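The loss-offset mechanics behind the business-cycle asymmetries reported by Creedy and Gemmell can be illustrated with a minimal numerical sketch. This is not a reconstruction of their model: the 25% tax rate and the three-firm 'economy' below are invented.

```python
# Why loss offsets make corporation tax revenue elasticities volatile
# over the cycle. All figures are hypothetical.

def firm_tax(profit, loss_cf, rate=0.25):
    """Tax due and updated loss carryforward for one firm."""
    taxable = profit - loss_cf
    if taxable <= 0:
        return 0.0, -taxable          # no tax; unused losses roll forward
    return rate * taxable, 0.0

def aggregate(profits, loss_cfs):
    """Total tax across firms plus each firm's new carryforward."""
    total, new_cfs = 0.0, []
    for p, cf in zip(profits, loss_cfs):
        t, ncf = firm_tax(p, cf)
        total += t
        new_cfs.append(ncf)
    return total, new_cfs

def elasticity(p0, p1, t0, t1):
    """Revenue elasticity: proportional tax change / proportional profit change."""
    return ((t1 - t0) / t0) / ((p1 - p0) / p0)

# Year 0: all three firms profitable.
profits0 = [100.0, 80.0, 40.0]
tax0, cfs = aggregate(profits0, [0.0, 0.0, 0.0])      # tax0 = 55.0

# Year 1: severe downturn; firm 3 makes a loss that yields no refund.
profits1 = [50.0, 30.0, -10.0]
tax1, cfs = aggregate(profits1, cfs)                  # tax1 = 20.0
e_down = elasticity(sum(profits0), sum(profits1), tax0, tax1)

# Year 2: recovery; firm 3's carried-forward loss damps tax growth.
profits2 = [100.0, 80.0, 40.0]
tax2, cfs = aggregate(profits2, cfs)                  # tax2 = 52.5
e_up = elasticity(sum(profits1), sum(profits2), tax1, tax2)

print(round(e_down, 3), round(e_up, 3))               # 0.933 0.758
```

Because losses are floored at zero tax and then carried forward, revenue neither falls one-for-one with aggregate profit in the downturn nor recovers one-for-one afterwards, which is the asymmetry driving the volatile (and potentially negative) elasticities the authors report.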
16.4.3. Microsimulation as a component of broader academic studies

Two further applications of microsimulation for academic research can be distinguished. On the one hand, microsimulation is used to derive more precise expressions for firms' expectations of their future tax burden. Most prominent in this regard is the calculation of marginal tax rates (MTRs) as proposed by Shevlin (1990) and Graham (1996). Tax payments are to some extent uncertain, as firms may, for example, make losses or might even go bankrupt. Statutory tax rates might thus provide a misleading figure for firms' tax burden. The MTR incorporates this uncertainty and gives the present value of current and expected future taxes paid on an additional dollar of income earned today. To derive the MTR, Graham (1996) proposed forecasting firms' earnings using a random walk with drift and applying microsimulation to derive firms' taxable income. Although the method is mainly applied using financial statements data, Graham and Mills (2008) show that the simulated MTRs provide a sufficient approximation to those derived from tax data. This method has been used to study a wide array of corporate decisions, for example capital structure and payout policy as well as the cost of capital and investment.17

The second main application of microsimulation in academic research arises when implementing instrumental variable techniques. An important concern in the econometric analysis of taxation is the endogeneity of the variables of interest, either because the dependent variable affects the explanatory variable directly (e.g. firms' finance structure and the marginal tax rate) or because both are affected by a third variable (e.g. asset structure). One way to address the resulting endogeneity bias, as proposed by Gruber and Saez (2002), is to implement an instrumental variable strategy where the instrument is the simulated tax rate that would have prevailed without behavioural changes (e.g.
using the lagged asset structure). This approach has been used, among other things, to determine the elasticity of the corporate tax base (Dwenger & Steiner,
17 See Graham (2003) for a survey.
2012; Gruber & Rauh, 2007) and firms' capital structure choice with respect to taxation (Dwenger & Steiner, 2014; Fossen & Simmler, 2012).

Government models are a valuable tool, especially for the forecasting of tax revenue and the analysis of policy changes. First-round effects, which are measured in basic government models, play an important role in policy discussions. However, there are important shortcomings, most notably the dominance of static modelling and the fact that behavioural adjustments are in most cases taken into account only in quite rough form. In the field of academic research, microsimulation models have been used to advance the basic government models to capture firms' reactions to changes in their environment and to policy changes. Although different in methodology, all approaches demonstrate the potential gains from a systematic consideration of behavioural responses in the assessment of policy changes. Moreover, simulation models play an important role in understanding firm behaviour: they can be used either to derive a more precise estimate of firms' expectations or to apply an instrumental variable strategy that takes firms' behavioural responses into account.
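The simulated-MTR idea described above can be sketched as follows. This is not Graham's (1996) implementation: the tax rate, discount rate and income-process parameters are invented, and, as a simplification, the extra dollar is added to current income only, with its effect on future taxes flowing solely through the loss carryforward.

```python
import random

def pv_taxes(incomes, rate=0.35, r=0.08):
    """Present value of taxes on an income path, with loss carryforward
    (losses yield no refund but offset later profits)."""
    cf, pv = 0.0, 0.0
    for t, y in enumerate(incomes):
        taxable = y - cf
        if taxable > 0:
            pv += rate * taxable / (1.0 + r) ** t
            cf = 0.0
        else:
            cf = -taxable
    return pv

def simulated_mtr(y0, drift, sigma, horizon=10, n_paths=2000, seed=1):
    """Expected present value of the extra taxes triggered by one more
    unit of income today, with future income following a random walk
    with drift (the extra unit is added to current income only)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        path, y = [], y0
        for _ in range(horizon):
            y += rng.gauss(drift, sigma)   # random walk with drift
            path.append(y)
        total += pv_taxes([y0 + 1.0] + path) - pv_taxes([y0] + path)
    return total / n_paths

# A comfortably profitable firm faces the full statutory rate at the margin:
print(round(simulated_mtr(100.0, 10.0, 15.0), 4))      # 0.35
# A currently loss-making firm faces a lower MTR, because the extra unit
# only reduces future taxes through a smaller loss carryforward:
print(simulated_mtr(-20.0, 10.0, 15.0) < 0.35)         # True
```

The simulated rate for the loss-making firm is an average over many forecast paths, which is precisely what makes the MTR a forward-looking measure rather than the statutory rate.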
16.5. Summary and future directions

Firm microsimulation models are an important tool, especially for tax revenue forecasts and tax policy evaluations. However, most of the models are static and restricted to the analysis of 'first round effects'. Only a few models include behavioural adjustments beyond the use of a global 'guesstimate' of the impact of a given policy on taxable profits. Important progress is provided by models which systematically use elasticities from the empirical literature to incorporate behavioural responses. These elasticities are assumed to be more or less the same for all types of firms. However, firms' reactions to changes in their environment, for instance changes in tax regulation, might differ considerably, and suitable elasticities might not be available for the groups of interest. We expect that microsimulation models of firms will consider these reactions in much more detail in the future. The alternative approaches to including behavioural responses in the literature consist of an econometric model and a calibrated model with rule-of-thumb decision-making. Although promising, these models require further development and testing. As the second type is closely related to applied general equilibrium models, it should be rewarding to develop this type of model in parallel with applied general equilibrium models and to compare the results. This would also help to consider general equilibrium effects, which are currently neglected or only implicitly included (as is probably the case in the 'elasticity approach'). A further promising field is the development of international models which extend current approaches that consider more or less independently
models for different countries, in order to assess policies which affect several countries, such as environmental and tax policies. Finally, another desirable improvement concerns the integration of shareholder taxation into firm microsimulation models. This would allow more accurate modelling of firms' financing decisions and extend the range of possible distributional analyses.
References

Ahmed, S. (2006). Corporate tax models: A review. SBP Working Paper No. 13. State Bank of Pakistan, Karachi, Pakistan.
Asker, J., Farre-Mensa, J., & Ljungqvist, A. (2012). Comparing the investment behavior of public and private firms. Unpublished working paper. New York University and Harvard University.
Bach, S., Buslei, H., Dwenger, N., & Fossen, F. (2008). Dokumentation des Mikrosimulationsmodells BizTax zur Unternehmensbesteuerung in Deutschland. DIW Berlin Data Documentation 29.
Balzano, S., Oropallo, F., & Parisi, V. (2011). On the Italian ACE and its impact on enterprise performance: A PLS-path modeling analysis. International Journal of Microsimulation, 4(2), 14–26.
Bargeron, L. L., Schlingemann, F. P., Stulz, R. M., & Zutter, C. J. (2008). Why do private acquirers pay so little compared to public acquirers? Journal of Financial Economics, 89, 375–390.
Bettendorf, L., van der Horst, A., de Mooij, R., Devereux, M., & Loretz, S. (2009). The economic effects of EU reforms in corporate income tax systems. Study for the European Commission Directorate General for Taxation and Customs Union, Contract No. TAXUD/2007/DE/324.
Brav, O. (2009). Access to capital, capital structure, and the funding of the firm. Journal of Finance, 64, 263–308.
Chetty, R., & Saez, E. (2005). Dividend taxes and corporate behavior: Evidence from the 2003 dividend tax cut. Quarterly Journal of Economics, 120(3), 791–833.
Clarke, S., & Tobias, A. M. (1995). Complexity in corporate modelling: A review. Business History, 37(2), 1–44.
Commission of the European Communities. (2001). Final communication from the commission to the council, the European Parliament and the economic and social committee. Towards an internal market without tax obstacles, Brussels, 582, 23.10.2001. Retrieved from http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2001:0582:FIN:EN:PDF. Accessed on December 5, 2013.
Creedy, J., & Gemmell, N. (2009).
Corporation tax revenue growth in the UK: A microsimulation analysis. Economic Modelling, 26(3), 614–625.
Creedy, J., & Gemmell, N. (2010). Modelling responses to profit taxation over the economic cycle: The case of the UK corporation tax. FinanzArchiv/Public Finance Analysis, 66(3), 207–235.
de Mooij, R. A., & Devereux, M. P. (2011). An applied analysis of ACE and CBIT reforms in the EU. International Tax and Public Finance, 18, 93–120.
Desai, M. A. (2005). The degradation of reported corporate profits. Journal of Economic Perspectives, 19(4), 171–192.
Devereux, M., Liu, L., & Loretz, S. (2014). The elasticity of corporate taxable income: New evidence from UK tax records. American Economic Journal: Economic Policy, 6(2), 19–53.
Devereux, M., & Loretz, S. (2008). The effects of EU formula apportionment on corporate tax revenues. Fiscal Studies, 29(1), 1–33.
Doraszelski, U., & Pakes, A. (2007). A framework for applied dynamic analysis in IO. Handbook of Industrial Organization, 3, 1887–1966.
Dwenger, N., & Steiner, V. (2012). Profit taxation and the elasticity of the corporate income tax base: Evidence from German corporate tax return data. National Tax Journal, 65, 117–150.
Dwenger, N., & Steiner, V. (2014). Financial leverage and corporate taxation: Evidence from German corporate tax return data. International Tax and Public Finance, 21(1), 1–28.
Eason, R. J. (2000). Modelling corporation tax in the United Kingdom. In A. Gupta & V. Kapur (Eds.), Microsimulation in government policy and forecasting (pp. 133–151). Amsterdam: Elsevier.
Finke, K., Heckemeyer, J. H., Reister, T., & Spengel, C. (2013). Impact of tax rate cut cum base broadening reforms on heterogeneous firms: Learning from the German tax reform 2008. FinanzArchiv, 69(1), 72–114.
Fossen, F. (2009). Would a flat-rate tax stimulate entrepreneurship in Germany? A behavioural microsimulation analysis allowing for risk. Fiscal Studies, 30(2), 179–218.
Fossen, F., & Simmler, M. (2012). Differential taxation and firms' financial leverage: Evidence from the introduction of a flat tax on capital income.
DIW Discussion Paper No. 1190, Berlin.
Fuest, C., Hemmelgarn, T., & Ramb, F. (2007). How would the introduction of an EU-wide formula apportionment affect the distribution and size of the corporate tax base? An analysis based on German multinationals. International Tax and Public Finance, 14, 605–626.
Fullerton, D., King, A. T., Shoven, J. B., & Whalley, J. (1981). Corporate tax integration in the United States: A general equilibrium approach. American Economic Review, 71(4), 677–691.
Graham, J. R. (1996). Debt and the marginal tax rate. Journal of Financial Economics, 41, 41–73.
Graham, J. R. (2003). Taxes and corporate finance: A review. Review of Financial Studies, 16(4), 1075–1129.
Graham, J. R., & Mills, L. F. (2008). Using tax return data to simulate corporate marginal tax rates. Journal of Accounting & Economics, 46(2), 366–388.
Gruber, J., & Rauh, J. (2007). How elastic is the corporate income tax base? In A. J. Auerbach, J. R. Hines, Jr., & J. Slemrod (Eds.), Taxing corporate income in the 21st century (pp. 140–163). Cambridge, UK: Cambridge University Press.
Gruber, J., & Saez, E. (2002). The elasticity of taxable income: Evidence and implications. Journal of Public Economics, 84(1), 1–32.
Hart, O. (2011). Thinking about the firm: A review of Daniel Spulber's The Theory of the Firm. Journal of Economic Literature, 49(1), 101–113.
Heckemeyer, J. (2012). The effects of corporate taxes on business behavior: Microsimulation and meta-analyses. Heidelberg: University of Heidelberg. Retrieved from http://archiv.ub.uni-heidelberg.de/volltextserver/13632/
Li, J., & O'Donoghue, C. (2013). A survey of dynamic microsimulation models: Uses, model structure and methodology. International Journal of Microsimulation, 6(2), 3–55.
OECD. (2013). Revenue statistics 2013. Paris: OECD.
Oestreicher, A., & Koch, R. (2011). The revenue consequences of using a common consolidated corporate tax base to determine taxable income in the EU member states. FinanzArchiv, 67(1), 64–102.
Orcutt, G. H. (1957). A new type of socio-economic system. Review of Economics and Statistics, 39(2), 116–123.
Orcutt, G. H. (1960). Simulation of economic systems. American Economic Review, 50(5), 893–907.
Parisi, V. (2003). A cross country simulation exercise using the DIECOFIS corporate tax model. European Commission IST Programme DIECOFIS, Work Package No. 7, Deliverable No. 7.2.
Reister, T. (2009). Steuerwirkungsanalysen unter Verwendung von unternehmensbezogenen Mikrosimulationsmodellen. Wiesbaden: Gabler.
Reister, T., Spengel, C., Heckemeyer, J. H., & Finke, K. (2008).
ZEW Corporate Taxation Microsimulation Model (ZEW TaxCoMM). ZEW Discussion Paper No. 08-117.
Shahnazarian, H. (2004). A dynamic microeconometric simulation model for incorporated businesses. Sveriges Riksbank Occasional Papers No. 11. Stockholm.
Shahnazarian, H. (2011). A dynamic micro-econometric simulation model for firms. International Journal of Microsimulation, 4(1), 2–10.
Shevlin, T. (1990). Estimating corporate marginal tax rates with asymmetric tax treatment of gains and losses. Journal of the American Taxation Association, 12, 51–67.
Spengel, C. (1995). Europäische Steuerbelastungsvergleiche: Deutschland – Frankreich – Großbritannien. Düsseldorf: IDW-Verlag.
van Tongeren, F. W. (1995). Microsimulation modelling of the corporate firm. Berlin: Springer.
Wissen, L. (2000). A micro-simulation model of firms: Applications of concepts of the demography of the firm. Papers in Regional Science, 79(2), 111–134.
CHAPTER 17
Farm Level Models
James W. Richardson, Thia Hennessy and Cathal O’Donoghue
17.1. Introduction

Farm simulation is a combination of biological, business and policy modelling. Biological modelling concerns how to simulate farm production of crops, meat and milk. Farming is also a business, so one must simulate the monthly or annual generation of receipts, payment of expenses, principal payments, interest and income taxes, as well as account for asset appreciation, depreciation and replacement. Public policy can affect farm incomes in a number of ways, whether via direct income supports, policies that affect market prices, regulatory policy that constrains or incentivises particular on-farm activities, or subsidy and tax policy that incentivises on-farm activity such as more environmentally sensitive agriculture.

Farm-level simulation modelling has historically developed as a field parallel to microsimulation modelling, in that relatively few farm-level papers appear in microsimulation conferences or journals, or vice versa. Fundamentally, however, the objectives are similar: micro-level simulation of policy and economic change. Parallel models have been developed for farm-level units of analysis that are similar to those applied at the household level in other chapters of this Handbook, from hypothetical farm models to static incidence, cross-country comparative analysis, behavioural response, expenditure demand, macro impact, dynamic intertemporal modelling, spatial impact and environmental impact. Thus, while the objectives and modelling types have been similar, there has been relatively little mutual learning between the fields. This chapter aims to communicate some of the areas of commonality and difference to those in the other fields.
CONTRIBUTIONS TO ECONOMIC ANALYSIS VOLUME 293 ISSN: 0573-8555 DOI:10.1108/S0573-855520140000293016
© 2014 BY EMERALD GROUP PUBLISHING LIMITED ALL RIGHTS RESERVED
Farm-based micro-level simulation modelling differs from the other microsimulation-based models in this Handbook in that incomes partially derive from biological processes. Farms also have specific business structures but are, in concept, not too different from firm-level models in terms of profit, output and costs. The farming sector in much of the world is also affected by distinct agricultural policy. Simulation models for farming can address two types of problem:
1. Calculation of profits and net worth for a given level of input use and output (positive models).
2. Optimisation of inputs and outputs to maximise profits (normative models).

Positive impact analysis has similar objectives to the rest of the literature, whether static incidence analysis of an economic or policy change on a heterogeneous population and/or the behavioural response to those changes. Farm-level modelling, however, places a higher emphasis on optimisation than most other sub-fields of microsimulation, with outcome optimisation only being referred to in Chapters 7 and 8 of this Handbook.

In Section 17.2, we describe the policy context in which farm-level simulation models are developed. Section 17.3 then describes a variety of applications of these models, particularly in relation to policy evaluation. In Section 17.4, we describe a number of methodological choices made by these models.
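The normative problem in point 2 above is classically cast as a linear programme: choose activity levels to maximise gross margin subject to resource and policy constraints. The sketch below is a hypothetical two-activity example with invented coefficients (land, labour and a quota-style limit on the forage area); with only two activities the optimum can be found by enumerating the vertices of the feasible region, which is what a simplex solver does at scale.

```python
from itertools import combinations

# Hypothetical farm: choose hectares of barley (x) and dairy forage (y)
# to maximise gross margin. Constraints have the form a*x + b*y <= c.
constraints = [
    (1.0, 1.0, 50.0),      # land: 50 ha available
    (10.0, 30.0, 1100.0),  # labour: 10 h/ha barley, 30 h/ha forage, 1100 h
    (0.0, 1.0, 35.0),      # quota-style limit: at most 35 ha of forage
]
margins = (350.0, 1200.0)  # gross margin per hectare of barley / forage

def feasible(x, y, tol=1e-9):
    return x >= -tol and y >= -tol and all(
        a * x + b * y <= c + tol for a, b, c in constraints)

def candidate_vertices():
    """Feasible intersections of constraint boundaries (and the axes)."""
    lines = constraints + [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                    # parallel boundaries
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if feasible(x, y):
            pts.append((x, y))
    return pts

best = max(candidate_vertices(), key=lambda p: margins[0] * p[0] + margins[1] * p[1])
profit = margins[0] * best[0] + margins[1] * best[1]
print(best, profit)    # (5.0, 35.0) 43750.0
```

The binding constraints at the optimum (labour and the forage limit) illustrate how a policy instrument such as a quota enters a normative model directly as a constraint; tightening or relaxing it shifts the optimal farm plan.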
17.2. Policy context

Agriculture in many parts of the world is one of the sectors most influenced by public policy. It produces goods that are essential for survival; it is a biologically based sector prone to risk and volatility; it faces pressure to meet the food requirements of a growing world population, particularly in the face of climate and environmental constraints; and, as one of the most significant land uses, it impacts upon the wider environment. It is therefore unsurprising that a modelling field has developed to look at these issues at farm level.

In Europe, the ongoing reform of the Common Agricultural Policy (CAP) has created a steady demand for policy analyses over recent decades, and this demand is likely to continue into the future. The most recent reform of the CAP, which concluded in 2013, aimed to tackle the unequal distribution of direct payments both between and within Member States. A number of complex payment schemes, which are optional at Member State level, were designed to achieve this objective. This reform required detailed farm-level
models disaggregated at a regional level for each Member State to enable a comprehensive ex ante assessment. The most recent reform of the CAP has also marked a degree of 'renationalization' of the policy. Many of the decisions regarding the implementation of Pillar I payments have been left to Member States, while there is a wide array of schemes for Member States to choose from under Pillar II. This is likely to lead to policies differing by Member State and, as such, makes EU-wide modelling of policy more complex.

The 'Grand Challenge' of producing more to feed a growing world population in an environmentally sustainable manner is currently one of the key considerations in policy design. The Greening of the CAP was a key element of the 2013 reform, and the inclusion of agri-environmental schemes in Pillar I of the CAP for the first time is evidence of the growing importance of environmental issues in European policy. As such, it is important that farm-level policy models can (i) demonstrate the impact of environmental policies on farm performance and (ii) assess the impact of agricultural policies on the environment. As discussed above, a number of these interlinked models already exist, but their further development and more widespread adoption is likely to meet a number of challenges. First, the data requirements of such models are vast, and finding appropriate data that can be matched with farm-level economic data is challenging. Furthermore, many environmental impacts, such as those relating to soil and water, are very site specific; detailed spatial data are therefore required, as are spatially disaggregated models. Finally, large multi-disciplinary teams are required to build truly effective interlinked models.

The recently passed US Farm Bill increased its reliance on insurance, but still relies heavily on reference prices and counter-cyclical payments.
Farmers are given a one-time opportunity to choose between the insurance option and the reference price option. The best method for farmers to make an informed decision on their risk management policy options is Monte Carlo simulation, and almost all US grain and cotton farmers will use simulation to assess their options. Simulation-based decision aids will be available to farmers via the web in a format that does not require them to know how the options were analysed. Simulation decision aids have been used successfully to help farmers make policy participation decisions in the past (Richardson & Outlaw, 2005), when more than 410,000 farmers used the base and yield analyser and few realised that they were using a simulation model. Increased reliance on insurance to help farmers manage revenue risk will continue the reliance by Congress on Monte Carlo simulation modelling.
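A stripped-down example of the kind of Monte Carlo farm simulation underlying such decision aids: draw stochastic prices and yields over a planning horizon, compute annual net cash income, and summarise the distribution of outcomes. The farm size, cost figures and distribution parameters are invented; a real model would use correlated, empirically estimated price and yield distributions and a full financial module covering debt service, taxes and asset replacement.

```python
import random

def simulate_farm(n_draws=1000, years=10, seed=7):
    """Stochastic multi-year projection for a hypothetical 200 ha grain
    farm. Returns the mean NPV of net cash income and the probability of
    at least one loss year. All coefficients are invented."""
    rng = random.Random(seed)
    area, fixed_costs, var_cost_ha, discount = 200.0, 60000.0, 450.0, 0.06
    npvs, n_with_loss = [], 0
    for _ in range(n_draws):
        npv, loss_year = 0.0, False
        for t in range(1, years + 1):
            price = max(0.0, rng.gauss(180.0, 25.0))    # output price per tonne
            crop_yield = max(0.0, rng.gauss(8.0, 1.2))  # tonnes per hectare
            net_cash = area * (price * crop_yield - var_cost_ha) - fixed_costs
            npv += net_cash / (1.0 + discount) ** t     # discount to present
            loss_year = loss_year or net_cash < 0.0
        npvs.append(npv)
        n_with_loss += loss_year
    return sum(npvs) / n_draws, n_with_loss / n_draws

mean_npv, p_loss = simulate_farm()
print(round(mean_npv), round(p_loss, 3))
```

Reporting probabilities of adverse outcomes, rather than a single deterministic projection, is what distinguishes this class of model and makes it suitable for comparing risk management policy options.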
17.3. Applications

In this section, we provide an overview of a selection of farm-level microsimulation models utilised in the literature. We consider the types of
models that have been developed to understand the farm-level implications for the five dimensions of complexity addressed in this Handbook: Population, Policy, Behaviour, Time and Space/Environment. In order to bridge the gap between the microsimulation field and the farm-level simulation field, we will follow the thematic structure of the areas covered in this Handbook and describe relevant analyses undertaken at the farm level.
17.3.1. Hypothetical analyses

Abstracting from population complexity to enhance understanding, a variety of models have been developed internationally that simulate biological, market and policy changes at farm level. In addition to deliberately abstracting from complexity, such models are also used where micro data do not exist (Zander, Thobe, & Nieberg, 2007). For example, at a biological level, farm-systems bio-economic simulation models have been developed to assess the economic impact of alternative management practices and technologies at farm level (Crosson, O'Kiely, O'Mara, & Wallace, 2006; Shalloo, Dillon, Rath, & Wallace, 2004). To understand the impact of different systems combined with differences in local market prices for inputs, outputs and other factors of production, the International Farm Comparison Network (Hemme, Deblitz, Isermeyer, Knutson, & Anderson, 2000) was established to simulate farm-level profits and compare farm systems across the world at a synthetic farm level (Manjunatha, Shruthy, & Ramachandra, 2013; Prochorowicz & Rusielik, 2007; Thorne & Fingleton, 2006). Such models have also been used for policy analysis (Doucha & Vaněk, 2006; McCormack, O'Donoghue, & Stephen, 2014).
17.3.2. Policy analysis

Within a European context, considerable emphasis is placed on the role of policy analysis. Ex ante assessments of all European Commission proposals are now a mandatory part of the policy formation process. Many types of models can be used to simulate the impact of policy on the agri-food sector, and the types of models used, as well as the questions addressed, have evolved over the years. They range from highly aggregated equilibrium models, like GTAP or MIRAGE, which aim to simulate a whole economy, through partial equilibrium models such as FAPRI, CAPRI or AGLINK-COSIMO, which simulate the agricultural sector, to single behavioural models like FARMIS or FSSIM; see Ciaian et al. (2013) for a discussion of these models. This chapter focusses on farm-level models only and discusses their evolution and application over recent years.
These policy models overlap to some extent with the models described in the chapters on Static Modelling, Cross-country Modelling, Behavioural Modelling and Dynamic Modelling. Optimisation models of representative farms have been used for many years for policy analysis; some of the earliest studies date back to the 1960s (see, for example, Plaxico & Tweeten, 1963; Sharples, 1969). Such models were used to simulate the behaviour of farmers and profit outcomes under different policy scenarios. The advantage of using an optimisation approach for policy analysis is that optimisation models, unlike econometric models, do not rely on time-series data and do not extrapolate future relationships from historical ones; it is therefore possible to go beyond the realm of past observations and analyse unprecedented policy changes. Furthermore, the constraint structure used in optimisation models is well suited to simulating a farm business and representing the constraints that may be associated with particular policy scenarios, for example a production quota.

Throughout the second half of the last century, the farm-level simulation approach 'slipped out of fashion' as agricultural policy modellers tended to favour macro models, either partial equilibrium models of the agriculture sector or computable general equilibrium models of the economy, solved using time-series econometrics. Over this time, models such as GTAP (Hertel, 1997), FAPRI (Westhoff & Young, 2001) and AGLINK (OECD, 2006) dominated the agricultural policy analysis field. An exception is the US Congress's reliance on FLIPSIM (Richardson & Nixon, 1986) to analyse the farm-level impacts of farm programs. One explanation for this exception is that FLIPSIM is used in conjunction with the FAPRI sector-level model to analyse farm policy. Over the past decade, agricultural policy has become more farm and less market focussed.
Especially in a European context, the Common Agricultural Policy has shifted away from supporting farmers through market prices and programmes towards direct income support applied at the farm level. According to Ciaian et al. (2013), there is a greater need for a detailed description of the economic and environmental impact of policies at a disaggregated level, such as the farm, as agricultural policies continue to become increasingly targeted and more farm specific. They argue that the new policy orientation will affect farms differently according to their resource endowment and socio-economic contexts, but also their localisation (i.e. agro-ecological conditions), and that this may call for new thinking in terms of modelling approaches. Some of the most important recent policy analyses in a European context have included assessments of the impact of decoupling, and such policy questions require a farm-level rather than a sector-level approach. As such, policy modellers are turning once again to a micro approach to gain a better understanding of how agricultural policy changes might affect the sector. A number of comprehensive reviews of the development
of farm-level simulation models for policy analysis have been published: see, for example, Klein and Narayanan (2008) for a review of the North American literature and Ciaian et al. (2013) for a review of the European literature.

Farm-level simulation for policy analysis has experienced growing demand in the United States over the past 30 years. The FLIPSIM model, developed by Richardson and Nixon in 1980 at Texas A&M University, is the most widely known Monte Carlo farm simulation model, with more than 10,500 hits in Google Scholar (type FLIPSIM, simulation for individual citations). The model has been used extensively in the United States to analyse the impacts of alternative farm policies for the US Congress since 1981. The Texas A&M University Agricultural and Food Policy Center is funded by Congress to develop and maintain a database of 100 representative crop, dairy and beef farms for FLIPSIM. The representative farms are located in the principal production regions for feed grains, oilseeds, wheat, rice, cotton, milk and beef across the United States. The FLIPSIM model simulates the representative farms for a base situation (current farm policy and economic outlook) and for alternative farm programs and trade policies, to estimate the probable impact of the policy change on net present value (NPV), ending net worth, ending cash reserves and annual net cash income over 10 years. The US Congress has relied heavily on FLIPSIM's farm-level simulation results for analysing every farm programme change since 1985. Analysis of US farm policies that provide a safety net when prices and/or yields decline requires models that incorporate both price and yield risk, so the importance of Monte Carlo farm simulation has increased greatly over the last 30 years.

17.3.3. Impact of macro-economic change

Agricultural products are among the largest globally traded commodities, influenced by global markets, global weather and climate patterns, differential global demand, trade agreements, tariffs and other trade constraints; macroeconomic phenomena and policy can therefore be influential in relation to agricultural markets and their impact at the micro level. As a result, a relatively large sub-component of the macro-micro literature described in Chapter 9 relates to agriculture and food. Methodologies applied include both computable general equilibrium (CGE)-microsimulation (Boccanfuso & Savard, 2007) and partial equilibrium-micro approaches (Hennessey & Thorne, 2006). Some papers have focussed on top-down impacts of the liberalisation of agricultural trade regimes, such as the impact of general trade liberalisation (Chemingui & Thabet, 2009), potential World Trade Organisation (WTO) agreements (Breen & Hennessy, 2003; Hennessey & Thorne, 2006; Morley & Piñeiro, 2004), specific sectoral liberalisation (Boccanfuso & Savard, 2008) or the distributional impact of agricultural price protection (Mabugu & Chitiga, 2009).
Farm Level Models
Given the importance of agricultural policy, a number of analyses have focussed on the micro-level impact of these policies. Arndt, Benfica, Tarp, Thurlow, and Uaiene (2010) look at the impact of bio-fuels policy on growth and poverty. Boccanfuso and Savard (2007) study the impacts of cotton subsidies in Mali. Chitiga and Mabugu (2005) consider the impact of land redistribution policy in Zimbabwe. Cororaton and Corong (2007) analyse the impact of agriculture-sector policies and poverty in the Philippines. Tax policy has also been examined. De Souza Ferreira Filho, dos Santos, and do Prado Lima (2010) model changes in taxes on food and agricultural inputs in Brazil, while Ahmed, Abbas, and Ahmed (2007) model changes in agricultural income taxes in Pakistan. As market price volatility, particularly agricultural commodity price volatility, becomes more important, there is increasing interest in understanding its micro-level impact (Warr & Yusuf, 2009). This impact can be at farm, firm and household levels. Dartanto (2010, 2011) modelled the impact of rice and soybean prices, respectively, on poverty. Ferreira, Fruttero, Leite, and Lucchetti (2013) considered the impact on both farms and households in Brazil. Diao, Alpuerto, and Nwafor (2009) modelled the impact of alternative risk sources in the form of avian influenza. More generally, a number of studies have focussed on the distributional implications of growth in the agricultural sector. Benin, Thurlow, Diao, McCool, and Simtowe (2008) considered the situation in Malawi, Pauw and Thurlow (2011) in Tanzania and Thurlow (2008) in Mozambique.

17.3.4. Labour supply

As with many other households reliant on income from entrepreneurial activity, farm households often rely on income from other sources, and other labour income is important. While there is a large econometric literature in relation to off-farm employment (Huffman & Lange, 1989), the microsimulation literature is relatively sparse.
Callan and Van Soest (1993) simulated the elasticity of female off-farm labour supply with respect to wage rates using a structural labour supply model for Ireland. van Leeuwen and Dekkers (2013) used a spatial microsimulation methodology to understand the determinants of off-farm labour supply in the Netherlands.

17.3.5. Spatial models

Because agriculture largely depends on the local environment in terms of soils and weather, there is significant spatial heterogeneity in the sector. It is therefore important to understand this spatial heterogeneity so as to be able to better target policy interventions. In particular, the spatial
James W. Richardson, Thia Hennessy and Cathal O’Donoghue
distribution of agricultural income and the consequent impact of policy reforms such as CAP reform are important for targeting, for example, localised rural development interventions. The challenge in understanding the spatial distribution of farm incomes and of policy reform is one of data. While it may be possible to simulate the spatial pattern of farm direct payments using administrative data, as in the case of Bergmann, Noack, and Kenneth (2011) at a spatial scale in Scotland and Hanrahan and Hennessy (2013) at an aspatial scale in Ireland, these datasets often lack contextual information, limiting the depth of analysis possible. Typically, Censuses of Agriculture and administrative data provide spatial information on the structure of agriculture, but have no income data. On the other hand, farm survey data contain excellent farm income and structural data, but have weak spatial dimensions. Data imputation/enhancement methods known as spatial microsimulation (Hermes & Poulsen, 2012; O'Donoghue, Hynes, Morrissey, Ballas, & Clarke, 2013) have, however, been developed to combine the strengths of both types of data. Small area statistical analysis can be used for this purpose (see Ghosh & Rao, 1994). However, for our purposes, we are interested not only in inter-area variation in incomes but also in intra-area variation of incomes. Therefore, we require a method that maintains both spatial variability and micro-level variability. Spatial microsimulation (Clarke, 1996) is a potential methodology for achieving both of these dimensions within its data enhancement process. There is an extensive literature, described in O'Donoghue, Lennon, and Morrissey (forthcoming), covering many different policy areas and utilising the various methodologies described in Hermes and Poulsen (2012). The methodology has been applied in a number of instances within agriculture and rural development.
Ballas, Clarke, and Wiemers (2006) utilised iterative proportional fitting to examine CAP reform as part of the Luxembourg agreement. Hynes, Morrissey, O'Donoghue, and Clarke (2009b) developed a model of spatial farm incomes utilising simulated annealing (SA), which has been used to examine the impact of EU Common Agricultural Policy changes (Shrestha, Hennessy, & Hynes, 2007). This forms part of the Simulation Model of the Irish Local Economy (SMILE) (O'Donoghue et al., 2013). O'Donoghue (2013) extended the farm-focussed models to include wider household income sources so as to be able to assess the wider economic sustainability of farm households. Clancy, Breen, Morrissey, O'Donoghue, and Thorne (2013) utilised the model in Ireland to assess the optimal spatial location for the growth of willow and miscanthus for biomass production. van Leeuwen (2010) applied a spatial microsimulation method to look at farm and non-farm households in a rural setting.
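To illustrate the reweighting idea behind spatial microsimulation, the sketch below applies iterative proportional fitting to a toy farm survey so that weighted farm counts match two assumed small-area constraint tables. All categories, counts and targets are invented for illustration and are not taken from any of the models cited above.

```python
import numpy as np

# Toy survey of 6 farms: each row is one farm, with categorical attributes.
# system: 0 = dairy, 1 = tillage; size: 0 = small, 1 = large (illustrative only)
system = np.array([0, 0, 1, 1, 0, 1])
size   = np.array([0, 1, 0, 1, 1, 0])

# Small-area constraint totals (e.g. from a Census of Agriculture):
target_system = np.array([60.0, 40.0])   # dairy, tillage farm counts
target_size   = np.array([70.0, 30.0])   # small, large farm counts

w = np.ones(len(system))                 # start with uniform weights

def scale_to_margin(w, categories, targets):
    """Scale weights so weighted category totals hit the targets."""
    for cat, target in enumerate(targets):
        mask = categories == cat
        total = w[mask].sum()
        if total > 0:
            w[mask] *= target / total
    return w

for _ in range(100):                     # iterate until both margins agree
    w = scale_to_margin(w, system, target_system)
    w = scale_to_margin(w, size, target_size)

sys_totals = np.array([w[system == c].sum() for c in (0, 1)])
```

With consistent margins and all cross-cells populated in the survey, the weights converge so that both constraint tables are matched simultaneously; the reweighted survey then serves as a synthetic small-area population.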
17.3.6. Environmental analysis

In response to the demand from policy makers for information on the impact of farming and farm policies on the environment, many models have been expanded and further enhanced to focus on the impact of policy scenarios on environmental as well as economic indicators. Thomas (2013) provides a comprehensive review of the development of interlinked farm-level policy and environmental impact models. He describes interlinked models as farm-level, usually optimisation, models that have a detailed representation of both the economic and production processes on the farm, in terms of the choice of inputs and so forth. Such models can be used for the assessment of environmental policies such as the impact of a nitrogen tax or a water allocation quota. The AROPAj model (De Cara & Jayet, 2011), a farm-level programming model with full EU coverage, is cited as one of the models most capable of assessing the economic consequences of changes in the biophysical environment and the impact of agricultural production decisions on the environment. The economic model is linked to a number of biophysical models, and the interlinked models have been used to assess the impact of a number of different climate scenarios and the impact of farm management on water quality, for example. There have been a number of explicitly environmentally focussed microsimulation models that have simulated the distributional incidence of environmental policy and issues at farm level. Berntsen, Petersen, Jacobsen, Olesen, and Hutchings (2003) modelled the incidence of potential environmental taxes on farm-level nitrogen emissions, while Hynes, Morrissey, O'Donoghue, and Clarke (2009b) and Hynes, Morrissey, and O'Donoghue (2013) simulated farm-level taxes on methane emissions from cattle, and Doole, Marsh, and Ramilan (2013) examined the distributional impact of a farm-level cap-and-trade instrument. Kruseman et al.
(2008a) and Kruseman, Blokland, Luesink, and Vrolijk (2008) modelled the impact of tightening environmental policy on phosphate emissions, while Villot (1998) modelled a tax related to sulphur emissions and Kruseman et al. (2008b) focussed on ammonia emissions. Given the potential impact of agricultural practices on the environment, a number of microsimulation models have simulated pollution impacts of agriculture. Potter, Atwood, Lemunyon, and Kellogg (2009) modelled carbon sequestration from cropland in the United States. Dijk, Leneman, and van der Veen (1996) modelled nutrient flows in Dutch agriculture, while Ramilan, Scrimgeour, and Marsh (2011) modelled the environmental and economic efficiency of dairy farms. In relation to greenhouse gases, farm-based microsimulation models have simulated greenhouse gas emissions (Hynes, Morrissey, et al., 2013; Hynes, Morrissey, O'Donoghue, & Clarke, 2009a). Lal and Follett (2009)
modelled carbon sequestration in the soil on crop land, while Kimura, Anton, and Cattaneo (2012) applied a microsimulation model to simulate crop risk management choices under climate change. From a biological perspective, there is a literature that models the impact, at the hypothetical farm-systems level, of changes to management practices and technology (see e.g. Crosson et al., 2011; Lovett, Shalloo, Dillon, & O'Mara, 2006, 2008; O'Brien et al., 2010). Lindgren and Elmquist (2005) evaluated the economic and environmental impact of alternative farm management practices. Microsimulation models have also been used to model biodiversity-related issues, including wildlife-recreation interaction (Bennett et al., 2009), wild bird conservation (Hynes, Farrelly, Murphy, & O'Donoghue, 2013) and participation in Agri-Environmental Protection Schemes (Hynes, Farrelly, Murphy, & O'Donoghue, 2008; Hynes, Farrelly, et al., 2013; Hynes & Garvey, 2009; Kelley, Rensburg, & Yadav, 2013). A small literature has developed using farm-level simulation models to model marginal abatement cost (MAC) curves for agriculture. Chyzheuskaya, O'Donoghue, and O'Neill (2014) developed a farm-level MAC curve for dairy farms in Ireland, while Doole (2012) modelled a farm-level nitrogen MAC curve.
17.4. Methodological choices

In this section, we will discuss a number of methodological choices made by farm-level microsimulation models. Given the overlaps with methodologies in other dimensions of microsimulation modelling elsewhere in this Handbook, we will focus only on methodologies that are specific to farm-level modelling. Specifically, we will look at:

• Farm-level Monte Carlo simulation
• Optimisation modelling
• Farm-level policy simulation modelling
• Farm-level spatial modelling.
17.4.1. Farm-level Monte Carlo simulation

Using simulation to address What if …? questions facing farmers and policy makers usually involves calculating the farm business's profit under specified or assumed levels of input, technology and management control variables. Stochastic variables that impact production, costs and prices are embedded in the model using probability distributions that represent the risk associated with these variables. The basic equations for simulating a farm business can be simplified to the equations presented in this section.
A Monte Carlo farm simulation model begins with equations to simulate the biological component of production or stochastic yield for each enterprise i in period t:
\[ \widetilde{Yield}_{it} = f(Technology_t,\; Inputs_t,\; \widetilde{Weather}_t) + \tilde{e} \tag{17.1} \]
where $\tilde{e}$ represents the residuals for an ordinary least squares (OLS) regression equation to forecast yield as a function of the variables listed. The terms Technology and Inputs in Eq. (17.1) represent the average level of yield given the technology being employed and the quantity of inputs employed for crop or livestock enterprise i. Weather is a stochastic variable and can shift the production function for yield up or down depending on temperature and precipitation. The residual term $\tilde{e}$ in the yield equation is the stochastic part of yield which cannot easily be explained by the other variables. Of course, Eq. (17.1) can be expanded to include more exogenous variables such as fertilizer, irrigation and herbicides. To simplify the description of the simulation process for yield, we can assume that stochastic yield is modelled as a normal distribution with mean, Y, and standard deviation, s (Eq. (17.2)). The term Y is expected yield given inputs, technology and normal weather, and 's' represents variability about the mean that is not explained by input level, current technology and normal weather:

\[ \tilde{Y}_{it} = \bar{Y}_{it} + s_i \times \widetilde{SND} \tag{17.2} \]
where SND is a random number called a standard normal deviate, which is distributed with mean zero and standard deviation of one. When the farm produces more than one enterprise, the model must account for the correlation of yields among the enterprises. This is easily done by substituting a correlated SND (or CSND) in Eq. (17.2). The CSND for N enterprises can be simulated by:

\[ \widetilde{CSND}_{N \times 1} = [R]_{N \times N} \times W_{N \times 1} \tag{17.3} \]
where $\widetilde{CSND}$ is a vector of correlated SNDs, R is the square root of the correlation matrix, and W is a vector of independent SNDs. Richardson, Klose, and Gray (2000) present this method for a general case, where the random variables can be simulated using a multivariate empirical or non-normal distribution. Additional information for developing and simulating Monte Carlo models is available in Pouliquen (1970), Reutlinger (1970), Richardson (2005), and Richardson, Schumann, and Feldman (2005). Production for each enterprise, i, is simulated by applying Eq. (17.4), which simply multiplies stochastic yield for each enterprise by its own level of activity, land area for crops (hectares) or number of head for livestock:

\[ \tilde{Q}_{it} = \tilde{Y}_{it} \times A_{it} \tag{17.4} \]
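Eqs. (17.2) and (17.3) can be sketched in a few lines. In the sketch below, the correlation matrix, mean yields and standard deviations are invented, and a Cholesky factor is used as one possible "square root" of the correlation matrix (any factor L with L Lᵀ = R serves the same purpose):

```python
import numpy as np

rng = np.random.default_rng(1234)

# Assumed correlation matrix for the yields of three enterprises (illustrative).
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])

L = np.linalg.cholesky(R)               # one "square root" of R: L @ L.T == R

n_draws = 200_000
W = rng.standard_normal((3, n_draws))   # independent SNDs, one row per enterprise
CSND = L @ W                            # correlated SNDs, as in Eq. (17.3)

# Turn correlated SNDs into correlated stochastic yields (Eq. (17.2)):
mean_yield = np.array([9.0, 3.5, 6.0])  # t/ha, assumed
sd_yield   = np.array([1.2, 0.6, 0.9])
yields = mean_yield[:, None] + sd_yield[:, None] * CSND

sample_corr = np.corrcoef(yields)       # should approximate R
```

Because the yields are affine transforms of the correlated deviates, their sample correlation matrix reproduces R up to sampling noise.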
where A_it represents the land area for each crop i in time period t, or the number of cows milked or number of head sold. In the case of a livestock farm the Y_it variable represents kilograms of weight sold or litres of milk produced. Prices received for each enterprise are stochastic unless fixed by contract and must be simulated using a probability distribution (PDF). The PDF for prices must be simulated using a multivariate distribution that correlates all product prices and, optimally, the distribution will correlate yields and their prices for all enterprises.¹ For simplicity, we can assume the stochastic prices are distributed normal or:
\[ \tilde{P}_{it} = \bar{P}_{it} + s_i \times \widetilde{CSND}_{it} \tag{17.5} \]
where P̄_it is the expected or forecasted average price for enterprise i in period t, s_i is the standard deviation for price i, and $\widetilde{CSND}_t$ is the correlated SND in period t for price i. Parameters P̄_i and s_i can be estimated from historical price series for the enterprises and/or taken from current forecasts by outside forecasting agencies and institutes. The next step to simulate a farm is to calculate total receipts for all enterprises:

\[ \tilde{R}_t = \sum_i \tilde{P}_{it} \times \tilde{Q}_{it} \tag{17.6} \]

Total receipts are simply the product of quantity and price summed over all enterprises. Production costs must be calculated for each enterprise and are generally assumed to be a function of production and fixed and variable costs or:

\[ \tilde{C}_{it} = A_{it} \times (V(c)_{it} \times \tilde{Y}_{it} + f(c)_{it}) \tag{17.7} \]
where f(c) is the fixed cost per hectare or head for each enterprise (e.g. seed and fertilizer costs for a crop or feed per cow), V(c) is the variable cost per yield unit for an enterprise (e.g. harvesting and transportation costs expressed on a yield unit basis), A is defined earlier as land area or number of head of animals and $\tilde{Y}$ is the stochastic level of yield. Total costs for a farm include the sum of the costs for the enterprises plus all fixed costs not allocated to the individual enterprises (F) or:

\[ \tilde{T}_t = \sum_i \tilde{C}_{it} + F_t + I_t \tag{17.8} \]
¹ Ignoring correlation among random variables in a farm simulation will bias the variance and mean of total receipts inversely with the sign of the correlation between price and yield. In other words, if price and yield are negatively correlated, ignoring this correlation will overstate the mean and variance for cash receipts.
where I_t is the sum of annual interest costs and F_t is unallocated annual fixed costs such as communication costs, electricity for the shop, property taxes and insurance. Net cash farm income is equal to total receipts minus total cash costs or:

\[ \widetilde{NCFI}_t = \tilde{R}_t - \tilde{T}_t \tag{17.9} \]
Net cash farm income minus annual depreciation (D) provides the net income or profit variable used to measure the profitability of a farm business:
\[ \widetilde{NI}_t = \widetilde{NCFI}_t - D_t \tag{17.10} \]
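The chain from Eq. (17.4) through Eq. (17.10) is a few lines of arithmetic. The worked example below uses invented areas, yields, prices and costs purely for illustration:

```python
# One year, two enterprises (wheat, barley); all figures assumed for illustration.
area        = [120.0, 80.0]        # A_i: hectares
yield_t_ha  = [8.5, 6.8]           # simulated stochastic yields, t/ha
price_t     = [210.0, 180.0]       # simulated prices, per tonne
var_cost_t  = [25.0, 22.0]         # V(c)_i: variable cost per tonne of output
fix_cost_ha = [650.0, 520.0]       # f(c)_i: fixed cost per hectare

# Eq. (17.4): production per enterprise; Eq. (17.6): total receipts
Q = [a * y for a, y in zip(area, yield_t_ha)]
receipts = sum(p * q for p, q in zip(price_t, Q))

# Eq. (17.7): enterprise costs; Eq. (17.8): total costs
ent_costs = [a * (v * y + f)
             for a, y, v, f in zip(area, yield_t_ha, var_cost_t, fix_cost_ha)]
unallocated_fixed = 30_000.0       # F_t: insurance, property taxes, etc.
interest = 12_000.0                # I_t: annual interest costs
total_costs = sum(ent_costs) + unallocated_fixed + interest

# Eq. (17.9): net cash farm income; Eq. (17.10): net income after depreciation
ncfi = receipts - total_costs
depreciation = 18_000.0            # D_t
net_income = ncfi - depreciation
```

In a full model, the yields and prices feeding this calculation would be the correlated stochastic draws of Eqs. (17.2), (17.3) and (17.5) rather than fixed numbers.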
Ending cash flow for a farm business is a key output variable (KOV) that is calculated as:

\[ \widetilde{EC}_t = \widetilde{EC}_{t-1} + \widetilde{NCFI}_t + (ir \times \widetilde{EC}_{t-1}) - PPay_t - \widetilde{Tax}_t - \widetilde{Def}_{t-1} - FL_t \tag{17.11} \]

where EC_{t-1} is positive beginning cash (ending cash in the previous year), positive cash inflows earn interest at an interest rate, ir, which may be for a fraction of the year, PPay is principal payments for all loans, Tax is total income taxes, Def is repayment of cash flow deficit loans for the previous year if ending cash last year was negative, and FL is family living withdrawals. Because of the risk in farming, ending cash flows are not always positive and, when ending cash is negative, short-term loans to cover the deficit are obtained. Short-term deficit loans are assumed to be fully repaid the next year as $\widetilde{Def}_{t-1}$. Ending net worth for a farm is simulated as follows:

\[ \widetilde{ENW}_t = \sum_i Asset_{i,t-1} \times (1 + r_{it}) - \sum_i (Debt_{i,t-1} - PPay_{it}) + \widetilde{EC}_t - \widetilde{Def}_t \tag{17.12} \]

where each asset_i (e.g. land, machinery and livestock) is re-valued annually based on its own rate of inflation (r_it), each debt at the start of the year is reduced by its respective principal payment, PPay_it, and ending cash enters Eq. (17.12) as an asset or a liability: in the form of $\widetilde{Def}_t$ if $\widetilde{EC} < 0$ or as a positive $\widetilde{EC}_t$ if $\widetilde{EC} > 0$. The KOVs for a farm business simulation model are annual net cash farm income, annual net income, annual ending cash reserves, ending net worth and NPV. The farm's NPV is calculated as the present value change in net worth plus the present value of income earned by the investor (farmer) and consumed over the planning horizon or:

\[ \widetilde{NPV} = -NW_{t=0} + \frac{\widetilde{NW}_T}{(1+d)^T} + \sum_{t=1}^{T} \frac{FL_t}{(1+d)^t} \tag{17.13} \]

where T indicates the last year of the planning horizon so $\widetilde{NW}_T$ is ending net worth, FL is annual withdrawals of cash for family living expenses,
and d is the discount rate or the investor's opportunity cost of capital. In Eq. (17.13) the term NW_{t=0} represents beginning net worth, so the first two terms in Eq. (17.13) simulate the change in real net worth over the planning horizon. The NPV is a critical variable for indicating the performance of the farm's business plan because it summarises the economic performance of the farm over a multi-year planning horizon in a single value. If NPV is negative, it indicates that the business plan did not have an internal rate of return greater than the discount rate. For a multi-year planning horizon Monte Carlo simulation model, the model is simulated for, say, 10 years, and the 10-year planning horizon is repeated for 500 or more iterations using different draws of the stochastic variables each year. The model is recursive in that the simulated values for assets, debts and ending cash in year one flow into year two as its beginning cash flow, asset values and liabilities. By simulating the farm model for 500 or more iterations we obtain 500 or more realisations for each KOV. The 500 realisations for a KOV represent an estimate of the PDF for the variable and can be represented graphically as a PDF or cumulative distribution function (CDF), as in Figure 17.1. The PDF shows the riskiness of the KOV, while the CDF allows one to easily calculate the probability that the KOV will be less than a given value, or P(X < x_i). In Figure 17.1 there is a 60% chance that NPV will be less than 75. Richardson and Mapp (1976) defined the probability of economic success for a business as the probability that NPV is greater than zero, because this probability is the chance that the business's internal rate of return will exceed the firm's discount rate. The example in Figure 17.1 shows the firm has a 90% chance of economic success. In addition to displaying the estimated PDFs for the KOVs, the summary statistics for the output variables can be presented.
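The multi-iteration recursion can be sketched as below. The yearly net worth update is a crude stochastic placeholder standing in for the full model of Eqs. (17.4) to (17.12), and every parameter is an assumption made for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

n_iter, n_years = 5000, 10
discount = 0.06                      # d: assumed opportunity cost of capital
begin_net_worth = 500_000.0
family_living = 40_000.0             # FL_t, assumed constant each year

npv = np.empty(n_iter)
for it in range(n_iter):
    net_worth = begin_net_worth
    pv_fl = 0.0
    for t in range(1, n_years + 1):
        # Placeholder for the full farm model: net worth takes a stochastic
        # step each year instead of running Eqs. (17.4)-(17.12).
        net_worth += rng.normal(30_000.0, 60_000.0)
        pv_fl += family_living / (1.0 + discount) ** t
    # Eq. (17.13): change in net worth plus the PV of family consumption
    npv[it] = -begin_net_worth + net_worth / (1.0 + discount) ** n_years + pv_fl

p_success = (npv > 0).mean()         # Richardson & Mapp (1976) criterion
```

The 5,000 realisations of NPV estimate its PDF, and `p_success` estimates the probability of economic success, i.e. the chance that the internal rate of return exceeds the discount rate.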
Another method of presenting PDFs for the KOVs is to use a StopLight chart, a three-colour bar chart showing the probability of the variable being below a lower target in red (bottom segment of the bars), the probability of exceeding an upper target in green (upper segment) and the remaining probability in yellow (middle segment) (Figure 17.2). A single simulation of a farm business plan is an excellent start but, to be truly useful for providing a positive answer to a What if … or a farm policy question, a simulation model should be run for multiple scenarios. Multiple scenarios can be run to simulate alternative farm policies or business plans, such as price and income supports, crop or revenue insurance, technology changes, crop mix changes, different combinations of crops and livestock, different marketing regimes and alternative financial/debt financing programs. Each scenario produces a unique probability distribution estimate for NPV and the other KOVs. Selecting the best management plan is the next step in simulating a farm business.
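One way to compare scenario NPV distributions under risk aversion is via certainty equivalents computed over a range of risk aversion coefficients, in the spirit of the stochastic efficiency approach discussed in this section. The sketch below assumes negative-exponential (CARA) utility and invented NPV distributions for two plans:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated NPV distributions (in 000s) for two risky plans; assumed numbers.
plans = {
    "base":   rng.normal(100.0, 60.0, 10_000),   # higher mean, more risk
    "hedged": rng.normal(90.0, 15.0, 10_000),    # lower mean, less risk
}

def certainty_equivalent(outcomes, rac):
    """CE under negative-exponential (CARA) utility; rac = 0 is risk neutral."""
    if rac == 0.0:
        return outcomes.mean()
    return -np.log(np.mean(np.exp(-rac * outcomes))) / rac

# Sweep the risk aversion coefficient from risk neutral towards risk averse
racs = np.linspace(0.0, 0.02, 25)
ce = {name: [certainty_equivalent(x, r) for r in racs]
      for name, x in plans.items()}

# Risk-neutral decision makers prefer the higher mean; sufficiently
# risk-averse ones prefer the safer plan.
prefers_base_neutral = ce["base"][0] > ce["hedged"][0]
prefers_hedged_averse = ce["hedged"][-1] > ce["base"][-1]
```

Plotting the certainty equivalent lines against the risk aversion coefficient shows where the ranking of the risky plans switches, which is the kind of graphical output that makes such rankings easy to explain to decision makers.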
[Figure 17.1. Example of probability density function and cumulative distribution functions from a simulation of NPV for a farm business: a PDF approximation of NPV and the corresponding CDF.]

[Figure 17.2. Example of a StopLight chart to rank risky alternatives: probabilities that NPV will be less than 30 and greater than 120 for the Base scenario and Scenarios 2 to 6.]
The management plans or farm programme options can be ranked using alternative risk ranking procedures, such as mean-variance, stochastic dominance and stochastic efficiency (Richardson & Outlaw, 2008). The alternative risk ranking tools generally rely on ranking the empirical estimates of the PDFs for NPV because this variable summarises the economic wellbeing of a business over the planning horizon. Ranking risky scenarios based on their means alone assumes the farmer is risk neutral. First- and second-degree stochastic dominance can be used for ranking, but these procedures ignore risk aversion and often result in mixed messages with no clearly dominant winner. Stochastic dominance with respect to a function requires knowledge of the farmer's absolute risk aversion coefficient (RAC) and can result in an inconclusive ranking of the risky scenarios. Stochastic efficiency, developed by Hardaker, Richardson, Lien, and Schumann (2004), uses a range of possible risk aversion levels and ranks risky alternatives based on their certainty equivalents calculated for 25 RACs over the range from risk neutral to extremely risk averse. The procedure is portrayed graphically and makes it easy for modellers/analysts to explain the ranking of risky alternatives to decision makers. See Richardson and Outlaw (2005) for a direct comparison of risk ranking procedures. The Monte Carlo farm simulation model described in this section is a bare-bones model. Possible additions to the base model include detailed equations to calculate government payments, income taxes, crop and revenue insurance options, machinery depreciation and replacement schedules, marketing strategies including hedging and options, firm growth and retirement, and an array of livestock/crop enterprise combinations. As these additions are programmed, the model will more accurately simulate the farm to be analysed.

17.4.1.1. Data sources

Data to simulate a farm can come from primary and secondary sources.
Data obtained directly from farmers have the advantage that the model's ability to simulate a farm can be validated against the farmer's experience. Briefly, the data needed to develop and apply a Monte Carlo farm simulation model include: assets; liabilities and their interest rates and number of years; unallocated fixed costs; variable costs; machinery inventory with values, replacement costs, and depreciation and replacement schedules; income tax deductions and schedules; variable and fixed costs for each enterprise; and historical prices and yields for each enterprise. The historical prices and yields are used to estimate forecasted means for these variables and parameters for multivariate probability distributions for these variables. Exogenous variables the modeller can use to formulate alternative scenarios include interest rates, rates of inflation, debt levels, debt structure and refinancing, technology (e.g. fertilizer and seed varieties), farming practices, farm policy variables, crop mix, herd
growth strategies, insurance (yield and revenue), marketing strategies and machinery replacement strategies. For each scenario, a Monte Carlo farm simulation model will estimate the empirical PDF for NPV to answer the question of how the probability of economic success will change if the farm is operated differently. This type of simulation model addresses the What if … question but does not tell us the optimal combination of inputs. That problem is best handled with a linear programming (LP) approach to farm simulation. In summary, Monte Carlo simulation is a robust analytical tool for quantitatively analysing alternative risky business plans, so it is easily adapted to simulating risky businesses such as farms. Due to the flexibility of Monte Carlo farm simulation for modelling the risk management aspects of US farm programs, the methodology has been relied upon by the US Congress to analyse each farm policy change since 1985. The FLIPSIM model has gained popularity with the Congressional Agricultural Committees for analysing alternative farm policy options put forth by different interest groups. During the 2014 Farm Bill debate the FLIPSIM model was used for numerous analyses to show the economic impacts of alternative farm programme permutations on representative crop, dairy and beef cattle farms in major production regions. One such report is Richardson et al. (2013), comparing the Senate and House farm bills. Annual reports have been prepared for Congress using the FLIPSIM model and about 100 representative farms since 1988, which has added to Congress's acceptance of the representative farm simulation approach. Simulation programming software tools are available to help analysts develop, validate and apply stochastic simulation models. A popular simulation tool is Simetar, an add-in for Excel. Numerous publications are available to help modellers apply Monte Carlo simulation to farm modelling.
Early publications on simulation by Reutlinger and Pouliquen are suggested readings in this area. Listed below are some of the most relevant publications in the area of firm/business Monte Carlo simulation:

• Richardson and Nixon, Description of FLIPSIMV: A General Farm Level Policy Simulation Model
• Richardson, Klose and Gray, in which the multivariate simulation procedure is explained in detail
• Richardson, Simulation for Applied Risk Management with an Introduction to Simetar
• Hardaker et al., Coping with Risk.
17.4.2. Optimisation modelling

Optimisation models operate by maximising, or minimising, an objective function subject to a set of constraints. LP is one of the most commonly
used optimisation models and it operates on the assumption that both the objective function and the problem constraints are all linear. This assumption makes LP relatively easy to use but limits its practical use in real-world applications. Most LP farm-level models are designed to maximise a profit function subject to a set of resource constraints representing the land, labour and capital available to the farm. The general form of an LP model can be expressed as:

\[ \text{Max/Min } f(x_1, x_2, \ldots, x_n) = c_1 x_1 + c_2 x_2 + \cdots + c_n x_n \tag{17.14} \]
subject to

\[ a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n \;(\leq \text{ or } \geq)\; b_i, \quad \forall i \tag{17.15} \]

\[ x_j \geq 0, \quad \forall j \tag{17.16} \]
where Eq. (17.14) represents the objective function, with x_n representing the level of activity n, c_n the return to or cost of activity n, a_in the resource requirement per unit of activity n and b_i the resource constraint. The optimal outcome is the one that returns the highest, or lowest, value of the objective function (Eq. (17.14)) subject to the constraints (Eqs. (17.15) and (17.16)). The simplest form of LP is the mono-period model. However, given that many farm planning problems and production processes involve activities that occur over several time periods, multi-period models are typically more useful. In such models the objective function and constraints span multiple periods, and different prices, costs and resource endowments can be specified for each time period. The periods can be linked together by transfer constraints specifying, for example, that the closing inventory in period t − 1 is equal to the opening inventory for period t. The objective function can be maximised across all periods or, with recursive modelling, each period can be solved as an independent problem. LP models assume linearity, meaning that in a profit-maximising LP model a constant profit and cost rate per unit of output is assumed and the same constant combination of inputs or resources is required per unit of production. In other words, it must be assumed that there are no economies of scale and no diminishing returns to resources. This has implications for modelling economic systems because it ignores some of the most basic principles of economics: the average variable cost is constant and the law of diminishing marginal returns cannot be modelled. To allow for these economic phenomena, nonlinear functions can be constructed using nonlinear programming (NLP). Programming models can also be made more sophisticated by including multiple objective functions.
Multiple-goal programming can be used to more realistically simulate farmer behaviour. For example, the model can be used to
maximise profit while also minimising risk, which may more accurately reflect farmer behaviour than a single objective function model.
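A toy instance of Eqs. (17.14) to (17.16) is a farm plan that maximises gross margin from two crops subject to land and labour constraints. With only two activities, the LP can be solved by enumerating the intersections of the constraint boundaries; every coefficient below is invented for illustration:

```python
from itertools import combinations

# Choose hectares of wheat (x1) and barley (x2); all coefficients assumed.
profit = (950.0, 620.0)                  # c_n: gross margin per hectare
constraints = [
    ((1.0, 1.0), 100.0),                 # land:   x1 + x2     <= 100 ha
    ((3.0, 1.5), 240.0),                 # labour: 3x1 + 1.5x2 <= 240 hours
]

def solve_2var_lp(profit, constraints):
    """Enumerate feasible vertices of a 2-variable LP (<= constraints, x >= 0)."""
    # Treat x1 >= 0 and x2 >= 0 as boundary lines too.
    lines = list(constraints) + [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0)]
    best, best_val = None, float("-inf")
    for (a1, b1), (a2, b2) in combinations(lines, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                      # parallel boundaries: no vertex
        x1 = (b1 * a2[1] - a1[1] * b2) / det
        x2 = (a1[0] * b2 - b1 * a2[0]) / det
        feasible = x1 >= -1e-9 and x2 >= -1e-9 and all(
            a[0] * x1 + a[1] * x2 <= b + 1e-9 for a, b in constraints)
        if feasible:
            val = profit[0] * x1 + profit[1] * x2
            if val > best_val:
                best, best_val = (x1, x2), val
    return best, best_val

plan, max_profit = solve_2var_lp(profit, constraints)
```

Here the optimum lies where the land and labour constraints bind simultaneously (60 ha of wheat and 40 ha of barley). For realistic problems with many activities and constraints, a proper LP solver would be used instead of vertex enumeration.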
17.4.3. Farm-level policy simulation modelling

While farm-level simulation models for policy analysis are now in vogue once again, the method of solving profit-maximising models for representative farms, as used in the 1960s, is now somewhat outdated. First, there has been a shift away from the representative farm approach. As data sources have become richer and computing power greater, modellers have tended to favour individual farm modelling of large datasets rather than representative or farm-type modelling. As explained by Ciaian et al. (2013), individual farm modelling has advantages over the representative farm approach: (i) it can better represent the heterogeneity among farms in terms of policy representation and impacts; (ii) it provides the greatest possible disaggregation regarding farms and activities and (iii) it reduces aggregation bias in responses to policy and market signals. Nonetheless, some highly regarded models of European farming, frequently used by the European Commission for impact assessment, continue to use a farm typology approach, such as CAPRI (Gocht & Britz, 2011), or a representative farm approach, such as AROPAj (De Cara & Jayet, 2011). There has also been a shift away from the profit-maximising approach. The normative nature of the optimisation model limits its usefulness for policy analysis. Policy makers are typically interested in how a new policy will affect real farmers and how such farmers are likely to respond to the policy change. Because of the normative nature of optimisation modelling, it is very difficult to replicate reality: without knowing and specifying an accurate objective function for each farmer, behaviour cannot be simulated accurately and hence the real impact of a policy change cannot be understood.
Multiple-goal programming can be used to make optimisation models more positive, but again it is quite prescriptive: judgements must be made about farmers’ objective functions and the ordering of those objectives in terms of importance. It is clear that farmers are motivated by a multiplicity of factors including, but not limited to, profit, and hence a predefined objective function or set of objective functions is unlikely to simulate behaviour accurately. Furthermore, optimisation models are not capable of simulating structural change patterns and showing the impact of a policy scenario on farm numbers, an issue of key concern to policy makers.

In response to these weaknesses, some advances have been made over the years to farm-level simulation models for policy analysis. Most notable is the linking of mathematical programming and econometric models using positive mathematical programming (PMP). PMP uses econometric techniques to calibrate the outcome of programming models to bring them closer to actual observed outcomes; see Howitt (1995) for a detailed discussion. PMP modelling has been embraced by the policy analysis community, and a number of large-scale farm-level policy analysis models use a PMP approach; see the CAPRI model, for example.

Econometrically estimated farm-level models have also been used for policy analysis. Rather than simulating the whole farm business, these models typically use farm-level panel data to examine the impact of a particular policy on a specific issue, such as land or labour allocation, efficiency or investment. There are countless examples of such models in both the American and European literature. For example, Sckokai and Moro (2009) and O’Toole, Newman, and Hennessy (2014) both used panel data from the European Farm Accountancy Data Network database to econometrically estimate the impact of various types of direct payments and capital grants on farm investment. Ahearn, El-Osta, and Dewbre (2006) and Hennessy and Rehman (2007) both used econometric methods to model farmers’ labour allocation decisions and to examine the impact of direct payments, both coupled and decoupled, on off-farm labour supply. Econometric farm-level models have also been used to simulate farmers’ decisions to enter policy schemes and programmes. Hynes and Garvey (2009), for example, used farm-level panel data to identify the types of farmers most likely to enter agri-environmental schemes and used this information to draw inferences on the effectiveness of such schemes. While econometric approaches offer many advantages over both optimisation and simulation models, one of the major drawbacks of econometric modelling for policy analysis is that it relies on past observations and as such cannot be used to investigate the impact of new policy levers.

In more recent years, many of the developments in the simulation of farmer behaviour have moved beyond optimisation, focusing instead on behavioural economics and attempting to better understand, explain and simulate farmer behaviour.
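The calibration idea behind PMP can be sketched in a deliberately simplified form. The margins, land area and payment change below are invented, and a real PMP model (in the spirit of Howitt, 1995) would recover its calibration terms from the duals of a constrained LP rather than from a single closed-form expression:

```python
# Stylised PMP sketch: calibrate a quadratic cost term so the model
# reproduces the observed crop mix, then simulate a hypothetical payment
# change. All margins, areas and the payment are illustrative assumptions.

LAND = 100.0           # total hectares
observed_x1 = 60.0     # observed area of crop 1
c1, c2 = 800.0, 500.0  # linear net margins (per ha) of crops 1 and 2

# Calibration: choose gamma so marginal margins are equal at the observed
# allocation, i.e. c1 - gamma * x1 = c2 when x1 = observed_x1. A plain LP
# with these margins would put ALL land in crop 1 (a corner solution).
gamma = (c1 - c2) / observed_x1

def optimal_x1(c1, c2):
    """Interior optimum of c1*x1 - 0.5*gamma*x1**2 + c2*(LAND - x1)."""
    return min(LAND, max(0.0, (c1 - c2) / gamma))

print(optimal_x1(c1, c2))        # baseline reproduces the observed 60.0 ha
print(optimal_x1(c1, c2 + 100))  # payment of +100/ha on crop 2 -> 40.0 ha
```

The point of the exercise is the smooth, interior response to the simulated payment, in contrast to the all-or-nothing reallocation an uncalibrated LP would produce.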
A number of modelling approaches draw from both social-psychological theory and the principles of economics to better explain farmers’ behaviour, using frameworks such as the Theory of Planned Behaviour (see, for example, Rehman et al., 2007). These approaches tend to be highly data intensive, and while progress has been made in describing the factors that motivate farmers, such approaches have not yet been applied in large-scale policy analysis studies.
17.4.4. Farm-level spatial modelling

Given the spatial dimension of agriculture and related policy, there is increased use of spatial microsimulation models in agriculture. O’Donoghue et al. (forthcoming), Hermes and Poulsen (2012) and the chapter in this Handbook dealing with spatial microsimulation describe a number of potential methodologies to do this. Potential options include:
• Iterative proportional fitting (IPF)
• Deterministic reweighting
• Combinatorial optimisation
• Quota sampling (QS)
The deterministic approach to reweighting national sample survey data attempts to fit small area statistics tables or benchmarks for each small area without the use of random sampling procedures (Ballas, Clarke, & Wiemers, 2005). Iterative proportional fitting (IPF) may be used to generate cross-tabulated control totals at the small-area level; these are compared with similar cross-tabulated totals from the survey data to produce weights.

An alternative mechanism for generating weights for spatial micro data is a regression-based reweighting method. An example is GREGWT, a generalised regression reweighting algorithm written by the Australian Bureau of Statistics (ABS) and developed to reweight its survey data to constraints from other Australian data sources (see Tanton, Vidyattama, Nepal, & McNamara, 2011).

Another approach to generating spatially disaggregated microdata is the use of combinatorial optimisation methods such as simulated annealing (SA), which can be used to reweight an existing microdata sample to fit small area population statistics. For example, aspatial microdata sets can be reweighted to estimate the micro population at a local spatial scale (Williamson, Birkin, & Rees, 1998). The method differs from IPF primarily in that it reweights or samples from a micro-dataset until a new micro-dataset is generated that reflects the characteristics of the small area. In an agricultural context, this method has been applied to the spatial modelling of agri-environmental policy (Hynes et al., 2009b).

However, a number of challenges arise in developing farm-level spatial microsimulation models using the methods described above: avoiding the income-smoothing tendency of the weighting methodology, improving computational efficiency, and allowing adjustment to improve the spatial heterogeneity of stocking rates. O’Donoghue, Grealis, et al. (2014) have employed a method known as quota sampling (QS) to deal with these issues.
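As a rough illustration of how IPF adjusts a survey cross-tabulation to small-area margins, the loop below alternately rescales rows and columns until both sets of totals are matched. The seed table and margins are hypothetical:

```python
# Minimal iterative proportional fitting (IPF) sketch: adjust weighted
# survey counts in a 2-way table (farm system x size class) until they
# match small-area margins. The seed table and margins are hypothetical.

seed = [[10.0, 20.0], [30.0, 40.0]]  # survey counts: rows = system, cols = size
row_targets = [40.0, 60.0]           # small-area totals by farm system
col_targets = [50.0, 50.0]           # small-area totals by size class

for _ in range(100):                 # alternate row and column scaling
    for i, target in enumerate(row_targets):
        s = sum(seed[i])
        seed[i] = [v * target / s for v in seed[i]]
    for j, target in enumerate(col_targets):
        s = sum(row[j] for row in seed)
        for row in seed:
            row[j] *= target / s

row_sums = [round(sum(row), 4) for row in seed]
col_sums = [round(sum(row[j] for row in seed), 4) for j in range(2)]
print(row_sums, col_sums)            # both converge to the targets
```

Real applications fit many constraint tables for thousands of small areas at once; the fitted cell values then serve as the small-area weights described in the text.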
QS is a probabilistic reweighting methodology that operates in a similar fashion to SA: survey data are reweighted according to key constraining totals for each small area, with amendments made to the sampling procedure in order to improve computational efficiency. Like SA, QS selects observations at random and considers whether they are suitable for a given small area based on conformance with aggregate totals for each small-area characteristic. Unlike SA, QS only assigns units that conform to the aggregate constraint totals, and once a unit is selected it is not replaced; this is the main computational improvement.
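A minimal sketch of the QS selection logic as described above, with hypothetical farm types and quota totals standing in for the small-area constraints:

```python
import random

# Stylised quota sampling (QS) sketch: draw survey farms at random, keep a
# draw only if its type still has unfilled quota in the small area, and do
# not replace selected farms. Farm types and quotas are hypothetical.

random.seed(42)
survey = [{"id": i, "type": "dairy" if i % 3 == 0 else "beef"}
          for i in range(60)]
quota = {"dairy": 5, "beef": 10}     # small-area totals by farm type

pool = survey[:]                     # candidates not yet selected
selected = []
while any(quota.values()) and pool:
    farm = pool.pop(random.randrange(len(pool)))  # draw without replacement
    if quota.get(farm["type"], 0) > 0:            # conforms to a remaining quota
        quota[farm["type"]] -= 1
        selected.append(farm)
    # non-conforming draws are simply discarded rather than re-drawn

counts = {t: sum(f["type"] == t for f in selected) for t in ("dairy", "beef")}
print(counts)                        # quotas filled exactly
```

Because accepted units are never returned to the pool, the algorithm touches each record at most once per small area, which is the computational gain over SA's repeated swap-and-evaluate cycle.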
17.5. Conclusions and future directions

Farm-level policy analysis has evolved considerably over the last number of decades and, as discussed, there are now many types of policy models: hypothetical, optimisation, simulation, econometric, econometrically linked mathematical programming, behavioural, macro-micro, spatial and environmental models. The type of policy model used depends on the policy question as well as on data availability. Programming models continue to be the most frequently used models for large-scale policy analyses in the EU, covering many farm types and Member States, while simulation models dominate in the United States. However, the use of farm-level models employing other simulation approaches is increasing.

Future research directions for the field include deepening our understanding of the interaction between the environment and agricultural productivity as more geo-referenced data become available and, conversely, enhancing our knowledge of the impact of agriculture on the environment. Most bio-economic systems models simulate farm-level changes using single farms. However, given the heterogeneity of responses on different farms with different systems and efficiency levels, the development of bio-economic systems models using heterogeneous data, as in Janssen et al. (2010), can help us to understand the potential impact of market, practice and technological change across agricultural sectors. Future developments could also help us to understand better, from an ex ante perspective, the impact of extension or policy interventions. An area that remains under-developed is the intersection between farms and the wider rural economy.
References

Ahearn, M., El-Osta, H., & Dewbre, J. (2006). The impact of coupled and decoupled government subsidies on off-farm labour participation of U.S. farm operators. American Journal of Agricultural Economics, 88, 393–408.

Ahmed, V., Abbas, A., & Ahmed, S. (2007). Taxation reforms: A CGE-microsimulation analysis for Pakistan. In PEP research network general meeting, Lima, Peru.

Arndt, C., Benfica, R., Tarp, F., Thurlow, J., & Uaiene, R. (2010). Biofuels, poverty, and growth: A computable general equilibrium analysis of Mozambique. Environment and Development Economics, 15(1), 81–105.

Ballas, D., Clarke, G. P., & Wiemers, E. (2005). Building a dynamic spatial microsimulation model for Ireland. Population, Space and Place, 11, 157–172.
Ballas, D., Clarke, G. P., & Wiemers, E. (2006). Spatial microsimulation for rural policy analysis in Ireland: The implications of CAP reforms for the national spatial strategy. Journal of Rural Studies, 22(3), 367–378.

Benin, S., Thurlow, J., Diao, X., McCool, C., & Simtowe, F. (2008). Agricultural growth and investment options for poverty reduction in Malawi. Washington, DC: International Food Policy Research Institute.

Bennett, V. J., Beard, M., Zollner, P. A., Fernandez-Juricic, E., Westphal, L., & LeBlanc, C. L. (2009). Understanding wildlife responses to human disturbance through simulation modelling: A management tool. Ecological Complexity, 6, 113–134.

Bergmann, H., Noack, E. M., & Kenneth, J. T. (2011, 18–20 April). The distribution of CAP payments – Redistributional injustice or spatially adapted policy? 85th annual conference of the Agricultural Economics Society, Warwick University.

Berntsen, J., Petersen, B., Jacobsen, B., Olesen, J., & Hutchings, N. (2003). Evaluating nitrogen taxation scenarios using the dynamic whole farm simulation model FASSET. Agricultural Systems, 76(3), 817–839.

Boccanfuso, D., & Savard, L. (2007). Impacts analysis of cotton subsidies on poverty: A CGE macro-accounting approach. Journal of African Economies, 16(4), 629–659.

Boccanfuso, D., & Savard, L. (2008). Groundnut sector liberalization in Senegal: A multi-household CGE analysis. Oxford Development Studies, 36(2), 159–186.

Breen, J., & Hennessy, T. (2003). The impact of the MTR and WTO reform on Irish farms. Teagasc: The Irish Agriculture and Food Development Authority.

Callan, T., & Van Soest, A. (1993). Female labour supply in farm households: Farm and off-farm participation. Economic and Social Review, 24(4), 313–334.

Chemingui, M. A., & Thabet, C. (2009). Agricultural trade liberalisation and poverty in Tunisia: Micro-simulation in a general equilibrium framework. Aussenwirtschaft: Schweizerische Zeitschrift für Internationale Wirtschaftsbeziehungen [The Swiss Review of International Economic Relations], 64(1), 71.

Chitiga, M., & Mabugu, R. (2005). The impact of tariff reduction on poverty in Zimbabwe: A CGE top-down approach. South African Journal of Economics and Management Sciences, 8(1), 102–116.

Chyzheuskaya, A., O’Donoghue, C., & O’Neill, S. (2014). Using a farm micro-simulation model to evaluate the impact of nitrogen reduction mitigation measures on farm income in Ireland. International Journal of Agricultural Management, 3(4), 232–242.
Ciaian, P., Espinosa, M., Gomez y Paloma, S., Heckelei, T., Langrell, S., Louhichi, K., … Vard, T. (2013). Farm level modelling of CAP: A methodological overview. Seville, Spain: European Commission, Joint Research Centre, Institute for Prospective Technological Studies.

Clancy, D., Breen, J., Morrissey, K., O’Donoghue, C., & Thorne, F. (2013). The location economics of biomass production for electricity generation. In C. O’Donoghue, S. Hynes, K. Morrissey, D. Ballas, & G. Clarke (Eds.), Spatial microsimulation for rural policy analysis. Advances in Spatial Science. Berlin: Springer-Verlag.

Clarke, G. (1996). Microsimulation for urban and regional policy analysis. London: Pion.

Cororaton, C., & Cockburn, J. (2007). Trade reform and poverty – Lessons from the Philippines: A CGE microsimulation analysis. Journal of Policy Modelling, 20(1), 141–163.

Crosson, P., O’Kiely, P., O’Mara, F. P., & Wallace, M. (2006). The development of a mathematical model to investigate Irish beef production systems. Agricultural Systems, 89(2), 349–370.

Crosson, P., Shalloo, L., O’Brien, D., Lanigan, G. J., Foley, P. A., Boland, T. M., & Kenny, D. A. (2011). A review of whole farm systems models of greenhouse gas emissions from beef and dairy cattle production systems. Animal Feed Science and Technology, 166, 29–45.

Dartanto, T. (2010). Volatility of world rice prices, import tariffs and poverty in Indonesia: A CGE-microsimulation analysis. Ekonomi Dan Keuangan Indonesia, 58(3), 335.

Dartanto, T. (2011). Volatility of world soybean prices, import tariffs and poverty in Indonesia – A CGE-microsimulation analysis. Margin: The Journal of Applied Economic Research, 5(2), 139–181.

De Cara, S., & Jayet, P. A. (2011). Marginal abatement costs of greenhouse gas emissions from European agriculture, cost effectiveness, and the EU non-ETS burden sharing agreement. Ecological Economics, 70, 1680–1690.

De Souza Ferreira Filho, J. B., dos Santos, C. V., & do Prado Lima, S. M. (2010). Case study: Tax reform, income distribution and poverty in Brazil: An applied general equilibrium analysis. International Journal of Microsimulation, 3(1), 114–117.

Diao, X., Alpuerto, V., & Nwafor, M. (2009). Economywide impact of avian flu in Nigeria – A dynamic CGE model analysis. HPAI research brief no. 15. IFPRI, Washington, DC.

Dijk, J., Leneman, H., & van der Veen, M. (1996). The nutrient flow model for Dutch agriculture: A tool for environmental policy evaluation. Journal of Environmental Management, 46(1), 43–55.

Doole, G. J. (2012). Cost-effective policies for improving water quality by reducing nitrate emissions from diverse dairy farms: An abatement-cost perspective. Agricultural Water Management, 104, 10–20.

Doole, G. J., Marsh, D., & Ramilan, T. (2013). Evaluation of agri-environmental policies for reducing nitrate pollution from New Zealand dairy farms accounting for firm heterogeneity. Land Use Policy, 30(1), 57–66.

Doucha, T., & Vaněk, D. (2006). Interactions between agricultural policy and multifunctionality in Czech agriculture. Coherence of agricultural and rural development policies (p. 239). Paris: OECD.

Ferreira, F. H., Fruttero, A., Leite, P. G., & Lucchetti, L. R. (2013). Rising food prices and household welfare: Evidence from Brazil in 2008. Journal of Agricultural Economics, 64(1), 151–176.

Ghosh, M., & Rao, J. N. K. (1994). Small area estimation: An appraisal. Statistical Science, 9, 55–76.

Gocht, A., & Britz, W. (2011). EU-wide farm type supply models in CAPRI – How to consistently disaggregate sector models into farm type models. Journal of Policy Modelling, 33, 146–167.

Hanrahan, K., & Hennessy, T. (Eds.). (2013). Teagasc submission made in response to the Department of Agriculture, Food and the Marine CAP Public Consultation Process. Teagasc Working Group on CAP Reform.

Hardaker, J. B., Huirne, R. B. M., Anderson, J. R., & Lien, G. (2004a). Coping with risk in agriculture. Cambridge, MA: CABI Publishing.

Hardaker, J. B., Richardson, J. W., Lien, G., & Schumann, K. D. (2004b). Stochastic efficiency analysis with risk aversion bounds: A simplified approach. The Australian Journal of Agricultural and Resource Economics, 48(2), 253–270.

Hemme, T., Deblitz, C., Isermeyer, F., Knutson, R., & Anderson, D. (2000). The International Farm Comparison Network (IFCN) – Objectives, organisation and first results on international competitiveness of dairy production. Züchtungskunde, 72(6), 428–439.

Hennessey, T., & Thorne, F. (2006). The impact of the WTO Doha development round on farming in Ireland. Galway, Ireland: Teagasc Rural Economy Research Centre.

Hennessy, T., & Rehman, T. (2007). An investigation into the factors affecting the occupational choices of farm heirs. Journal of Agricultural Economics, 58(1), 61–75.

Hermes, K., & Poulsen, M. (2012). A review of current methods to generate synthetic spatial microdata using reweighting and future directions. Computers, Environment and Urban Systems, 36(4), 281–290.

Hertel, T. W. (1997). Global trade analysis: Models and applications. Cambridge: Cambridge University Press.

Howitt, R. E. (1995). A calibration method for agricultural economic production models. Journal of Agricultural Economics, 46(2), 147–159.
Huffman, W. E., & Lange, M. D. (1989). Off-farm work decisions of husbands and wives: Joint decision making. The Review of Economics and Statistics, 71, 471–480.

Hynes, S., Farrelly, N., Murphy, E., & O’Donoghue, C. (2008). Modelling habitat conservation and participation in agri-environmental schemes: A spatial microsimulation approach. Ecological Economics, 66(2), 258–269.

Hynes, S., & Garvey, E. (2009). Modelling farmers’ participation in agri-environmental schemes using panel data: An application to the rural environmental protection scheme in Ireland. Journal of Agricultural Economics, 60(3), 546–562.

Hynes, S., Morrissey, K., & O’Donoghue, C. (2013). Modelling greenhouse gas emissions from agriculture. In C. O’Donoghue, S. Hynes, K. Morrissey, D. Ballas, & G. Clarke (Eds.), Spatial microsimulation for rural policy analysis (pp. 143–157). Berlin: Springer.

Hynes, S., Morrissey, K., O’Donoghue, C., & Clarke, G. (2009a). A spatial microsimulation analysis of methane emissions from Irish agriculture. Journal of Ecological Complexity, 6, 135–146.

Hynes, S., Morrissey, K., O’Donoghue, C., & Clarke, G. (2009b). Building a static farm level spatial microsimulation model for rural development and agricultural policy analysis in Ireland. International Journal of Agricultural Resources, Governance and Ecology, 8(2), 282–299.

Janssen, S., Louhichi, K., Kanellopoulos, A., Zander, P., Flichman, G., Hengsdijk, H., … van Ittersum, M. K. (2010). A generic bio-economic farm model for environmental and economic assessment of agricultural systems. Environmental Management, 46(6), 862–877.

Kelley, H., Rensburg, T. M. V., & Yadav, L. (2013). A micro-simulation evaluation of the effectiveness of an Irish grass roots agri-environmental scheme. Land Use Policy, 31, 182–195.

Kimura, S., Anton, J., & Cattaneo, A. (2012, August 18–24). Effective risk management policy choices under climate change: An application to Saskatchewan crop sector. In 2012 conference, Foz do Iguacu, Brazil (No. 126736). International Association of Agricultural Economists.

Klein, K., & Narayanan, S. (2008). Farm level models: A review of developments, concepts and applications in Canada. Canadian Journal of Agricultural Economics, 40(3), 351–368.

Kruseman, G., Blokland, P. W., Bouma, F., Luesink, H., Mokveld, L., & Vrolijk, H. (2008a). Micro-simulation as a tool to assess policy concerning non-point source pollution: The case of ammonia in Dutch agriculture. The Hague: LEI Wageningen UR.

Kruseman, G., Blokland, P. W., Bouma, F., Luesink, H., Mokveld, L., & Vrolijk, H. (2008b). Micro-simulation as a tool to assess policy concerning non-point source pollution: The case of ammonia in Dutch agriculture. In presentation at the 107th EAAE seminar on modelling of agricultural and rural development policies (Vol. 29), Sevilla.

Kruseman, G., Blokland, P. W., Luesink, H., & Vrolijk, H. (2008, August 26–29). Ex-ante evaluation of tightening environmental policy: The case of mineral use in Dutch agriculture. In XII EAAE Congress, Ghent, Belgium.

Lal, R., & Follett, R. F. (2009). A national assessment of soil carbon sequestration on cropland: A microsimulation modelling approach. In R. Lal & R. F. Follett (Eds.), Soil carbon sequestration and the greenhouse effect. Madison, WI: Soil Science Society of America. ISBN: 978-0-89118-859-9.

Lindgren, U., & Elmquist, H. (2005). Environmental and economic impacts of decision-making at an arable farm: An integrative modelling approach. AMBIO: A Journal of the Human Environment, 34(4), 393–401.

Lovett, D. K., Shalloo, L., Dillon, P., & O’Mara, F. P. (2006). A systems approach to quantify greenhouse gas fluxes from pastoral dairy production as affected by management regime. Agricultural Systems, 88(2), 156–179.

Lovett, D. K., Shalloo, L., Dillon, P., & O’Mara, F. P. (2008). Greenhouse gas emissions from pastoral based dairying systems: The effect of uncertainty and management change under two contrasting production systems. Livestock Science, 116(1), 260–274.

Mabugu, R., & Chitiga, M. (2009). Is increased agricultural protection beneficial for South Africa? Economic Modelling, 26(1), 256–265.

Manjunatha, A. V., Shruthy, M. G., & Ramachandra, V. A. (2013). Global marketing systems in the dairy sector: A comparison of selected countries. Indian Journal of Marketing, 43(10), 515.

McCormack, M., O’Donoghue, C., & Stephen, H. (2014, March 14). Trends in CAP over time: A hypothetical farm analysis. Paper presented to the Agricultural Economics Society of Ireland, Dublin. Retrieved from http://www.aesi.ie/aesi2014/mmccormack.pdf. Accessed on June 8, 2014.

Morley, S., & Piñeiro, V. (2004). The effect of WTO and FTAA on agriculture and the rural sector in Latin America (No. 3). Washington, DC: International Food Policy Research Institute (IFPRI).

O’Brien, D., Shalloo, L., Grainger, C., Buckley, F., Horan, B., & Wallace, M. (2010). The influence of strain of Holstein-Friesian cow and feeding system on greenhouse gas emissions from pastoral dairy farms. Journal of Dairy Science, 93(7), 3390–3402.

O’Donoghue, C. (2013). Modelling farm viability. In C. O’Donoghue, S. Hynes, K. Morrissey, D. Ballas, & G. Clarke (Eds.), Spatial microsimulation for rural policy analysis. Advances in Spatial Science. Berlin: Springer-Verlag.
O’Donoghue, C., Grealis, E., Farrell, N., Loughrey, J., Hanrahan, K., Hennessy, T., & Meredith, D. (2014, March 14). Modelling the spatial distributional effect of Common Agricultural Policy reform. Paper presented to the Agricultural Economics Society of Ireland, Dublin. Retrieved from http://www.aesi.ie/aesi2014/codonoghue.pdf. Accessed on June 8, 2014.

O’Donoghue, C., Hynes, S., Morrissey, K., Ballas, D., & Clarke, G. (2013). Spatial microsimulation for rural policy analysis. Advances in Spatial Science. Berlin: Springer-Verlag.

O’Donoghue, C., Lennon, J., & Morrissey, K. (forthcoming). Survey of spatial microsimulation modelling. International Journal of Microsimulation, 7(2).

OECD. (2006). Documentation of the AGLINK model. Working Party on Agricultural Policies and Markets, AGR-CAAPM(2006)16/FINAL. Directorate for Food, Agriculture and Fisheries, Committee for Agriculture, OECD, Paris.

O’Toole, C., Newman, C., & Hennessy, T. (2014). Financing constraints and agricultural investment: Effects of the Irish financial crisis. Journal of Agricultural Economics, 65(1), 152–176.

Pauw, K., & Thurlow, J. (2011). Agricultural growth, poverty, and nutrition in Tanzania. Food Policy, 36(6), 795–804.

Plaxico, J. S., & Tweeten, L. G. (1963). Representative farms for policy and projection research. Journal of Farm Economics, 45, 1458–1465.

Potter, S. R., Atwood, J. D., Lemunyon, J., & Kellogg, R. L. (2009). A national assessment of soil carbon sequestration on cropland: A microsimulation modelling approach. Soil Carbon Sequestration and the Greenhouse Effect, 111.

Pouliquen, L. Y. (1970). Risk analysis in project appraisal. World Bank Staff Occasional Papers Number Eleven, p. 135.

Prochorowicz, J., & Rusielik, R. (2007). Relative efficiency of oilseed crops production in the selected farms in Europe and the world in 2005. ACTA Scientiarum Polonorum, 57, 57–62.

Ramilan, T., Scrimgeour, F., & Marsh, D. (2011). Analysis of environmental and economic efficiency using a farm population microsimulation model. Mathematics and Computers in Simulation, 81(7), 1344–1352.

Rehman, T., McKemey, K., Yates, C. M., Cooke, R. J., Garforth, C. J., Tranter, R. B., … Dorward, P. T. (2007). Identifying and understanding factors influencing the uptake of new technologies on dairy farms in SW England using the theory of reasoned action. Agricultural Systems, 94, 287–293.

Reutlinger, S. (1970). Techniques for project appraisal under uncertainty (pp. 113). Baltimore, MD: Johns Hopkins University Press.

Richardson, J. W. (2005, January). Simulation for applied risk management. College Station, TX: Department of Agricultural Economics, Texas A&M University.
Richardson, J. W., Klose, S. L., & Gray, A. W. (2000). An applied procedure for estimating and simulating multivariate empirical (MVE) probability distributions in farm-level risk assessment and policy analysis. Journal of Agricultural and Applied Economics, 32(2), 299–315.

Richardson, J. W., & Mapp, H. P., Jr. (1976). Use of probabilistic cash flows in analysing investments under conditions of risk and uncertainty. Southern Journal of Agricultural Economics, 8(December), 19–24.

Richardson, J. W., & Nixon, C. J. (1986, July). Description of FLIPSIM V: A general firm level policy simulation model. Texas Agricultural Experiment Station, Bulletin B-1528.

Richardson, J. W., & Outlaw, J. L. (2005, August). Web delivery of a Monte Carlo simulation model: The base and yield analyser experience. Journal of Agricultural and Applied Economics, 37(2), 425–431.

Richardson, J. W., & Outlaw, J. L. (2008). Ranking risky alternatives: Innovations in subjective utility analysis. In C. A. Brebbia & E. Beriatos (Eds.), Risk analysis VI, simulation and hazard mitigation. WIT Transactions on Information and Communication (Vol. 39, pp. 213–224). Southampton, United Kingdom: WIT Press.

Richardson, J. W., Outlaw, J. L., Knapek, G. M., Raulston, J. M., Bryant, H. L., Herbst, B. K., & David, P. E. (2013, October). Economic impacts of the safety net provisions in the Senate (S. 954) and House (H.R. 2642) 2013 Farm Bills on AFPC’s representative crop farms. Texas AgriLife Research, Texas AgriLife Extension Service, Texas A&M University, Department of Agricultural Economics, Agricultural and Food Policy Center Working Paper No. 13-3.

Richardson, J. W., Schumann, K., & Feldman, P. (2005, January). Simetar: Simulation for Excel to analyse risk. College Station, TX: Department of Agricultural Economics, Texas A&M University.

Sckokai, P., & Moro, D. (2009). Modelling the impact of the CAP single farm payment on farm investment and output. European Review of Agricultural Economics, 36, 395–423.

Shalloo, L., Dillon, P., Rath, M., & Wallace, M. (2004). Description and validation of the Moorepark dairy system model. Journal of Dairy Science, 87(6), 1945–1959.

Sharples, J. A. (1969). The representative farm approach to estimation of supply response. American Journal of Agricultural Economics, 51, 353–361.

Shrestha, S., Hennessy, T., & Hynes, S. (2007). The effect of decoupling on farming in Ireland: A regional analysis. Irish Journal of Agricultural & Food Research, 46(1), 113.

Tanton, R., Vidyattama, Y., Nepal, B., & McNamara, J. (2011). Small area estimation using a reweighting algorithm. Journal of the Royal Statistical Society: Series A (Statistics in Society), 174(4), 931–951.
Thomas, A. (2013). Linking farm level models with environmental impact models. In P. Ciaian, M. Espinosa, S. Gomez y Paloma, T. Heckelei, S. Langrell, K. Louhichi, … T. Vard (Eds.), Farm level modelling of CAP: A methodological overview. Seville, Spain: European Commission, Joint Research Centre, Institute for Prospective Technological Studies.

Thorne, F. S., & Fingleton, W. (2006). Examining the relative competitiveness of milk production: An Irish case study (1996–2004). Journal of International Farm Management, 3(4), 49–61.

Thurlow, J. (2008). Agricultural growth options for poverty reduction in Mozambique. Regional Strategic Analysis and Knowledge Support System (ReSAKSS) Working Paper No. 20, p. 14.

van Leeuwen, E., & Dekkers, J. (2013). Determinants of off-farm income and its local patterns: A spatial microsimulation of Dutch farmers. Journal of Rural Studies, 31, 55–66.

van Leeuwen, E. S. (2010). Microsimulation of rural households. In Urban-rural interactions (pp. 115–135). Heidelberg, Germany: Physica-Verlag HD.

Villot, X. L. (1998). The effects of a sulphur tax levied on the Spanish electricity industry. In V Encuentro de Economía Pública: La Realidad de la Solidaridad en la Financiación Autonómica (p. 39).

Warr, P., & Yusuf, A. A. (2009). International food prices and poverty in Indonesia. Australia: Australian National University (ANU), College of Asia and the Pacific, Arndt-Corden Division of Economics.

Westhoff, P., & Young, R. (2001). The status of FAPRI’s EU modelling effort. In T. Heckelei, H. P. Witzke, & W. Henrichsmeyer (Eds.), Agricultural sector modelling and policy information systems (pp. 256–263). Kiel: Wissenschaftsverlag Vauk Kiel KG.

Williamson, P., Birkin, M., & Rees, P. H. (1998). The estimation of population microdata by using data from small area statistics and samples of anonymised records. Environment and Planning A, 30(5), 785–816.

Zander, K., Thobe, P., & Nieberg, H. (2007). Economic impacts of the adoption of the common agricultural policy on typical organic farms in selected new member states. Jahrbuch der Österreichischen Gesellschaft für Agrarökonomie, 16, 85–96.