MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS Volume II
ADVANCED TOPICS IN ENVIRONMENTAL SCIENCE SERIES
SERIES EDITOR Grady Hanrahan John Stauffer Endowed Chair of Analytical Chemistry California Lutheran University Thousand Oaks, California, USA This series of high-level reference works provides a comprehensive look at key subjects in the field of environmental science. The aim is to describe cutting-edge topics covering the full spectrum of physical, chemical, biological and sociological aspects of this important discipline. Each book is a vital technical resource for scientists and researchers in academia, industry and government-related bodies who have an interest in the environment and its future sustainability.
Published titles Modelling of Pollutants in Complex Environmental Systems, Volume I Edited by Grady Hanrahan Modelling of Pollutants in Complex Environmental Systems, Volume II Edited by Grady Hanrahan
Forthcoming titles Practical Applications of Environmental Statistics and Data Analysis Edited by Yue Rong
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS Volume II
Edited by
Grady Hanrahan
Published in 2010 by ILM Publications Oak Court Business Centre, Sandridge Park, Porters Wood, St Albans, Hertfordshire AL3 6PH, UK 6635 West Happy Valley Road, Suite 104, #505, Glendale, AZ 85310, USA www.ilmpublications.com/www.ilmbookstore.com Copyright © 2010 ILM Publications ILM Publications is a trading division of International Labmate Limited All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the publisher. Requests to the publisher should be addressed to ILM Publications, Oak Court Business Centre, Sandridge Park, Porters Wood, St Albans, Hertfordshire AL3 6PH, UK, or emailed to [email protected]. Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. The publisher is not associated with any product or vendor mentioned in this book. This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.
British Library Cataloguing in Publication Data A catalogue record for this book is available from the British Library Library of Congress Control Number: 2009922569 ISBN 978-1-906799-01-4 Commissioning Editor: Lindsey Langston Cover Designer: Paul Russen Typeset by Keytec Typesetting Ltd, Dorset, UK Printed and bound in the UK by MPG Books Group, Bodmin and King’s Lynn For orders, visit the ILM Bookstore website at www.ilmbookstore.com or email [email protected]
TABLE OF CONTENTS

Figure Captions for the Colour Insert  xi
Key Abbreviations  xv
The Editor  xvii
The Contributors  xix
Foreword  xxiii
Acknowledgements  xxv
Preface  xxvii

Part I  Decision Support and Assessment Modelling

Chapter 1  Environmental Fate and Bioaccumulation Modelling at the US Environmental Protection Agency: Applications to Inform Decision-Making  3
Elsie M. Sunderland, Christopher D. Knightes, Katherine von Stackelberg and Neil A. Stiber
  1.1 Introduction  3
  1.2 Atmospheric fate and transport modelling  6
  1.3 Surface water quality modelling  14
  1.4 Bioaccumulation modelling  20
  1.5 Contaminated site remediation  24
  1.6 Use of expert elicitation in modelling assessments  32
  1.7 Summary: regulatory applications of fate and bioaccumulation models  35
  Endnotes  37
  References  37

Chapter 2  Contaminated Land Decision Support: A Review of Concepts, Methods and Systems  43
Aisha Bello-Dambatta and Akbar A. Javadi
  2.1 Introduction  43
  2.2 Contaminated land management  46
  2.3 Computer-based contaminated land management  47
  2.4 Contaminated land decision support  50
  2.5 Decision support systems for contaminated land management  52
  2.6 Contaminated land DSS trends  54
  2.7 Conclusion  56
  References  57

Chapter 3  An Overview of Exposure Assessment Models Used by the US Environmental Protection Agency  61
Pamela R.D. Williams, Bryan J. Hubbell, Eric Weber, Cathy Fehrenbacher, David Hrdy and Valerie Zartarian
  3.1 Introduction  61
  3.2 Background information  65
  3.3 Methods  67
  3.4 Results  68
  3.5 Discussion  118
  Acknowledgements  122
  Endnotes  122
  References  122

Part II  Aquatic Modelling and Uncertainty

Chapter 4  Modelling Chemical Speciation: Thermodynamics, Kinetics and Uncertainty  135
Jeanne M. VanBriesen, Mitchell Small, Christopher Weber and Jessica Wilson
  4.1 Introduction  135
  4.2 Chemical speciation modelling  136
  4.3 Analytical methods  140
  4.4 Experimental and modelling methods  141
  4.5 Results and discussion  142
  4.6 Conclusions  148
  Endnotes  148
  References  148

Chapter 5  Multivariate Statistical Modelling of Water- and Soil-Related Environmental Compartments  153
Aleksander Astel
  5.1 Introduction  153
  5.2 Cluster analysis  153
  5.3 Principal component analysis (PCA)  158
  5.4 Principal component regression (PCR)  160
  5.5 N-way PCA  161
  5.6 Self-organising maps (SOM, Kohonen maps)  163
  5.7 Case studies  167
  5.8 Conclusions  205
  References  207

Chapter 6  Modelling the Fate of Persistent Organic Pollutants in Aquatic Systems  215
Tuomo M. Saloranta, Ian J. Allan and Kristoffer Næs
  6.1 Introduction  215
  6.2 Modelling the fate and transport of persistent organic pollutants in aquatic systems  219
  6.3 Data quality and model results: dealing with uncertainties  224
  6.4 The Grenland fjords dioxin case: simulating contaminated sediment remediation alternatives  230
  6.5 Summary  238
  Acknowledgements  239
  References  239

Chapter 7  A Bayesian-Based Inverse Method (BBIM) for Parameter and Load Estimation in River Water Quality Modelling under Uncertainty  243
Yong Liu
  7.1 Introduction  243
  7.2 Study area  245
  7.3 Bayesian-based inverse method  247
  7.4 Results and discussion  250
  7.5 Conclusion  256
  Acknowledgements  258
  References  258

Part III  Soil, Sediment and Subsurface Modelling and Pollutant Transport

Chapter 8  Colloid-Facilitated Contaminant Transport in Unsaturated Porous Media  263
Arash Massoudieh and Timothy R. Ginn
  8.1 Introduction/evidence  263
  8.2 Colloid transport in unsaturated media  265
  8.3 Colloid-facilitated transport  276
  8.4 Heterogeneity  281
  8.5 Unknowns and future directions  283
  References  286

Chapter 9  Regional Modelling of Carbon, Nitrogen and Phosphorus Geospatial Patterns  293
Sabine Grunwald, Gustavo M. Vasques, Nicholas B. Comerford, Gregory L. Bruland and Christine M. Bliss
  9.1 Introduction  293
  9.2 Materials and methods  294
  9.3 Results and discussion  299
  9.4 Conclusions  308
  Acknowledgements  308
  References  309

Chapter 10  Modelling of Solute Transport in Soils Considering the Effect of Biodegradation and Microbial Growth  311
Mohaddeseh M. Nezhad and Akbar A. Javadi
  10.1 Introduction  311
  10.2 Governing equations of fluid flow  314
  10.3 Mechanisms of contaminant transport  316
  10.4 Numerical solution  321
  10.5 Numerical results  325
  10.6 Conclusions  332
  References  333

Part IV  Forest Ecosystem and Footprint Modelling

Chapter 11  Flux and Concentration Footprint Modelling  339
T. Vesala, N. Kljun, Ü. Rannik, J. Rinne, A. Sogachev, T. Markkanen, K. Sabelfeld, Th. Foken and M.Y. Leclerc
  11.1 Introduction  339
  11.2 Basic  341
  11.3 Lagrangian stochastic trajectory approach  343
  11.4 Large-eddy simulation approach  346
  11.5 Closure model approach  347
  11.6 Footprints of reactive gases  349
  11.7 Future research and open questions  351
  Acknowledgements  352
  References  352

Chapter 12  Past and Future Effects of Atmospheric Deposition on the Forest Ecosystem at the Hubbard Brook Experimental Forest: Simulations with the Dynamic Model ForSAFE  357
Salim Belyazid, Scott Bailey and Harald Sverdrup
  12.1 Introduction  357
  12.2 A short description of the ForSAFE model  357
  12.3 Input  360
  12.4 Model calibration  363
  12.5 Model validation  364
  12.6 Results  367
  12.7 Conclusions  375
  Acknowledgements  375
  Endnotes  375
  References  376
  Further reading  377

Part V  Air Quality Modelling and Sensitivity Analysis

Chapter 13  The Origins of ²¹⁰Pb in the Atmosphere and its Deposition on and Transport through Catchment Lake Systems  381
Peter G. Appleby and Gayane Piliposian
  13.1 Introduction  381
  13.2 ²²²Rn production and diffusion into the atmosphere  386
  13.3 Production and fallout of atmospheric ²¹⁰Pb  387
  13.4 Post-depositional transport of atmospherically deposited ²¹⁰Pb  395
  13.5 Discussion  402
  Acknowledgements  403
  References  403

Chapter 14  Statistical Modelling and Analysis of PM2.5 Control Levels  405
Sun-Kyoung Park and Seoung Bum Kim
  14.1 Introduction  405
  14.2 Method  406
  14.3 Results and discussion  408
  14.4 Concluding remarks  415
  References  416

Chapter 15  Integration of Models and Observations: A Modern Paradigm for Air Quality Simulations  419
Adrian Sandu, Tianfeng Chai and Gregory R. Carmichael
  15.1 Introduction  419
  15.2 Integration of data and models: a modern simulation paradigm  421
  15.3 Mathematical framework for air quality modelling  422
  15.4 Chemical data assimilation  423
  15.5 Chemical sensitivity analysis and data assimilation with the CMAQ model  429
  15.6 Concluding remarks  432
  Acknowledgements  433
  Endnotes  433
  References  433

Chapter 16  Airshed Modelling in Complex Terrain  435
Peyman Zawar-Reza, Andrew Sturman and Basit Khan
  16.1 Introduction  435
  16.2 Background  436
  16.3 Application of airshed models  441
  16.4 Concluding comments  453
  References  453

Epilogue  455
Index  457
FIGURE CAPTIONS FOR THE COLOUR INSERT
Figure 1.2: Modelled atmospheric mercury deposition in the US using the CMAQ model for the year 2001 from model version 4.3, which was developed for the Clean Air Mercury Rule using a 36 km horizontal grid cell spatial resolution: (a) 2001 total mercury deposition; (b) percentage reduction in total mercury deposition achieved with a zero-out of US coal-fired utility emissions. Source: US EPA (2005a). Technical Support Document for the Final Clean Air Mercury Rule: Air Quality Modeling, 24 pp. Office of Air Quality Planning and Standards, US Environmental Protection Agency, Research Triangle Park, NC.
Figure 1.3: Example of coupled atmospheric–terrestrial–oceanic biogeochemical model for mercury. Source: Selin, N., Jacob, D., Yantosca, R., Strode, S., Jaegle, L. and Sunderland, E. (2008). Global 3-D land–ocean–atmosphere model for mercury: present-day versus preindustrial cycles and anthropogenic enrichment factors for deposition. Global Biogeochemical Cycles. 22, GB2011.
Figure 2.1: Breakdown of industrial and commercial activities causing soil contamination, as a percentage of the number of sites for each branch of activity. Source: EEA (2007). Progress in Management of Contaminated Sites. European Environment Agency, Copenhagen.
Figure 2.2: Breakdown of industries causing soil contamination, as a percentage of the number of sites for each branch of activity. Source: EEA (2007). Progress in Management of Contaminated Sites. European Environment Agency, Copenhagen.
Figure 5.8: SOMs for all sampling sites, parameters and time period. The U-matrix visualises distances between neighbouring map units, and helps to identify the cluster structure of the map. High values indicate a cluster border; uniform areas of low value indicate clusters themselves. Each component plane shows the values of one variable in each map unit; both the colour-tone pattern and colour-tone bar labelled ‘‘d’’ provide information regarding species abundance calculated through the SOM learning process. Source: Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292. With kind permission of Springer Science and Business Media.
Figure 5.9: Quality parameters similarity pattern obtained by self-organising mapping. The distance between variables on the map, connected with the analysis of colour-tone patterns, provides semi-quantitative information about the nature of the correlations between them. Source: Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292. With kind permission of Springer Science and Business Media.
Figure 6.2: Overview of the Grenland fjords. The different colours denote the horizontal compartment division in the model application.

Figure 6.5: Model simulation results for the three remediation scenarios assumed to take place in 2010. (b) Time evolution (2010–2050) of the probability of cod belonging to the five Norwegian pollution classes in (i) no remediation and (ii) full capping scenarios. The probabilities are calculated from the 2000 simulations of mean dioxin concentrations in a pooled random sample of 20 cod livers conducted in the uncertainty analysis.

Figure 7.4: The model-fitting results for BOD between observed data and modelled mean, median and 2.5% and 97.5% credible level values.

Figure 9.1: Spatial distribution patterns of C, N and P in the topsoil (0–30 cm) across the Santa Fe River Watershed.

Figure 9.3: Site-specific observations of C:N ratios in four layers across the Santa Fe River Watershed.

Figure 9.4: Spatial distribution patterns of C:N in four layers across the Santa Fe River Watershed.

Figure 9.5: Site-specific observations of N:P ratios in four layers across the Santa Fe River Watershed.

Figure 9.6: Spatial distribution patterns of N:P in four layers across the Santa Fe River Watershed.

Figure 11.2: 75%-level source area for flux (red) and concentration (blue) measurements for convective conditions. The triangle indicates the receptor location.

Figure 11.3: Three-dimensional flux footprint for strongly convective conditions. The triangle indicates the receptor location.

Figure 11.4: Examples of flux footprint (×10⁴ m²) for ground sources derived by the SCADIS model (Sogachev et al., 2002) for a sensor located at a height of 2h (twice the vegetation height) above a forest growing: (a, b) on flat terrain; (c) on a bell-shaped hill; (d) in a bell-shaped valley. Vegetation height is 15 m and leaf area index is 3 m² m⁻². The sensor is located at x, y = 300 m, 300 m, and in (c) and (d) is exactly at the hill summit and the valley bottom. The footprint in (a) was estimated without the turning wind direction. Vector plots show wind flow near the forest floor at a height of 1/3h. The arrow labelled 1.5 indicates a wind velocity vector of 1.5 m s⁻¹.
Figure 15.2: Receptor ozone sensitivity to surface O3 concentrations at (a) 0 hours (showing the target region), (b) 8 hours, (c) 16 hours and (d) 32 hours before the target time. See text for the receptor definition.

Figure 15.3: Receptor ozone sensitivity to surface NO2 concentrations at (a) 4 hours, (b) 8 hours, (c) 16 hours and (d) 32 hours before the target time. See text for the receptor definition.

Figure 15.4: Sensitivity to grid-based emissions for (a) NO, (b) HCHO, (c) olefin and (d) isoprene. The temporal profile is assumed to be unchanged during the 32-hour simulation. The units are ppbv mol⁻¹ s⁻¹ for each 12 km × 12 km grid cell. See text for the receptor definition.

Figure 15.5: Surface ozone predictions (in ppbv) at 1800Z, 5 August 2007: (a) without data assimilation; (b) with assimilation.

Figure 15.6: Surface ozone predictions (in ppbv) at 1800Z, 6 August 2007: (a) without data assimilation; (b) after assimilation.

Figure 16.4: Streamlines showing the average wind field over the depth of the drainage wind or down-slope wind layer 3 hours after sunset. Red indicates the urban extent of Christchurch, New Zealand, and blue is water. The westerly flow is a drainage wind originating from the Southern Alps (outside the frame), which interacts with drainage winds from the Port Hills (positioned to the south of the city).

Figure 16.6: Modelled planetary boundary layer (PBL) height for the Auckland region using WRF at (a) 1500 and (b) 1800 LT on 18 March 2008. The warmer colours indicate a more unstable atmosphere or higher level of convective activity.

Figure 16.12: (a) Back-trajectories illustrating air parcel movement during the build-up of air pollution at Kaiapoi during worst-case conditions; (b) the resulting establishment of clean air zones around the town. Source: (a) Google™ Earth; (b) Environment Canterbury, New Zealand.
KEY ABBREVIATIONS

(See additional abbreviations in Tables 3.1 and 3.2 on pages 62 and 69, respectively)

AHP  Analytical Hierarchy Process
AI  artificial intelligence
ANC  acid-neutralising capacity
ANOVA  analysis of variance
ARCing  adaptive resampling and combining
AWI  air/water interface
BASINS  Better Assessment Science Integrating point and Non-point Sources
BASS  Bioaccumulation and Aquatic System Simulator
BBIM  Bayesian-based inverse method
BTA  batch training algorithm
CA  cluster analysis
cdf  cumulative density function
CE  capillary electrophoresis
CFD  computational fluid dynamics
CFT  colloid filtration theory
CMAQ  Community Multiscale Air Quality
CMB  chemical mass balance
CSM  conceptual site model
CTM  chemical transport model
CV  cross-validation
CVRE  cross-validated relative error
DA  decision analysis
DIC  deviance information criterion
DSM  distributed source model
DSS  decision support systems
EC  eddy covariance
EFDC  Environmental Fluid Dynamics Code
ELECTRE  Elimination and Choice Expressing Reality
ES  expert systems
EXAMS  Exposure Analysis Modeling System
FRAMES  Framework for Risk Analysis of Multimedia Environmental Systems
GBMM  Grid Based Mercury Model
GCM  general circulation model
HUDTOX  Hudson River Toxic Chemical Model
HYSPLIT  Hybrid Single-Particle Lagrangian Integrated Trajectory model
IBL  internal boundary layer
IPM  Integrated Planning Model
KBS  knowledge-based systems
KS  Kolmogorov–Smirnov test
LEA  local equilibrium assumption
LES  large eddy simulation
MCDA  multi-criteria decision analysis
MCMC  Markov chain Monte Carlo
MLE  maximum likelihood estimation
PBDEs  polybrominated diphenyl ethers
PAH  polyaromatic hydrocarbon
PC  principal component
PCA  principal component analysis
PCB  polychlorinated biphenyl
pdf  probability density function
POPs  persistent organic pollutants
PRESS  predictive residual sum of squares
PROMETHEE  Preference Ranking Organisation METHod for Enrichment Evaluations
REV  representative elemental volume
SMART  Simple Multi-Attribute Rating Technique
SOMs  self-organising maps
SQP  sequential quadratic programming
STA  sequential training algorithm
VADOFT  Vadose Zone Flow and Transport Model
WASP  Water Quality Analysis Simulation Program
THE EDITOR
Grady Hanrahan received his PhD in environmental analytical chemistry from the University of Plymouth, UK. With experience in directing undergraduate and graduate research, he has taught in the fields of environmental science and analytical chemistry at California State University, Los Angeles and California Lutheran University. He has written or co-written numerous technical papers and is the author of Environmental Chemometrics: Principles and Modern Applications (CRC Press, 2009). Dr Hanrahan has also edited Modelling of Pollutants in Complex Environmental Systems, Volume I (ILM Publications, 2009) and co-edited Chemometric Methods in Capillary Electrophoresis (John Wiley & Sons, 2010). He actively employs environmental modelling techniques in his research and teaching activities.
THE CONTRIBUTORS
Ian J. Allan Norwegian Institute for Water Research (NIVA) Oslo, Norway
Peter G. Appleby Environmental Radiometric Research Centre and Department of Mathematical Sciences University of Liverpool Liverpool, UK
Aleksander Astel Biology and Environmental Protection Institute Pomeranian Academy Słupsk, Poland
Scott Bailey USDA Forest Service, Northern Research Station Center for the Environment Plymouth State University Plymouth, New Hampshire, USA
Aisha Bello-Dambatta School of Engineering, Computing and Mathematics University of Exeter Devon, UK
Salim Belyazid Swedish Environmental Research Institute (IVL) Gothenburg, Sweden
Christine M. Bliss USDA Forest Service, Southern Research Station Pineville, Louisiana, USA
Gregory L. Bruland College of Tropical Agriculture and Human Resources Department of Natural Resources and Environmental Management University of Hawai’i Manoa Honolulu, Hawaii, USA
Gregory R. Carmichael Center for Global and Regional Environmental Research The University of Iowa Iowa City, Iowa, USA
Tianfeng Chai Department of Computer Science Virginia Polytechnic University Blacksburg, Virginia, USA
Nicholas B. Comerford Soil and Water Science Department University of Florida Gainesville, Florida, USA
Cathy Fehrenbacher US Environmental Protection Agency Office of Prevention, Pesticides and Toxic Substances Office of Pollution Prevention and Toxics Washington, DC, USA
Th. Foken Department of Micrometeorology University of Bayreuth Bayreuth, Germany
Timothy R. Ginn Department of Civil & Environmental Engineering University of California, Davis California, USA
Sabine Grunwald Soil and Water Science Department University of Florida Gainesville, Florida, USA
David Hrdy US Environmental Protection Agency Office of Prevention, Pesticides and Toxic Substances Office of Pesticide Programs Washington, DC, USA
Bryan J. Hubbell US Environmental Protection Agency Office of Air and Radiation Office of Air Quality Planning and Standards North Carolina, USA
Akbar A. Javadi School of Engineering, Computing and Mathematics University of Exeter Devon, UK
Basit Khan Centre for Atmospheric Research Department of Geography University of Canterbury Christchurch, New Zealand
Seoung Bum Kim Division of Information Management Engineering Korea University Seoul, Korea
N. Kljun Department of Geography Swansea University Swansea, UK
Christopher D. Knightes US Environmental Protection Agency Office of Research and Development Ecosystems Research Division Athens, Georgia, USA
M.Y. Leclerc Laboratory for Environmental Physics The University of Georgia Griffin, Georgia, USA
Yong Liu College of Environmental Sciences and Engineering Peking University Beijing, China and School of Natural Resources and Environment University of Michigan Michigan, USA
T. Markkanen Climate and Global Change Research Finnish Meteorological Institute Helsinki, Finland
Arash Massoudieh Department of Civil Engineering Catholic University Washington, DC, USA
Kristoffer Næs Norwegian Institute for Water Research (NIVA) Oslo, Norway
Mohaddeseh M. Nezhad School of Engineering, Computing and Mathematics University of Exeter Exeter, Devon, UK
Sun-Kyoung Park Department of Environmental Engineering Seoul National University of Technology Seoul, Korea
Gayane Piliposian Environmental Radiometric Research Centre and Department of Mathematical Sciences University of Liverpool Liverpool, UK
Ü. Rannik Department of Physics University of Helsinki Helsinki, Finland
J. Rinne Department of Physics University of Helsinki Helsinki, Finland
K. Sabelfeld Institute of Computational Mathematics and Mathematical Geophysics Russian Academy of Sciences Novosibirsk, Russia
Tuomo M. Saloranta Norwegian Institute for Water Research (NIVA) Oslo, Norway
Adrian Sandu Department of Computer Science Virginia Polytechnic University Blacksburg, Virginia, USA
Mitchell Small Department of Civil and Environmental Engineering Department of Engineering and Public Policy Carnegie Mellon University Pittsburgh, Pennsylvania, USA
A. Sogachev Wind Energy Division Risø National Laboratory for Sustainable Energy Technical University of Denmark Roskilde, Denmark
Neil A. Stiber US Environmental Protection Agency Office of the Science Advisor Washington, DC, USA
Andrew Sturman Centre for Atmospheric Research Department of Geography University of Canterbury Christchurch, New Zealand
Elsie M. Sunderland Harvard University School of Engineering and Applied Sciences Boston, Massachusetts, USA
Harald Sverdrup Department of Chemical Engineering Lund University Lund, Sweden
Jeanne M. VanBriesen Department of Civil and Environmental Engineering Carnegie Mellon University Pittsburgh, Pennsylvania, USA
Gustavo M. Vasques Soil and Water Science Department University of Florida Gainesville, Florida, USA
T. Vesala Department of Physics and Department of Forest Ecology University of Helsinki Helsinki, Finland
Katherine von Stackelberg Harvard University School of Public Health Center for Risk Analysis Boston, Massachusetts, USA
Christopher Weber Department of Civil and Environmental Engineering Carnegie Mellon University Pittsburgh, Pennsylvania, USA
Eric Weber US Environmental Protection Agency Office of Research & Development National Exposure Research Laboratory Athens, Georgia, USA
Pamela R.D. Williams E Risk Sciences, LLP Boulder, Colorado, USA
Jessica Wilson Department of Civil and Environmental Engineering Carnegie Mellon University Pittsburgh, Pennsylvania, USA
Valerie Zartarian US Environmental Protection Agency Office of Prevention, Pesticides and Toxic Substances Office of Pesticide Programs Washington, DC, USA
Peyman Zawar-Reza Centre for Atmospheric Research Department of Geography University of Canterbury Christchurch, New Zealand
FOREWORD
‘Globalisation’ is often understood by the general public as a stronger interconnection between international economies, and thus the ease with which products can be exchanged around the world: we can drink an Italian mineral water in New York, for example, or wear, in Paris, a pair of jeans made in Vietnam. But there is much more to it than this. We have also made pollution a global phenomenon. This is true not only of carbon dioxide emissions, which are changing the world’s climate, but also of more traditional pollutants such as sulphur oxides, which contribute to acid rain and consequently to the acidification of soil and water far from the emission sources. Nitrogen oxide emissions, too, originating largely from the motor vehicles that have crowded industrialised countries and are rapidly jamming the rest of the world, are projected to increase in 2010 by about 20% compared with 1990. The case studies presented in this book provide a vivid representation of these phenomena. They range from the modelling of PCBs in the Hudson River, to the pollution effects of a tsunami in Thailand, to lead in the sediments of an English lake, to air pollution in Auckland and Tehran. The globalisation of environmental problems will sooner or later force a change in the definition of ‘pollution’ itself. Since there is no longer a single drop of water or breath of air that is not used, modified, diverted or sequestered by humans, it makes no sense to define ‘pollution’ as a deviation from a ‘natural’ condition: we would have to compare with a ‘reference’ environmental condition that was measured in the past and will almost never be achievable again in the future. This is also why we will have to replace the intuitive static concept of ‘pollution’ with the more dynamic concept of ‘sustainability’. We cannot avoid modifying the environment where we live; every species on Earth does so.
The problem is how to modify the environment in a reversible way – that is, in a way from which other natural processes can recover. We will continue to discharge ‘pollutants’ into the environment, but we must find a way to do so at a rate such that biochemical processes can degrade them and, in the long run, return our wastes to the natural cycle with minimal impact. Fortunately, another effect of globalisation is the wide diffusion of knowledge and, as a consequence, of consciousness. Today, we have the potential to learn about pollution problems at our fingertips, and in a few clicks we can download models that can greatly speed up at least a screening study of most environmental systems. The first part of this book provides an extensive account of the world’s largest set of public domain environmental models developed by the US EPA and details about how they
are now an intrinsic part of environmental law. Indeed, using Google™ Scholar to search for papers containing the words ‘complex’, ‘environmental’, ‘systems’ and ‘pollution’ returns 850,000 results – 50,000 since the year 2000! This is an amazing body of knowledge, and it demonstrates how widespread the concern for pollution problems is and how diffuse the activity on the topic has become. However, consciousness has also added to the complexity of the problem. When it comes to the safeguarding or the correct exploitation of environmental resources, the number of interested stakeholders has multiplied, and now extends to individual citizens. Any decision has therefore become a tremendous task, because the objectives and constraints that must be taken into account are more diversified and thus more conflicting. We have to accept that an ‘optimal’ solution satisfying all parties to the maximum degree does not exist, and that the integrated modelling tools presented in the following chapters can be used to support discussions, illustrate options and evaluate trade-offs, but cannot indicate a definitive alternative. This also has important implications for the concept of ‘uncertainty’, which is amply analysed in this book. Models are evidently only a simplified representation of the real world and, as such, are just approximations. Traditionally, the appropriateness and accuracy of simulation models have been evaluated by comparing their results with field data in statistical terms. However, if these models are to be part of an integrated decision support tool, we will need to define other performance measures, such as the risk of taking one decision, or the regret of selecting one option and discarding others.
Much remains to be done in this area to guarantee a more sustainable human development, particularly for the vast majority of mankind that has so far had only limited access to natural resources, and thus bears a much smaller responsibility for the present disruption of environmental equilibria. This book is nonetheless a good example of the depth and breadth that environmental modelling has reached today. For those involved in the discipline during the last two or three decades, it is exciting to realise how far we have progressed; for students and neophytes, it is fascinating to see how diverse methods, ranging from partial differential equations to statistical approaches, can be applied to tackle so many environmental problems effectively. The solutions to global problems, however, require not only the joint application of our best technical tools, but also the dedication of our efforts at all levels – as teachers, students, researchers, decision-makers and citizens. We should always recall the heritage of wisdom left to us by Native Americans: ‘We do not inherit the Earth from our Ancestors; we borrow it from our Children.’
Giorgio Guariso Politecnico di Milano, Italy Fellow of the International Environmental Modelling & Software Society
ACKNOWLEDGEMENTS
I express my sincere gratitude to Lindsey Langston and ILM Publications for believing in the concept of this book, and to Lionel Browne for his efficient editing. I am especially grateful to the expert contributors for making this an informative and engaging book for professionals in the field. I also thank Sarah Muliadi and Jennifer Arceo for painstakingly assisting in the manuscript formatting process. Finally, I am thankful for my family’s support of my often-consuming academic endeavours.
PREFACE
This book showcases modern computational approaches to environmental modelling in a vast array of application areas. It includes chapters from an extensive list of global experts actively engaged in adaptive modelling, simulation and decision-support activities in numerous complex environmental systems. The invited contributors have presented timely topics and applications of the study of environmental contaminants, with detailed discussion of the mathematical concepts used. Part I presents two detailed chapters reviewing the regulatory background and policy applications driving the use of various types of environmental fate, bioaccumulation and exposure assessment models at the United States Environmental Protection Agency. This work is complemented by a chapter on contaminated land decision support models in the contemporary European setting. Part II presents key studies on modern aquatic modelling endeavours, and discusses the importance of uncertainty analysis in such efforts. Included in these chapters are applications highlighting multivariate statistical techniques, chemical speciation, and the use of Bayesian-based methods for load estimation. Part III examines efforts concentrated on soil, sediment and subsurface modelling, with extended discussions on pollutant fate and transport. Included are studies examining colloid-facilitated transport in unsaturated porous media and the modelling of solute transport in soils. Part IV contains two novel chapters on footprint and forest ecosystem modelling. The first defines footprint modelling, and details relative contributions from each element of the surface area source/sink to the measured vertical flux or concentration. The second details the ForSAFE model and, in particular, the validation of this model in the Hubbard Brook Forest ecosystem. Finally, Part V presents four key chapters on air quality modelling and sensitivity analysis.
Topics include the origins of ²¹⁰Pb in the atmosphere and its deposition on and transport through lake systems, statistical modelling and analysis of PM2.5 control levels, a modern paradigm for air quality simulations, and airshed modelling in complex terrain. Overall, this second volume of work presents an extended look at modern environmental modelling efforts through a widened cast of global experts and application areas. It is intended to complement Modelling of Pollutants in Complex Environmental Systems, Volume I, and to showcase an even greater array of environmental modelling applications. This book will likely be of benefit to those intimately involved in environmental modelling, including academics, policymakers, regulatory
agencies, technicians, health professionals, engineers and advanced undergraduate and graduate students.

Grady Hanrahan
Los Angeles, October 2009
PART I Decision Support and Assessment Modelling
CHAPTER 1
Environmental Fate and Bioaccumulation Modelling at the US Environmental Protection Agency: Applications to Inform Decision-Making

Elsie M. Sunderland, Christopher D. Knightes, Katherine von Stackelberg and Neil A. Stiber
1.1 INTRODUCTION
Environmental fate and bioaccumulation models provide a framework for understanding how biogeochemical processes affect pollutant transport and bioavailability in the atmosphere and aquatic and terrestrial systems. They provide a basis for predicting the impacts of human activities on natural ecosystems and biological exposures. Models can also be viewed as testable hypotheses of processes driving the fate and bioaccumulation of environmental contaminants. The overall hypothesis embodied by any given model is tested when the model is applied to a natural system and compared with observational data and/or other models with different functional forms. In this way, models are valuable tools for improving understanding of how ecosystems function, and for prioritising areas for future research. Environmental models are also used as applied tools for environmental management. The application of environmental models to inform policy is an iterative process where, despite pervasive scientific uncertainties, decisions must be made. Models assist in synthesising best-available scientific understanding to predict how different policy choices will affect environmental and human health risks. Given the diversity of environmental modelling applications across disciplines, defining a ‘‘regulatory environmental model’’ can be challenging. An expert panel convened by the US National Research Council (NRC) provides the following broad definition for a model (NRC, 2007): ‘‘a simplification of reality that is constructed to gain insights into select attributes of a particular physical, biological, economic, or social system’’.
Modelling of Pollutants in Complex Environmental Systems, Volume II, edited by Grady Hanrahan. © 2010 ILM Publications, a trading division of International Labmate Limited.
The subset of models most often used to provide information for policy determinations is computational models. The NRC panel defines these models as ‘‘those that express the relationships among components of a system using mathematical relationships’’, which use measurable variables, numerical inputs, and mathematical relationships to produce quantitative outputs (NRC, 2007). This definition of computational models is intentionally broad, to allow for the diversity of regulatory environmental models, which range from simple, site-specific, empirical relationships to computationally intensive, process-based simulations on the global scale (US EPA, 2009a). The use of environmental models in the decision-making process has increased over the last several decades. In the 1970s, technology-based standards in the US dominated the laws governing air, surface and subsurface water, and drinking water protection (NRC, 2007). In 1980, a Supreme Court decision on workplace standards for benzene exposure catalysed a shift from technology-based standards to regulatory assessments that consider the nature and significance of risks posed by environmental contaminants. In this case, the Supreme Court struck down a standard that would have reduced benzene concentrations as far as technologically possible, because it did not consider whether existing concentrations posed a ‘‘significant risk of material health impairment’’ (448 US 607; 10 ELR 20489). This decision implied that some form of quantitative assessment is needed to decide whether a risk is large enough to warrant regulation (Charnley and Eliott, 2002). Another factor driving demand for quantitative model-based policy analysis in the US is the mandated development of regulatory impact assessments (RIAs).
According to Executive Order 12,866, issued in 1993, RIAs must include a formal cost–benefit analysis for any new regulation with costs exceeding $100 million, or with a significant impact on the economy and/or society (Exec. Order No. 12,866, 3 CFR § 638, 1993). Such an approach is meant to improve the efficiency of environmental regulations by showing when the benefits of a regulation are likely to exceed its costs, and whether alternatives to that regulation are more or less costly (Hahn et al., 2000). The increasing prevalence of environmental modelling applications in policy analysis also reflects the need for decisions to keep pace with rapidly growing databases of environmental information. These databases have been enhanced by real-time monitoring systems and relatively new data collection methods using satellites and remote sensing. In addition, advances in computing over the past several decades according to Moore’s law (Moore, 1965) have greatly enhanced capabilities for simulating complex environmental fate and transport processes over large spatial scales. Figure 1.1 illustrates the role of fate and transport models in the integration of environmental and social/economic information needed for risk assessments and RIAs. Predicting how biogeochemical processes affect the fate and bioavailability of contaminants is a critical component of risk assessments that assess human exposures, health effects and ecosystem impacts. The purpose of risk assessments is to characterise potential adverse impacts associated with particular activities or events, where these impacts are evaluated based on both the magnitude of effects and the likelihood of their occurrence (Suter, 2007). The philosophy of protecting public health with an ‘‘adequate margin of safety’’ is specified in a variety of statutes, including the total
Figure 1.1: Basic modelling elements relating human activities and natural systems to environmental impacts. [Figure shows human activities and natural system processes linked through emission, fate and transport, and exposure to human health/environment response, with resulting economic and non-economic impacts.]
Source: Reprinted with permission from Models in Environmental Regulatory Decision Making, © 2007 by the National Academy of Sciences (NRC, 2007). Courtesy of the National Academies Press, Washington, DC.
maximum daily load (TMDL) programme within the Clean Water Act (CWA section 303(d)(1)(C)) and the Clean Air Act (CAA section 109(b)(1)). This concept is now the primary objective of quantitative risk assessments, and also introduces the idea of uncertainty in modelling assessments. Risk assessments must recommend emissions limits within an adequate margin of safety for specific contaminants based on their environmental exposure pathways, bioaccumulation potential, and persistence in the environment. Findings need to stand up under the extensive scrutiny associated with judicial review in the US, which places a significant burden of proof on regulatory agencies for demonstrating the human health risks associated with exposures to environmental contaminants (Charnley and Eliott, 2002). Models used in regulatory applications face more scrutiny than those used internally for screening-level assessments and research models. For example, one senior agency official in the US EPA’s Office of Water estimated that only about 10% of the information compiled for regulatory actions is needed to make the decision, and the remaining 90% is needed to support the judicial review process and ensure that the decision stands up in court (Charnley and Eliott, 2002). All final agency actions are subject to judicial review (McGarity and Wagner, 2003). An action is normally not considered final until it has direct consequences for the person or entity attempting to
challenge that action in court, and thus many modelling applications at the US EPA are not subject to this level of review. This chapter reviews the regulatory background and policy applications driving the use of various types of environmental fate and bioaccumulation model at the US EPA (air quality, surface water and watersheds, contaminated sites). Comparing current research frontiers with contemporary policy applications within each modelling area helps to illustrate the interactions between research fields and regulatory models intended for application by environmental practitioners. We highlight, in particular, the interrelationship between emerging research, ongoing data collection, and regulatory model improvement efforts. The chapter concludes with a summary of the US EPA’s recent efforts to enhance the quality and transparency of modelling applications used to inform environmental regulations.
1.2 ATMOSPHERIC FATE AND TRANSPORT MODELLING
National Ambient Air Quality Standards (NAAQS) for criteria pollutants must ‘‘protect public health’’ ‘‘allowing an adequate margin of safety’’.
Clean Air Act (CAA), 42 USC § 7409(b)(1)
1.2.1 Regulatory background: the Clean Air Act
Air quality models are used to calculate pollutant concentrations and deposition rates using information on emissions, transport, and atmospheric chemistry. Application of models during the regulatory process helps in identifying source contributions to air quality problems, and in evaluating the effectiveness of regulations in reducing human and ecological exposures. National standards for air quality were established in 1970 with the Clean Air Act (CAA). Title I of the CAA put in place the National Ambient Air Quality Standards (NAAQS) for area, emission and stationary sources of six ‘‘criteria pollutants’’ considered harmful to public health and the environment.1 NAAQS are promulgated at the Federal level and implemented through state implementation plans (SIPs) that are approved by the US EPA. Generally, SIPs require states to model the impacts of proposed emissions reduction targets to determine whether reductions in ambient atmospheric concentrations are acceptable. These plans include detailed descriptions of how a proposed emission reduction programme will bring states into compliance with the NAAQS. Regional-scale modelling has become an integral part of SIP development for ozone and fine particulate matter (PM2.5), and these standards are reviewed every 5 years. Other areas of the CAA also use information from air quality models to develop regulatory air quality thresholds and implement standards. Title II of the CAA regulates mobile sources, and Title III requires that the US EPA list hazardous air pollutants (HAPs) that could ‘‘cause, or contribute to, an increase in mortality or an increase in serious irreversible or incapacitating reversible illness’’. Title IV regulates acid deposition, and Titles V and VI deal with permits and stratospheric ozone
protection. The US EPA listed only eight HAPs in the first 18 years after the CAA was established, prompting Congress to identify an additional 188 with known adverse effects on human health that require regulation. Following the 1990 amendments to section 112 of the CAA, the US EPA was required to list and regulate, on a prioritised schedule, ‘‘all categories and subcategories of major sources and area sources that emit one or more HAPs’’.
1.2.2 Model selection and evaluation criteria
Because of the widespread use of air quality models for Federal and state-level regulatory activities, the US EPA’s Office of Air and Radiation (OAR) has developed extensive guidelines for model selection, application and evaluation.2 Three main types of modelling application support regulatory activities:
• Dispersion models are used in a state’s permitting processes to determine compliance with NAAQS and other regulatory requirements. These models use emissions data and meteorological inputs to predict pollutant concentrations at downwind locations.
• Photochemical models are used to simulate pollutant concentrations and deposition resulting from multiple emissions sources at a variety of spatial scales. These models are widely used to determine the effectiveness of emissions control strategies at keeping pollution levels below regulatory thresholds, or at reducing atmospheric deposition rates.
• Receptor models are used to identify the contribution of different emissions sources and source regions to deposition and ambient concentrations at a given receptor. Unlike photochemical and dispersion air quality models, receptor models use the chemical and physical characteristics of gases and particles measured at sources and receptors.
Table 1.1 provides a summary of some of the most frequently used models within each class. For dispersion modelling applications, the US EPA lists preferred/recommended models that must be used by the state in the development of SIPs (Appendix W, 40 CFR Part 51, Vol. 70, No. 216, 68218–68261). Alternative models are also provided, but must be justified on a case-by-case basis.3 Photochemical models are needed to develop SIPs and determine the effectiveness of different regulatory strategies for pollutants such as O3, which is a secondary product of other atmospheric precursors. Two major classes of photochemical models are:
• Lagrangian trajectory models, which simulate the emission, transport and expansion, chemistry, and deposition of individual parcels of pollutants moving through the atmosphere;
• Eulerian models, which simulate the emission, advective transport, diffusion, chemistry, and deposition of pollutants through a three-dimensional matrix of finite grid volumes (i.e., ‘‘boxes’’).
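The Eulerian "box" idea can be illustrated with a minimal one-dimensional sketch. This is purely didactic: the periodic grid, upwind scheme and rate constants below are invented for illustration and bear no relation to the actual numerics of CMAQ or any other regulatory model.

```python
import numpy as np

def step(c, u, D, k, dx, dt):
    """Advance cell concentrations one time step on a periodic 1-D grid:
    upwind advection (for u > 0), central-difference diffusion, and a
    first-order chemical loss term."""
    adv = -u * (c - np.roll(c, 1)) / dx
    dif = D * (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx ** 2
    return c + dt * (adv + dif - k * c)

c = np.zeros(50)
c[10] = 100.0                     # pollutant mass emitted into one cell
for _ in range(200):
    c = step(c, u=1.0, D=0.5, k=0.01, dx=1.0, dt=0.2)
# Advection and diffusion only move mass between cells; the domain total
# declines solely through the chemistry term, by (1 - k*dt) per step.
```

Real photochemical grid models solve the same kind of coupled transport–chemistry balance, but in three dimensions, with dozens of interacting species and meteorology-driven winds.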
Table 1.1: Overview of frequently used air quality models at the US EPA. All information adapted from: http://www.epa.gov/ttn/scram/aqmindex.htm.

Preferred/recommended dispersion models

AERMOD: Steady-state plume model that incorporates air dispersion based on planetary boundary layer turbulence structure and scaling concepts, including vertical treatment of sources, and varied terrain. AERMOD was promulgated as a replacement to ISC3 in 2006.
CALPUFF: A non-steady-state puff dispersion model that simulates the effects of varying meteorological conditions on pollution transport, transformation, and removal. CALPUFF can be applied for long-range transport over complex terrain. It includes algorithms for subgrid-scale effects such as terrain impingement, as well as pollutant removal due to wet and dry deposition, chemical transformations, and effects of particulate matter on visibility.
BLP: A Gaussian plume dispersion model designed to handle unique modelling problems associated with aluminium reduction plants and other industrial sources where plume rise and downwash effects from stationary line sources are important.
CALINE3: A steady-state Gaussian dispersion model designed to determine air pollution concentrations at receptor locations downwind of highways located in relatively homogeneous terrain. CALINE3 is incorporated into the more refined CAL3QHC and CAL3QHCR models.
CAL3QHC/CAL3QHCR: A CALINE3-based CO model with queuing and hotspot calculations, and with a traffic model to calculate delays and queues that occur at signalised intersections; CAL3QHCR is a more refined version based on CAL3QHC that requires local meteorological data.
CTDMPLUS: Complex Terrain Dispersion Model Plus Algorithms for Unstable Situations (CTDMPLUS) is a refined point-source Gaussian air quality model for use in all stability conditions for complex terrain. The model contains, in its entirety, the technology of CTDM for stable and neutral conditions. CTSCREEN is the screening version of CTDMPLUS.
OCD: Offshore and Coastal Dispersion Model Version 5 (OCD) is a straight-line Gaussian model developed to determine the impact of offshore emissions from point, area or line sources on the air quality of coastal regions. OCD incorporates over-water plume transport and dispersion, as well as changes that occur as the plume crosses the shoreline. Hourly meteorological data are needed from both offshore and onshore locations.

Examples of frequently used photochemical models

CMAQ: The Community Multiscale Air Quality (CMAQ) model is a regional Eulerian model that includes capabilities for conducting urban- to regional-scale simulations of multiple air quality issues, including tropospheric ozone, fine particles, toxics, acid deposition, and visibility degradation.
CAMx: The Comprehensive Air quality Model with extensions (CAMx) is a Eulerian photochemical dispersion model that simulates air quality over many geographic scales. The model treats a wide variety of inert and chemically active pollutants, including ozone, particulate matter, inorganic and organic PM2.5/PM10, mercury and other toxics. CAMx also has plume-in-grid and source apportionment capabilities.
REMSAD: The Regional Modelling System for Aerosols and Deposition (REMSAD) was developed to understand distributions, sources and removal processes relevant to regional haze, particulate matter and other airborne pollutants, including soluble acidic components and toxics.
UAM-V1: The UAM-V1 Photochemical Modeling System has been used widely for air quality studies focusing on ozone. It is a three-dimensional photochemical grid model designed to calculate concentrations of both inert and chemically reactive pollutants by simulating physical and chemical processes in the atmosphere affecting pollutant concentrations. This model is typically applied to model air quality ‘‘episodes’’ – periods during which adverse meteorological conditions result in elevated ozone pollutant concentrations.

Examples of frequently used receptor models

CMB: The Chemical Mass Balance (CMB) model is based on an effective-variance least-squares (EVLS) method. CMB requires speciated profiles of potentially contributing sources and the corresponding ambient data from analysed samples collected at a single receptor site. CMB is ideal for localised non-attainment problems, and has proven to be a useful tool in applications where steady-state Gaussian plume models are inappropriate, as well as for confirming or adjusting emissions inventories.
UNMIX: The US EPA UNMIX model ‘‘unmixes’’ the concentrations of chemical species measured in the ambient air to identify the contributing sources. Chemical profiles of the sources are not required, but instead are generated internally from the ambient data using a mathematical formulation based on a form of factor analysis. For a given selection of species, UNMIX estimates the number of sources, the source compositions, and source contributions to each sample.
PMF: The Positive Matrix Factorization (PMF) technique is a form of factor analysis where the underlying co-variability of many variables (e.g., sample-to-sample variation in PM species) is described by a smaller set of factors (e.g., PM sources) to which the original variables are related. The structure of PMF permits maximum use of available data and better treatment of missing and below-detection-limit values.
Eulerian models use a fixed spatial grid, whereas Lagrangian models have a moving frame of reference, and spatial scales depending on the sources and receptors of interest. Eulerian grid models solve numerical expressions describing vertical and horizontal transport, chemical, and emission processes by dividing the modelling domain into a large number of cells, which interact through advective and diffusive transport (Cooter and Hutzell, 2002). Photochemical air quality models often use a three-dimensional Eulerian grid to fully characterise physical processes in the atmosphere and predict the species concentrations throughout the entire model domain. The Community Multiscale Air Quality (CMAQ) model (Byun and Ching, 1999) is one of the US EPA’s most widely used models. CMAQ is a regional-scale Eulerian model that combines current knowledge of atmospheric chemistry and meteorological processes affecting air quality to model the fate and transport of ozone, particulates, toxics, and acid deposition. CMAQ was originally developed for PM2.5 and O3 assessments, and has been extended for a range of other applications, including mercury (Bullock and Brehme, 2002), atrazine (Cooter and Hutzell, 2002), aerosols (Binkowski and Roselle, 2003; Nolte et al., 2008), NOx (Han et al., 2009), ozone (Sahu et al., 2009; Yu et al., 2009), and many others. The National Oceanic and Atmospheric Administration’s (NOAA) Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model (Draxler and Hess, 1998) is an example of a Lagrangian air quality model that calculates pollutant dispersion by assuming either puff or particle dispersion. In the puff model, puffs expand until they exceed the size of the meteorological grid cell (either horizontally or vertically) and then split into several new puffs, each with its share of the pollutant mass.
In the particle model, a fixed number of initial particles are advected within the model domain by the mean wind field and a turbulent component. The NOAA HYSPLIT model is well suited for deriving source–receptor relationships for contaminants (Cohen et al., 2002, 2004) and for tracing plumes of emissions from wildfires and other sources.4 Receptor models (Table 1.1) are data-intensive methods used for identifying and quantifying contributions of pollutants to concentrations at different receptors, and for source apportionment. These models are therefore a natural complement to other air quality models, and are used in SIPs for identifying sources contributing to air quality problems. For example, the Chemical Mass Balance (CMB) model fully apportions receptor concentrations to chemically distinct sources, using a source profile database. UNMIX and PMF (Table 1.1) internally generate source profiles from the ambient data. Greater data abundance and quality (more species, stratified measurements by particle size, and shorter sampling intervals) therefore greatly improve the utility of receptor modelling. Receptor model algorithms are continuously improved to take advantage of new datasets.
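The core idea behind CMB-style apportionment can be sketched as an ordinary least-squares problem. The two source profiles and contributions below are made up for illustration; the real CMB software uses effective-variance weighting and measured, speciated profiles rather than plain least squares.

```python
import numpy as np

# Rows: chemical species; columns: candidate sources (e.g., traffic, soil).
# Each column is a "source profile": the fraction of that source's emitted
# mass appearing as each species. Values here are hypothetical.
profiles = np.array([
    [0.60, 0.05],
    [0.30, 0.15],
    [0.10, 0.80],
])

true_contrib = np.array([5.0, 2.0])     # source contributions, ug/m3
ambient = profiles @ true_contrib       # species measured at the receptor

# Solve ambient = profiles @ x for the unknown source contributions x.
est, *_ = np.linalg.lstsq(profiles, ambient, rcond=None)
```

With noise-free synthetic data the least-squares solution recovers the contributions exactly; with real measurements, the fit quality and profile collinearity determine how well sources can be separated.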
1.2.3 Recent regulatory applications of air models
The Clean Air Interstate Rule (CAIR) and the Clean Air Mercury Rule (CAMR) are two recent examples of Federal rules (both promulgated in 2005) under the CAA that extensively used results from air quality models (US EPA, 2005a, 2005b). CAIR
proposed an overall cap for NOx and SOx emissions from electric utilities in the US, combined with a trading programme designed to optimise the economic efficiency of emissions reductions. CAMR was the first-ever rule in the US regulating mercury emissions from coal-fired utilities. In 2008, the Court remanded CAIR based on concerns about the implementation and effectiveness of the emissions trading programme, but left the rule in place until the US EPA issues a replacement (United States Court of Appeals, 2008b). One of the Court’s concerns related to an individual state’s ability to achieve compliance with air quality standards when no set limits were enforced on individual point sources. Although modelling simulations performed as part of the rulemaking indicated that the cap and trade programme would be successful, the Court did not consider this evidence sufficient to meet regulatory requirements. A review of previous legal challenges to the Agency’s model-based decision-making (McGarity and Wagner, 2003) showed that, in prior rulings, science-based challenges to the Agency were generally unsuccessful unless they could demonstrate choices that were ‘‘arbitrary and capricious’’ (i.e., lacked a logical rationale), and that the underlying science of the Agency’s modelling applications was rarely questioned. In the case of CAIR, the specific modelling application in question was the Integrated Planning Model (IPM). IPM provides 20–30-year forecasts of least-cost energy expansion, electricity dispatch, and emissions control strategies based on extensive historical data representing the US electric power system.
The model determines the least-cost method of meeting energy and peak demand requirements over a specified period (e.g., 2010 to 2030), and considers a number of key operating or regulatory constraints (e.g., emission limits, transmission capabilities, renewable generation requirements, fuel market constraints) that are placed on the power and fuel markets. The Court’s concerns with IPM results thereby challenged the US EPA’s integrated modelling assessment that combined behavioural assumptions with air quality simulations. Such a precedent may indicate that integrated modelling activities will face greater legal scrutiny than modelling within a single disciplinary domain, such as the air quality modelling performed using CMAQ and reported in the Technical Support Documents (TSD) of the final CAIR rule (US EPA, 2005b). In 2008, the Court also remanded and vacated CAMR (United States Court of Appeals, 2008a) owing to what it deemed to be inappropriate regulation of mercury emissions sources under section 111 of the CAA (which allows an emissions trading programme) rather than section 112 intended for regulating HAPs. From a scientific perspective, the RIA for the CAMR rule provides another example of an integrated modelling analysis for decision-making (US EPA, 2005c). Because mercury exposure is dominated by the consumption of marine and freshwater fish (Mahaffey et al., 2004; Sunderland, 2007), analysing how emissions controls may reduce health impacts on wildlife and humans requires an understanding of atmospheric and aquatic fate and transport, bioaccumulation and human exposure pathways. Modelling conducted as part of the rule analysis therefore included: CMAQ model simulations that assessed the impact of emissions controls on deposition across the contiguous US (Figure 1.2 – see colour insert); water body fate, transport and bioaccumulation simulations based on the SERAFM (Knightes, 2008), WASP (Ambrose, 1988), and BASS (Barber,
2006) models for a variety of freshwater ecosystems; and human exposure analyses based on freshwater fish consumption rates and preferences (US EPA, 2005c). Integrated environmental modelling applications, such as the example provided by the CAMR rule analysis, are needed to assess the effectiveness of regulations in terms of human health and ecological indicators. Demand for such modelling applications represents a shift towards outcome-orientated decision-making by government agencies. For example, the CAA also requires the US EPA to assess relationships between exposure to other (i.e., in addition to NO2 and SO2 ) criteria air pollutants (ozone, particulate matter, carbon monoxide, lead) and human health impacts. To do this, the US EPA must combine atmospheric transport and chemistry models with epidemiological models for morbidity and mortality. Modelling performed as part of the CAMR analysis also illustrates some of the challenges associated with combining multiple models with differing spatial and temporal scales. Mercury deposition originates from both local and long-range sources. Relative contributions of sources within the regulatory domain (country, state, etc.) to deposition depend on the magnitude and composition of mercury released, and on physical and chemical factors that affect the transport and conversion of global atmospheric mercury sources to water-soluble forms that deposit rapidly (Lin and Pehkonen, 1999). Across the entire US, a large fraction (70–80%) of the mercury deposited originates from natural and global sources (Seigneur et al., 2004; Selin et al., 2007). However, individual ecosystems in proximity to point sources may receive a much greater contribution from domestic sources that will be affected by emissions control. Because of the significance of global sources for mercury deposition, regional air quality models such as CMAQ must use boundary conditions from global chemical transport models such as GEOS-Chem (Selin et al., 2007). 
Differences in rate constants and underlying algorithms between regional and global modelling frameworks can result in erroneous results at the boundaries of the modelling domain. For example, the modelling simulation in Figure 1.2 erroneously shows enhanced mercury deposition at the eastern extreme of the CMAQ model over the Atlantic Ocean where it meets the boundary conditions provided by the global GEOS-Chem model. Finally, mercury is integrated globally both through the atmosphere and also in the human exposure pathway through commercial market fisheries (Sunderland, 2007). Analysis of the exposure pathway must therefore consider the effects of domestic emissions on global fisheries, in addition to domestic water bodies, greatly increasing the complexity of modelling analyses required (Sunderland and Mason, 2007). Temporal lags in ecosystem responses to regulatory actions have a direct impact on cost–benefit analyses performed as part of RIAs. For example, in the case of mercury, freshwater ecosystems require anywhere from a few years to many decades to reach steady state with new atmospheric deposition levels (Knightes et al., 2009). When translated into economic terms for policy analysis, discounting of regulatory benefits into net present values means that ecosystem lag times result in large declines in perceived benefits (Knightes et al., 2009). In summary, regulatory assessments that use fate and transport models for pollutant dispersion are increasingly requiring information on the global scale to assess the effectiveness of domestic regulations at reducing human exposures and
associated health effects. Accordingly, modelling analyses supporting decision-making are required to integrate information across disciplines at multiple spatial and temporal scales.
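The interaction of ecosystem lag and economic discounting described above can be sketched numerically. The following is a hypothetical illustration (the first-order lag model, discount rate and benefit stream are all assumed here, not taken from Knightes et al., 2009): annual benefits phase in as the ecosystem approaches its new steady state, and are discounted to net present value.

```python
import math

def benefit_npv(lag_years, horizon=100, rate=0.03, annual_benefit=1.0):
    """Net present value of annual benefits that phase in with a
    first-order ecosystem lag (e-folding time lag_years, in years),
    discounted at the given annual rate."""
    npv = 0.0
    for t in range(1, horizon + 1):
        realised = annual_benefit * (1.0 - math.exp(-t / lag_years))
        npv += realised / (1.0 + rate) ** t
    return npv

fast = benefit_npv(lag_years=2.0)   # ecosystem re-equilibrates within a few years
slow = benefit_npv(lag_years=40.0)  # multi-decade response, as for some mercury systems
```

With these assumed numbers, the multi-decade lag roughly halves the discounted benefit relative to the fast-responding ecosystem, which is the effect behind the "large declines in perceived benefits" noted in the text.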
1.2.4
Future directions in air quality modelling
Atmospheric fate and transport models used in policy analysis require an extensive history of past applications and substantial evaluation. Such models are often referred to as ‘‘validated and verified’’. However, model evaluation efforts should ideally continue throughout the entire life cycle of a model’s use. Model parameters need to be continuously updated with new measurement data, and with improvements in scientific understanding of fate and transport processes (NRC, 2007). An example of this practice is provided by the exchange of information between the US EPA’s regulatory modelling division in the Office of Air and Radiation and the Atmospheric Modeling and Analysis Division (AMAD) in the National Exposure Research Laboratory of the US EPA’s Office of Research and Development. The research and development group tests new model developments or extensions to existing models, such as CMAQ, to add or enhance existing modelling capabilities based on regulatory needs. Model development often includes adding new algorithms based on basic science research into atmospheric chemistry reaction rates and transport mechanisms. The credibility of new and enhanced models for regulatory applications is established by comparing simulated results with measurement data to develop performance indicators. The GEOS-Chem global atmospheric chemical transport model (CTM) provides an example of the intersection between research and regulatory modelling applications. GEOS-Chem is used by over 50 research groups globally, and also for a variety of regulatory assessments performed by the US EPA. One example, as mentioned above, was the use of GEOS-Chem to provide the global boundary conditions for the 2005 CAMR modelling simulations. 
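The performance-indicator step described above can be illustrated with two summary statistics widely used when benchmarking simulated concentrations against monitoring data. The metric definitions follow common practice, but the observed and simulated values below are invented for the example.

```python
def normalised_mean_bias(sim, obs):
    """Sum of (simulated - observed) divided by the sum of observations;
    positive values indicate overall overprediction."""
    return sum(s - o for s, o in zip(sim, obs)) / sum(obs)

def rmse(sim, obs):
    """Root-mean-square error between paired simulated and observed values."""
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

observed  = [4.0, 5.5, 6.1, 3.8]   # hypothetical monitored concentrations
simulated = [4.4, 5.0, 6.5, 4.1]   # hypothetical model output

nmb = normalised_mean_bias(simulated, observed)   # small positive bias here
error = rmse(simulated, observed)
```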
Many atmospheric chemistry researchers are now focused on exploring the interactions between climate and air quality using CTM simulations (with models such as GEOS-Chem) driven by general circulation model (GCM) simulations of 21st century climate (Jacob and Winner, 2009). Such simulations, in combination with projections on future emissions levels, can help to plan for future changes in air quality (Wu et al., 2008) and identify the most appropriate regulatory actions. In addition, growing recognition of the importance of air–sea and air–land exchange processes for global budgets (e.g., Figure 1.3 – see colour insert) emphasises the need for coupled biogeochemical models for mercury (Selin et al., 2008; Sunderland and Mason, 2007), methanol (Millet et al., 2008), nitrous oxide (Suntharalingam et al., 2000), and carbon cycling (Suntharalingam et al., 2003, 2008). Finally, much research is now focused on resolving uncertainty in emissions inventories and associated source contributions to atmospheric pollution through the use of inverse modelling of emissions source regions using CTMs (Kopacz et al., 2009; Palmer et al., 2006; Suntharalingam et al., 2005).
1.3 SURFACE WATER QUALITY MODELLING
The objective of the Act is to ‘‘restore and maintain the chemical, physical and biological integrity of our Nation’s waters.’’ Water quality standards set by statute ‘‘shall be such as to protect the public health or welfare. . .’’ Clean Water Act (CWA), 33 USC § 1313(c)(2)(A)
1.3.1 Regulatory background
The goal of the Clean Water Act (CWA) is ‘‘to restore and maintain the chemical, physical and biological integrity of the Nation’s waters’’ (33 USC § 1251(a)). Under section 303(d), the CWA requires all states to submit a ‘‘303(d) list’’ of all impaired and threatened waters to the US EPA for approval every 2 years. A state lists waters as impaired if current regulations and controls are not stringent enough to meet the water quality standards. The state must also establish priorities for the development of TMDLs based on severity of pollution and the sensitivity of the uses of the waters, among other factors (40 CFR §130.7(b)(4)), and provide a long-term plan for completing TMDLs within 8–13 years of a water being listed. Water quality standards define protection goals by designating uses for a given water body, setting criteria to protect those uses, and establishing provisions to protect water quality from pollutants. A water quality standard consists of four basic elements:

• designated uses (e.g., recreation, water supply, aquatic life, agriculture);
• criteria to protect designated uses (numeric pollutant concentrations and narrative requirements);
• an antidegradation policy to maintain and protect existing uses and high-quality waters; and
• general policies addressing implementation issues (e.g., low flows, variances, mixing zones).
A TMDL calculates how much of a given pollutant can enter a water body before the water body exceeds quality standards. Models are used to quantify this ‘‘assimilative capacity’’, and to determine a waste load allocation that ensures that this capacity is not exceeded (DePinto et al., 2004). TMDLs allocate inputs of pollutants to point sources (waste load allocation (WLA), through the National Pollutant Discharge Elimination System (NPDES) programme) and non-point sources (load allocation (LA)). A margin of safety (MOS) is required for numerical assessments to account for uncertainty in the load calculation. The TMDL consists of three parts, which all introduce uncertainty and variability into load assessments:

TMDL = ΣWLA + ΣLA + MOS

Waste loads and non-point sources are often temporally variable, and may undergo transformations, gains, or losses within the water body. The timing and location of critical conditions may be related to climate (e.g., precipitation, snow melt) or hydrology (e.g., high flow, low flow), or be event-based (e.g., discharge, spills). The
MOS is based on the desired level of protection that the TMDL will provide, which is subject to policy interpretations and subjective deliberations. The MOS may also include ‘‘implicit’’ and ‘‘explicit’’ components. The implicit MOS includes conservative assumptions on how the numeric target is derived, how the numeric model application is developed, and the feasibility of restoration activities. The explicit MOS includes uncertainties in the modelling system or water body response, such as setting numeric targets at more conservative levels than sampling results indicate, adding a safety factor to pollutant loading estimates, and/or setting aside a portion of the available loading capacity (US EPA, 1999). TMDLs are developed using various techniques, ranging from simpler mass budget calculations to more complex chemical and hydrodynamic differential mass balance simulations. Water quality models allow the explicit representation of time-dependent transport and transformation processes that affect exposures. The Ecosystems Research Division in the US EPA’s National Exposure Research Laboratory has developed and applied a variety of water quality models for TMDL development and water quality protection. Examples of such models are described in more detail below.5 The US EPA provides assistance to the regional offices, state and local governments, and their contractors in implementing the CWA through the Watershed and Water Quality Modeling Technical Support Center.6
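The TMDL identity lends itself to a simple numerical sketch. In this hypothetical example the assimilative capacity, the individual allocations and the explicit MOS (taken here as a fixed fraction of capacity, one of the approaches noted above) are all invented:

```python
def tmdl_check(capacity, wlas, las, mos_fraction=0.1):
    """Return (TMDL, headroom) for a water body.

    capacity     -- loading the water body can assimilate without exceeding
                    its water quality standard (kg/day, illustrative)
    wlas, las    -- point-source and non-point-source allocations (kg/day)
    mos_fraction -- explicit margin of safety, as a share of capacity
    """
    mos = mos_fraction * capacity
    tmdl = sum(wlas) + sum(las) + mos
    return tmdl, capacity - tmdl

tmdl, headroom = tmdl_check(capacity=100.0,
                            wlas=[30.0, 20.0],   # two NPDES point sources
                            las=[35.0])          # aggregated non-point load
# headroom >= 0 means the allocations plus MOS fit within the capacity.
```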
1.3.2 Examples of water quality models used in TMDLs and regulatory actions
WASP (Water Quality Analysis Simulation Program)
The Water Quality Analysis Simulation Program (WASP7) is an enhancement (version 7) of the original WASP (Ambrose et al., 1988; Connolly and Thomann, 1985; Connolly and Winfield, 1984; Di Toro et al., 1983) that simulates water quality conditions and responses. WASP is a dynamic compartment-modelling framework for aquatic systems that includes water column segments and underlying benthic segments. The WASP modelling framework consists of a simple flow module (including net flows, gross flows, and kinematic wave propagation), several different kinetic modules, including heat (HEAT), toxicants (TOXI), nutrients (EUTRO) and mercury (MERCURY), and the time-varying processes of advection, dispersion, point and diffuse mass loading, and boundary exchange. Examples of previous WASP applications include investigations of:

• eutrophication and phosphorus loading (James et al., 1997; Tufford and McKellar, 1999; Wang et al., 1999);
• kepone, a carcinogenic insecticide (O’Connor et al., 1983);
• volatile organics (Ambrose, 1987; Zhang et al., 2008);
• heavy metals (Caruso, 2003; Yang et al., 2007); and
• mercury (US EPA, 2001b, 2004a, 2004b).
WASP has been used to develop TMDLs for the Neuse River Estuary, North Carolina (Wool et al., 2003); Cape Fear River, North Carolina; Fenholloway River, Florida;
Mobile Bay, Alabama; Flint Creek, Alabama; Coosa Lakes, Alabama; Lake Allatoona, Georgia; and Alabama River, Alabama.7 Figure 1.4 shows the WASP grid developed to simulate nutrient dynamics in the Neuse River, North Carolina. Because WASP is primarily a fate and transport model, it has a relatively simple hydrodynamic component. The model uses known inflows and outflows for each WASP segment, or the kinematic wave formulation for flow routeing. For systems with more complex hydrodynamics, EFDC or DYNHYD can be used, and then the hydrodynamic file can be linked into WASP. These models are described below.

Figure 1.4: Example of Water Quality Analysis Simulation Program (WASP) grid developed to simulate nutrient dynamics in the Neuse River Estuary, NC. Source: Wool, T.A., Davie, S.R. and Rodriquez, H.N. (2003). Development of 3-D hydrodynamic and water quality models to support total maximum daily load decision process for the Neuse River Estuary, North Carolina. Journal of Water Resources Planning and Management. 129: 295–306. (In the public domain.)

Environmental Fluid Dynamics Code (EFDC) and DYNHYD
The Environmental Fluid Dynamics Code (EFDC) (Hamrick, 1996) is a multidimensional hydrodynamic model used to simulate flow and solids movement, and for solving three-dimensional, vertically hydrostatic, free surface, turbulent-averaged equations of motion for variably dense fluids. EFDC has been applied to over 100 water bodies, including rivers, lakes, reservoirs, wetlands, estuaries and coastal ocean regions in support of environmental assessment and regulatory investigations. DYNHYD is an enhancement of the Potomac Estuary hydrodynamic model, and employs
one-dimensional continuity and momentum equations for a branching network. DYNHYD can be used as an intermediary step between WASP’s kinematic wave and EFDC.

River and Stream Water Quality Model (QUAL2K/QUAL2E)
The River and Stream Water Quality Model (QUAL2K or Q2K; Chapra et al., 2007) is a modernised version of the QUAL2E (Q2E) model (Brown and Barnwell, 1987). QUAL2K is a branching, one-dimensional model, where the channel is well mixed vertically and laterally. QUAL2K models the diurnal heat budget and water quality kinetics, and simulates carbonaceous biochemical oxygen demand (BOD), anoxia, sediment–water interactions, algal growth, pH, and pathogens. Collecting the appropriate data for model calibration can be a major problem for complex hydrodynamic models such as QUAL2E. To help address some of these issues, QUAL2E-UNCAS provides a series of uncertainty analysis techniques, including sensitivity analysis, first-order error analysis, and Monte Carlo simulations. These techniques are described in detail elsewhere (Barnwell et al., 2004).

BASINS
The Better Assessment Science Integrating point and Non-point Sources (BASINS) software system is a multipurpose environmental analysis system designed for regional, state and local agencies that perform watershed and water quality-based studies. The BASINS system was originally introduced in 1996, with improved versions in 1998, 2001 and 2004. The most recent development of BASINS 4.0 integrates an open-source geographical information system (GIS) architecture with national watershed data and modelling and assessment tools. BASINS integrates environmental data, analytical tools and modelling programs for management applications, such as the development of TMDLs. BASINS provides a means to search the web for necessary GIS data layers, such as monitoring data, hydrography, land use, and digital elevation.
These files are then pulled into a common layer data source that can be used to populate watershed and water quality models. A series of models have been housed within the BASINS framework, including PEST (the Parameter Estimation Tool), HSPF (Hydrological Simulation Program – FORTRAN), AQUATOX, and SWAT. Currently, there is an effort to move WASP into BASINS.

EXAMS
The Exposure Analysis Modeling System (EXAMS) (Burns et al., 1982; Burns, 2004) was developed to rapidly evaluate the fate and transport of synthetic organic chemicals, such as pesticides, industrial materials and leachates from disposal sites, and resulting exposure levels. EXAMS analyses long-term (steady-state), chronic chemical discharges, and short-term acute chemical releases from episodic discharges or weather-driven runoff and erosion. EXAMS also performs full kinetic simulations that allow for monthly variation in the meteorological parameters and alterations of chemical loadings on a daily timescale. EXAMS is generalisable in structure for different spatial dimensions, and in the number of contaminants and chemical
degradation by-products modelled. The user determines the level of complexity of the environmental description and the number of chemicals simulated. Expected concentrations of synthetic contaminants and their by-products can be simulated using EXAMS. The user can specify the exposure interval so that EXAMS will report acute (e.g., 96 hours) or chronic (21 days or longer) exposures. Sensitivity analysis is included in EXAMS to present the relative importance of each transformation and transport process. The persistence of each chemical through transport or transformation after the source of contaminant has been removed can also be determined. EXAMS has also been linked to PRZM3 (the Pesticide Root Zone Model) and BASS (a food web bioaccumulation model). PRZM is a one-dimensional, dynamic, compartment model that simulates chemical movement in an unsaturated soil system around the plant root zone. PRZM3 includes both PRZM and VADOFT (Vadose Zone Flow and Transport Model) in its internal structure. VADOFT simulates moisture movement and solute transport in the vadose zone (the zone from the ground surface to the water table in terrestrial systems). Linking PRZM to VADOFT allows fate, transport and resulting exposures in nearby water bodies to be modelled following applications of agricultural pesticides.
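The Monte Carlo uncertainty analysis offered by tools such as QUAL2E-UNCAS can be illustrated in miniature. The sketch below is not QUAL2E code: it propagates an assumed uncertainty in a first-order decay rate into a downstream concentration, applying the same idea to far simpler kinetics.

```python
import math
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def downstream_conc(c0, k, travel_days):
    """First-order decay of an upstream concentration c0 over a travel time."""
    return c0 * math.exp(-k * travel_days)

# Sample the decay rate from an assumed distribution (mean 0.3/day, sd 0.05/day)
# and propagate each draw to the downstream concentration.
samples = [downstream_conc(c0=10.0,
                           k=random.gauss(0.3, 0.05),
                           travel_days=2.0)
           for _ in range(5000)]

mean = sum(samples) / len(samples)
p95 = sorted(samples)[int(0.95 * len(samples))]
# The spread between mean and the 95th percentile summarises how parameter
# uncertainty translates into uncertainty in the predicted exposure.
```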
1.3.3 Current and future directions for water quality modelling
The US EPA’s Office of Research and Development is currently working to improve both the underlying science and the physical simulation capabilities of its available water quality modelling tools. In WASP7, recent advances have increased the number of algal types in the eutrophication module. This allows for different physiological growth parameters for algae, which are important to accurately represent the timing of blooms and algal concentrations. Algorithms describing mercury kinetics are also being updated to reflect recent advances in the understanding of environmental processes driving methylation, demethylation, oxidation and reduction reactions. Current research projects involving WASP include simulating headwaters of watersheds, smaller stream and river systems, and complex hydrologic systems including reservoirs, impoundments and flood plains. To better represent the diversity of these systems, recent WASP developments have included the incorporation of the weir equations to handle flow routeing over dams/impoundments, and variable hydraulic geometry to handle systems with variable widths, depths and velocities as functions of volumetric flow rates. Future versions of the model will include DYNHYD algorithms for flow routeing, updated solids routines based on shear stress, process-based solids deposition, scour equations, and the introduction of a fourth solids type (cobbles) to represent solids that cannot resuspend or scour. Better representation of watershed influences on receiving water quality is required for water quality modelling to include assessments of how smaller rivers and streams affect the dynamics of large water bodies. Recent mercury cycling research and TMDL development by the US EPA reinforces the importance of the watershed as a loading source for most freshwater systems. TMDLs developed for Brier Creek and the Ochlockonee and Ogeechee watersheds in Georgia applied the Watershed Characterization System-Mercury Loading Model (WCS-MLM) to simulate mercury loadings into the river of interest. A spatially explicit watershed model for mercury (the Grid Based Mercury Model (GBMM)) has since been developed as an enhancement to initial work performed using WCS-MLM. GBMM uses GIS layers of digital elevation, land-use type, soil-use type and stream hydrography to simulate watershed soil mercury concentrations, runoff and erosion to nearby water bodies (streams and rivers) as a function of atmospheric deposition loading to watersheds.8 Like WASP, GBMM is being incorporated into BASINS. GBMM was originally developed within ArcGIS; the transition into BASINS will assist in pulling the necessary data layers for GBMM, and rebuilds the tool on an open-source, freely available GIS program. Much of the present work enhancing modelling capabilities at the US EPA is focused on combining multiple modelling tools to assess interactions among environmental media and human behaviour. For example, understanding the effectiveness of atmospheric mercury emissions controls requires the output of atmospheric models to be linked to watershed, water body and bioaccumulation models to simulate changes in fish mercury levels (Knightes et al., 2009). The scales of modelling analysis are also growing beyond processes occurring within a single water body to include the interrelationships of ecosystems and account for environmental stressors that are typically not limited by political boundaries. To enhance technological capabilities for integrated modelling analyses, the US EPA has been applying specific tools such as the Framework for Risk Analysis of Multimedia Environmental Systems (FRAMES). FRAMES enables complex environmental process simulations based on linked modelling applications by allowing different models to pass data efficiently.
For example, the US EPA has developed a pilot application for 53 twelve-digit hydrologic unit code (HUC) headwater watersheds within the Albemarle and Pamlico watersheds in North Carolina and Virginia. For this application, hydrologic, water quality and bioaccumulation models were linked in FRAMES, along with an ecological model for habitat suitability index and a companion mercury watershed tool. The coupled modelling application was used to simulate the fate of atmospherically deposited nitrogen and mercury through the watershed and into streams and rivers to predict impacts on fish communities. Linking systems within a single framework facilitated sensitivity and uncertainty analysis for the unified modelling system. Ongoing modelling of the effects of hydrology and erosion on nitrogen and mercury cycling in the Cape Fear River basin is also linking atmospheric, watershed and water body models for a complete analysis of ecosystem processes. However, linking models across different media can be challenging, owing to propagation of uncertainty. For example, closing the hydrological mass balance across linked atmospheric and aquatic models can be difficult, because atmospheric models typically use simulated precipitation for the deposition via rainfall, whereas watershed hydrology models have traditionally used observed rain gauge station data. Synchronised water mass balances require the precipitation rates to be identical, otherwise pollutants such as nitrogen, sulfur and mercury are lost from the atmosphere at rates different from those at which they are deposited on the land surface, resulting in mass balance errors. Differences in spatial and temporal resolution of simulations
across different media must also be reconciled in these types of application. Generally, the time steps used for atmospheric modelling simulations are small (seconds) compared with ecosystem models (days), while the spatial domain of atmospheric models tends to be much larger (grid sizes of the order of kilometres) than ecosystem models (grid sizes of the order of metres).
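The precipitation-mismatch problem described above is easy to demonstrate with a toy wet-deposition calculation (all values hypothetical): identical deposition chemistry driven by two different rainfall totals leaves a spurious mass imbalance between the linked models.

```python
def wet_deposition(conc_mg_per_mm_km2, rainfall_mm, area_km2):
    """Deposited mass (mg) for a given rainfall depth over a watershed,
    using an illustrative concentration per mm of rain per km2."""
    return conc_mg_per_mm_km2 * rainfall_mm * area_km2

# Same pollutant concentration, two rainfall records:
atm_load = wet_deposition(0.5, rainfall_mm=900.0, area_km2=250.0)   # simulated rain
land_load = wet_deposition(0.5, rainfall_mm=820.0, area_km2=250.0)  # gauge rain

imbalance = atm_load - land_load      # mass 'lost' between the two models
relative_error = imbalance / atm_load # ~9% of the atmospheric flux here
```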
1.4 BIOACCUMULATION MODELLING
Regulatory action on existing toxic substances is authorized ‘‘if the Administrator finds that there is a reasonable basis to conclude that the manufacture, processing, distribution in commerce, use, or disposal of a chemical substance or mixture, or any combination of such activities, presents or will present unreasonable risk of injury to health or the environment’’. Toxic Substances Control Act (TSCA), 15 USC § 2605(a)
A pesticide may be registered only if the Administrator finds that ‘‘when used in accordance with widespread and commonly recognized practice, it will not cause unreasonable adverse effects on the environment.’’ Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), 7 USC § 136(c)(5)(D)
A protective standard for pesticide residues is rebutted only once ‘‘there is reasonable certainty that no harm will result from aggregate exposures to these residues’’. Federal Food, Drug, and Cosmetic Act, 21 USC § 346a
1.4.1 Regulatory background
Chemical substances that achieve high concentrations in organisms relative to ambient environmental concentrations (air/water/sediments) can pose a variety of human and ecological health risks. Generally, those chemicals that are persistent, bioaccumulative, toxic, and subject to long-range transport have been the focus of chemical screening assessments and regulatory control strategies (Mackay and Fraser, 2000). Bioaccumulation of chemicals is generally quantified using either empirical data or mechanistic models. Empirical approaches use field data to derive a bioconcentration factor (BCF) or bioaccumulation factor (BAF), whereas mechanistic models use mass balance relationships to characterise biological uptake and loss processes (Mackay and Fraser, 2000). Organism/water concentration ratios measured in laboratory tests (BCFs) or in the field (BAFs) have limited predictive capability, and are subject to numerous errors, such as those caused by biological variability (Arnot and Gobas, 2006). Mechanistic models require more extensive data about chemical properties and food webs, but can provide insight into bioaccumulation phenomena, and have much greater predictive capability. Such models are used for a variety of applications, including screening of new and existing chemicals for their potential to bioaccumulate, developing environmental quality guidelines for water and sediments, and assessing exposure of biota to pollutants.
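The empirical approach reduces to a simple ratio, sketched below with invented field values: a BAF in this form is the measured organism concentration divided by the ambient water concentration.

```python
import math

def bioaccumulation_factor(c_organism_ug_per_kg, c_water_ug_per_l):
    """BAF in L/kg: organism concentration over ambient water concentration."""
    return c_organism_ug_per_kg / c_water_ug_per_l

# Hypothetical field data: 500 ug/kg in fish tissue, 0.005 ug/L in water.
baf = bioaccumulation_factor(500.0, 0.005)
log_baf = math.log10(baf)  # log BAF of ~5, a strongly bioaccumulative signal
```

As the text notes, a single ratio like this carries all the biological variability of the underlying samples, which is why mechanistic models are preferred for prediction.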
Bioaccumulation modelling is an integral part of several different regulatory programmes at the US EPA, including water quality assessments, contaminated site clean-up, and screening of new chemicals. As discussed above, bioaccumulation modelling may also be used to inform risk and regulatory assessments for atmospherically deposited contaminants. Water quality standards as part of the CWA for bioaccumulative contaminants such as mercury are listed as fish tissue residue guidelines rather than as aqueous concentration thresholds (US EPA, 2008). These assessments require information on bioaccumulation that is derived from models on a site-specific basis, or is based on simplified BAFs for different ecosystems. The Superfund programme also requires information on potential bioaccumulation of metals and organic contaminants to develop hazard indices for humans and wildlife that, in turn, guide remediation goals. Finally, the US EPA’s Office of Pollution Prevention and Toxic Substances (OPPTS) is tasked with screening new and existing chemicals used by industry for potentially harmful environmental effects, including bioaccumulation. For example, the new chemicals programme under the Toxic Substances Control Act (TSCA) requires the US EPA to review approximately 2000 new chemicals per year, and to issue decisions on 20–30 chemicals per day (NRC, 2007).
1.4.2 Chemical screening
Screening-level models are useful for identifying chemicals that could potentially pose a threat to human or ecological health because of persistence in the environment, bioaccumulation potential, or toxicity. These models are not subject to the same rigorous review standards as other regulatory models because they are meant to trigger further investigation of potentially problematic chemicals. Physicochemical properties can be used in screening-level models to identify those chemicals most likely to pose environmental risks. Some screening-level models are simplified versions of more complex bioaccumulation models. For example, the BAF-QSAR model9 is a generic tool that provides estimates of the BAF of non-ionic organic chemicals in three general trophic levels of fish (i.e., lower, middle and upper). The BAF predictions are considered ‘‘generic’’ in that they are not considered to be for a particular species of fish. The model is essentially a quantitative structure–activity relationship (QSAR) requiring only the octanol–water partition coefficient (Kow) of the chemical and the metabolic transformation rate constant (if available) as input parameters. BAF-QSAR v1.1 is derived from the parameterisation and calibration of a mechanistic bioaccumulation model to a large database of evaluated empirical BAFs from Canadian waters. The empirical BAFs are for chemicals that are poorly metabolised and are grouped into lower, middle and upper trophic levels of fish species. The model is calibrated to each trophic level of measured BAF values, thus providing estimates that are in agreement with empirical BAFs. The model can be used to predict dietary concentrations for higher trophic-level predators (e.g., birds and mammals), including human exposure concentrations from fish in their diet. When a contaminant is known to meet the criteria for bioaccumulation potential, persistence in the environment and toxicity,
additional, more detailed modelling simulations are often focused on specific sites of interest where contamination is thought to be a problem. Simple BAF calculations may not accurately predict concentrations of extremely hydrophobic chemicals and metals, such as mercury, that are often the chemicals of greatest concern (Barber, 2003).
1.4.3 Examples of regulatory and research models commonly used by the US EPA
AQUATOX
AQUATOX is a simulation model for aquatic ecosystems, predicting the fate of pollutants, nutrients and organic chemicals and their ecosystem effects on fish, invertebrates and aquatic plants (Rashleigh, 2003; US EPA, 2001a). AQUATOX can be used to develop numeric nutrient targets based on desired ecological endpoints, evaluate impacts of several stressors that may cause biological impairment, predict effects of pesticides, evaluate response to invasive species, and (when linked with the BASINS modelling system) determine effects of land-use change on aquatic life. Example applications of the model include periphyton response to nutrients, snail grazing and variable flow in Walker Branch, Tennessee (US EPA, 2001a), eutrophication due to nutrient and organic matter loadings at Coralville Reservoir, Iowa, and computation of bioaccumulation of organic toxins in Lake Ontario (US EPA, 2000b), among others (Bingli et al., 2008; Park et al., 2008; Rashleigh et al., 2009).

BASS
The Bioaccumulation and Aquatic System Simulator (BASS) model is a physiological model that simulates the population and bioaccumulation dynamics of age-structured fish communities. BASS describes contaminant dynamics using algorithms that account for species-specific terms affecting uptake and elimination of mercury, such as diet composition and growth dilution among different age classes of fish (Barber, 2001, 2006). For example, fish mercury intake is modelled as a function of gill exchange and dietary ingestion, and the model partitions mercury internally to water, lipid and non-lipid organic material. The structure of BASS is generalised and flexible, allowing users to simulate both small, short-lived species (dace and minnow) and large, long-lived species (bass, perch, and trout) by specifying either monthly or yearly age classes for any given species.
The community’s food web is defined by identifying one or more foraging classes for each fish species, based on body weight, body length, or age. The dietary composition of each foraging class is then specified as a combination of benthos, incidental terrestrial insects, periphyton/attached algae, phytoplankton, zooplankton, and/or other fish species (Barber, 2006). Although BASS was developed to investigate the bioaccumulation of chemical pollutants within a community or ecosystem context, it can also be used to explore population and community dynamics of fish assemblages that are exposed to a variety of non-chemical stressors, such as altered thermal regimes associated with hydrological alterations or industrial activities, commercial or sports fisheries, and introductions
of non-native or exotic fish species. BASS can be used to evaluate various dimensions of fish health through its capability to simulate the growth and predator–prey dynamics for individual fish, and the productivity, recruitment/reproduction and mortalities of their associated populations. Process-based models such as BASS that simulate the toxicokinetic, physiological and ecological processes of fish provide scientifically defensible tools that can overcome many of the limitations and uncertainties associated with the use of BAF approaches (Barber, 2003).

AQUAWEB
AQUAWEB was developed by Arnot and Gobas (2004), and is a modified version of the Gobas 1993 food web model (Gobas, 1993) developed for assessing the degree of bioaccumulation of hydrophobic organic chemicals in aquatic ecosystems. AQUAWEB is a kinetically based model that provides site-specific estimates of chemical concentrations in organisms of aquatic food webs from chemical concentrations in water and sediments (Arnot and Gobas, 2004). For zooplankton, aquatic invertebrates and fish, the model calculates rates of chemical uptake from the water and the diet, and rates of chemical elimination to the water, faeces and the ‘‘pseudo’’ loss mechanism of growth dilution. The model requires input data on chemical properties (Kow), environmental conditions, species-specific characteristics, and food web structure. These data are used to develop rate constants for chemical uptake and elimination in fish through dietary intake, gill ventilation, metabolic transformation, faecal egestion, and growth dilution.
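The uptake and elimination balance used by kinetic models of this type can be written as a generic one-compartment mass balance. The sketch below is not AQUAWEB's actual parameterisation; the rate constants and concentrations are invented to show the structure dC/dt = k1·Cw + kd·Cdiet − (k2 + ke + km + kg)·C, whose steady state is the ratio of total uptake to total loss.

```python
def steady_state_conc(k1, cw, kd, cdiet, k2, ke, km, kg):
    """Steady-state tissue concentration from the one-compartment balance
    dC/dt = k1*Cw + kd*Cdiet - (k2 + ke + km + kg)*C = 0."""
    return (k1 * cw + kd * cdiet) / (k2 + ke + km + kg)

c_fish = steady_state_conc(
    k1=200.0, cw=0.001,    # gill uptake (L/kg/day) and dissolved conc (mg/L)
    kd=0.02,  cdiet=5.0,   # dietary uptake rate (1/day) and prey conc (mg/kg)
    k2=0.01, ke=0.005, km=0.002, kg=0.003)  # respiratory, faecal, metabolic
                                            # and growth-dilution losses (1/day)
# c_fish is in mg/kg; raising the growth-dilution term kg lowers it.
```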
1.4.4
Future research and regulatory directions
In 2004, the Stockholm Convention on Persistent Organic Pollutants proposed to eliminate most globally persistent, bioaccumulative and toxic substances. Such assessments have traditionally relied on values of Kow for different chemicals, or on empirical BAF/BCF data when such information is not available. Generally, chemicals with Kow > 10^5 are considered bioaccumulative, and are screened for further study in regulatory assessments. However, a variety of studies have shown that terrestrial food webs (air-breathing organisms) can bioaccumulate chemicals with low Kow values but high octanol–air (Koa) partition coefficients (Koa range 10^6–10^12) (Kelly and Gobas, 2001; Kelly et al., 2007). For example, Kelly et al. (2007) showed that although hydrophobic chemicals such as hexachlorocyclohexanes (HCHs, Kow = 10^3.8), tetrachlorobenzenes (Kow = 10^4.1) and endosulfans (Kow = 10^3.7) do not biomagnify in piscivorous aquatic organisms, they accumulate to a significant degree in terrestrial food webs. The authors also found that two-thirds of all chemicals used in commerce have Kow > 10^2 and Koa > 10^6, and could therefore accumulate in terrestrial food webs. This research strongly suggests that chemical screening models based on Kow values that do not consider Koa values for terrestrial food webs will miss a large number of bioaccumulative contaminants. Future research and regulatory work needs to consider the implications of these findings for current chemical screening methods in the US.
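A screening rule of the kind discussed above takes only a few lines to express. The thresholds follow the text (Kow > 10^5 for the conventional aquatic screen; Kow > 10^2 together with Koa > 10^6 for terrestrial food webs), but the function and the example values are illustrative only:

```python
# Screen chemicals for bioaccumulation potential from log Kow and log Koa.
# Thresholds follow the text: log Kow > 5 flags the conventional aquatic
# screen; log Kow > 2 combined with log Koa > 6 flags potential
# bioaccumulation in terrestrial (air-breathing) food webs.

def screen(log_kow, log_koa):
    flags = []
    if log_kow > 5:
        flags.append("aquatic")
    if log_kow > 2 and log_koa > 6:
        flags.append("terrestrial")
    return flags

print(screen(6.5, 9.0))   # a classic hydrophobic bioaccumulator: both flags
print(screen(3.8, 8.8))   # HCH-like chemical: missed by a Kow-only screen
print(screen(1.5, 4.0))   # flagged by neither criterion
```

The middle case is the one the text emphasises: a Kow-only rule passes the chemical over, while the combined Kow/Koa rule flags it for terrestrial food webs.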
1.5
CONTAMINATED SITE REMEDIATION
Standards for treatment of hazardous wastes disposed onto land shall specify "those levels or methods of treatment, if any, which substantially diminish the toxicity of the waste or substantially reduce the likelihood of migration of hazardous constituents from the waste so that short-term and long-term threats to human health and the environment are minimized." Resource Conservation and Recovery Act (RCRA), 42 USC § 6924(m)
The President shall select a remedial action that is protective of human health and the environment, that is cost effective, and that utilises permanent solutions and alternative treatment technologies or resource recovery technologies to the maximum extent practicable. Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), 42 USC § 9621(b)
1.5.1
Regulatory background
Both Federal and state-level regulatory programmes oversee contaminated site remediation, depending on the nature, extent and severity of contamination. Larger sites with multiple contaminants across environmental media and one or more potentially responsible parties (PRPs) that are listed as Superfund sites fall under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) or the Resource Conservation and Recovery Act (RCRA). These are both Federal programmes that govern the way sites are characterised, assessed and remediated. In general, CERCLA and RCRA sites rely on risk-based approaches: that is, typically, human health and ecological risk assessments are conducted, and provide the basis for determining remedial standards or clean-up goals. State-level oversight tends to fall under brownfields-type programmes or other environmental programmes (e.g., every state has a clean-up programme, typically invoked in the context of changes in land use or through real estate transactions). Again, as for the Federal programmes, state regulatory oversight typically involves developing a risk assessment to determine the potential for adverse effects, and generally clean-up levels and remediation goals are based on agreed regulatory thresholds for risk and hazard.

CERCLA, commonly known as Superfund, was enacted by Congress on 11 December 1980. This law established a provision for taxing the chemical and petroleum industries, and provides broad Federal authority to respond directly to releases or threatened releases of hazardous substances that may endanger public health or the environment. CERCLA establishes prohibitions and requirements with respect to closed and abandoned hazardous waste sites, and provides for liability of parties responsible for releases of hazardous waste at these sites. Initially, it established a trust fund to provide for clean-up when no responsible party could be identified.
However, the trust fund has been largely depleted in recent years. CERCLA authorises both short- and long-term removal actions. Short-term responses are required when there is the threat of immediate danger to human health and the environment, whereas longer-term responses are required for releases that are
not immediately life-threatening, but which are serious with respect to the potential effects of exposure in humans and animals. The overall goal of any CERCLA remedy is to protect human health and the environment: thus, in virtually all cases, risk assessments are conducted to evaluate the potential for adverse effects.

RCRA is the law that governs disposal of all hazardous and non-hazardous solid waste. Facilities that generate, treat, store or dispose of hazardous waste are regulated under Subtitle C of RCRA. As with CERCLA, the overall mandate of the law is to protect human health and the environment. The general approaches that RCRA takes are: preventing environmental problems by ensuring that wastes are well managed from "cradle to grave"; reducing the amount of waste generated; conserving energy and natural resources; and cleaning up environmental problems caused by the mismanagement of wastes. The RCRA Corrective Action programme, part of Subtitle C, specifies when action is required to clean up contamination at a facility (as opposed to a site, as under CERCLA). Consequently, RCRA corrective actions usually occur at facilities that treat, store or dispose of hazardous waste, and corrective actions can often occur while a facility continues operating.
1.5.2
Risk-based approaches to contaminated site clean-up
Within the US EPA’s waste and clean-up programmes, and most state programmes, the National Academy of Sciences’ Risk Assessment Paradigm provides the framework for informing regulatory and programme decisions to protect human health and the environment. A variety of the US EPA’s reports (US EPA, 1989, 1991a, 1991b, 1996, 1997, 2000c) provide guidance for designing and conducting human health and ecological risk assessments under CERCLA. These documents generally serve as guidance for the RCRA programme as well. Most states also have their own risk assessment guidance. Risk assessments are generally context- and site-specific. However, all risk assessments share the common element of modelling. Virtually every risk assessment requires underlying fate and transport and food chain modelling to support estimates of risk into the future. These models are used to generate exposure estimates that are used for risk characterisation. The US EPA Center for Exposure Assessment Modeling10 was established to provide a repository for some of the most commonly used models, but in practice any model (assuming model development was appropriately conducted) can be used to support site-specific decision-making, particularly since all risk assessments and models for the larger sites will undergo peer review. Accordingly, a wide variety of modelling applications are used to support human health and ecological risk assessments in this area.
1.5.3
Hudson River polychlorinated biphenyl (PCB) modelling example
The Hudson River flows through New York State for nearly 300 miles (480 km), beginning at a small mountain lake on the side of the State's highest peak, Mt Marcy,
and ending in New York Harbor, one of the world's busiest and most populated metropolitan port areas. Locations along the river are expressed as river miles, representing the number of miles north of New York Harbor. Halfway along the river, approximately 150 miles north of New York Harbor (at river mile 150), the Hudson River flows over the Federal lock and dam at Troy. Below this point the river is an estuary (i.e., a river flowing at sea level, where salt water and fresh water mix with tidal flows). The upper freshwater portion of the river consists of a series of locks and dams, whereas the lower river is entirely open. The salt front extends up to river mile 50, with salinity dropping off sharply north of that point.

In 1973, the Fort Edward dam (at approximately river mile 185) was removed because of its deteriorating condition, causing a large migration of PCBs into the lower Hudson River. The portion of the river between miles 188 and 195, known as Thompson's Island Pool, represents the most contaminated area, and is considered by the US EPA to be the primary source of PCB contamination to the remainder of the river. PCB concentrations upstream of Thompson's Island Pool are primarily at non-detect levels, and concentrations decline approximately linearly down the river in all media: sediment, water, and biota. PCB levels in the upper river (above the Federal dam) are orders of magnitude higher than those in the lower river. Resuspension of highly contaminated sediments from the Thompson's Island Pool area is implicated in the continuing downstream migration of PCBs. In addition, PCB releases from the General Electric (GE) Hudson Falls site due to migration of PCB oil through bedrock have also occurred, although the extent and magnitude of these releases are not well known.
This leakage was identified after the partial failure of an abandoned mill structure near GE's Hudson Falls plant site in 1991 revealed that PCB-bearing oils and sediments had accumulated within the structure. This failure also served to augment PCB migration from the bedrock beneath the plant to the river.

Over a 30-year period ending in 1977, two GE facilities, one in Fort Edward and the other in Hudson Falls, New York, used PCBs in the manufacture of electrical capacitors. PCBs were considered an ideal insulating fluid because of their non-flammable properties. Various sources have estimated that between 209 000 and 1 300 000 lb (94 800–590 000 kg) of PCBs were discharged between 1957 and 1975 from these two GE facilities (Sofaer, 1976; Limburg, 1984). Discharges resulted from washing PCB-containing capacitors and from PCB spills during the building and dismantling of transformers and capacitors.

PCBs were marketed under the trade name "Aroclor" from 1930 to 1977. Aroclors represent mixtures of individual congeners, but the exact identity and proportion of congeners in any Aroclor mixture are not standard. The most common Aroclors include 1016, 1221, 1242, 1248, 1254 and 1260. The last two digits of any Aroclor number represent the weight percent of chlorine in the mixture (e.g., Aroclor 1254 is 54 percent chlorine by weight). According to scientists at GE, at least 80% of the total PCBs discharged are believed to have been Aroclor 1242, with lesser amounts of Aroclors 1254, 1221 and 1016 (Brown et al., 1985). The exact congener profile of commercial PCBs varied from lot to lot (even in lots from the same manufacturer), making it difficult to draw conclusions about congener profiles in Aroclors unless congener-specific analytical techniques are employed. Specification of the exact
proportion of individual congeners in PCB mixtures requires measurement at a cost of approximately $1000 per sample, versus $300 for the Aroclor standards. Environmental samples are compared with Aroclor standards, but PCB composition in the environment changes over time through fate processes that include partitioning, chemical transformation, and preferential bioaccumulation into lipid-rich tissues. This results in a congener composition in the environment that differs significantly from the original mixture. Degradation congeners are not accounted for in the Aroclor standards (thus they are not typically detected).

The US EPA conducted a survey in 1974 showing elevated levels of PCBs in fish. The New York State Department of Environmental Conservation (NYS DEC) subsequently confirmed those findings, and yearly monitoring of fish has been conducted since that time. In February 1976, NYS DEC and the New York State Department of Health (NYS DOH) banned all fishing in the upper Hudson (from the Fort Edward dam to the Federal dam at Troy), and closed all Hudson River commercial fisheries. In that same year, a massive flood (a 100-year event) caused a large movement of contaminated sediments from the upper Hudson into the lower river.

In 1984, the US EPA issued a Record of Decision identifying GE as the responsible party at the Hudson Site, and called for in-place containment of the remnant deposits along the river bank but no action on sediments at the river bottom. In 1989, NYS DEC petitioned the US EPA to reconsider the 1984 Superfund "no action" decision, citing data that suggested continued unsafe levels of PCBs in fish, new information regarding the toxicity of PCBs, and US EPA studies validating the effectiveness of dredging as a clean-up strategy.
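Returning to the Aroclor naming convention described above, the last-two-digits rule is easily encoded. The helper below is illustrative; note that Aroclor 1016 is a commonly cited exception to the rule, at roughly 41% chlorine by weight:

```python
# Weight percent chlorine implied by an Aroclor designation. Rule from the
# text: the last two digits give percent chlorine by weight (e.g., Aroclor
# 1254 is 54% chlorine). Aroclor 1016 is a commonly cited exception, at
# approximately 41% chlorine.

def chlorine_weight_percent(aroclor):
    if aroclor == 1016:     # exception to the naming rule
        return 41
    return aroclor % 100

print(chlorine_weight_percent(1254))   # 54
print(chlorine_weight_percent(1242))   # 42
print(chlorine_weight_percent(1016))   # 41
```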
Consequently, both the US EPA and GE developed a series of models to assess site conditions, and to develop risk-based tools and strategies to support clean-up goals for the Hudson River. The principal questions to be answered by the modelling were:

• When will PCB levels in fish populations recover to levels meeting human health and ecological risk criteria under continued No Action?
• Can remedies other than No Action significantly shorten the time required to achieve acceptable risk levels?
• Are there contaminated sediments now buried that are likely to become "reactivated" following a major flood, possibly resulting in an increase in contamination of the fish population?
The principal fate and transport model developed by the US EPA to evaluate these questions is called the Upper Hudson River Toxic Chemical Model (HUDTOX). HUDTOX was developed to simulate PCB transport and fate for 40 miles of the upper Hudson River from Fort Edward to Troy, New York. The HUDTOX model code was based on an earlier version of the WASP model (WASP4/TOXI4) and updated by the US EPA to incorporate a variety of enhancements. HUDTOX simulates PCBs in the water column and sediment bed, and balances inputs, outputs and internal sources and sinks. Mass balances were constructed first for water, then for solids and bottom sediment, and finally for PCBs. External inputs of water, solids loads and PCB loads, plus values for many internal model coefficients, were specified from field observations. Once inputs were specified, the remaining internal model parameters were calibrated so that concentrations computed by the model would agree with field observations. Model calculations of forecast PCB concentrations in water and sediment from HUDTOX were used as inputs for the forecasts of the bioaccumulation model (FISHRAND).

The US EPA also developed the Depth of Scour Model (DOSM) to provide spatially refined information on sediment erosion depths in response to high-flow events, such as a 100-year peak flow. The DOSM is a two-dimensional sediment erosion model that was applied to the most contaminated portion of the river (and a potential source of PCBs to the remainder of the river), namely the Thompson Island Pool. The Thompson Island Pool is characterised by high levels of PCBs in the cohesive sediments. DOSM was linked with a hydrodynamic model that predicts the velocity and shear stress (the force of the water acting on the sediment surface) during high flows. There was also a linkage between the DOSM and HUDTOX: relationships between river flow and cohesive sediment resuspension were developed using the DOSM for a range of flows below the 100-year peak flow, and these relationships were used in the HUDTOX model to represent flow-dependent resuspension.

The bioaccumulation model FISHRAND is based on the food-web bioaccumulation model developed by Gobas (Gobas, 1993; Gobas et al., 1995), and provides a process-based, time-varying representation of PCB accumulation in fish. This is the same form of the model as was used to develop criteria under the Great Lakes Initiative (US EPA, 1995). FISHRAND incorporates distributions instead of point estimates for input parameters, and calculates distributions of fish body burdens from which particular point estimates can be obtained, for example the median, average, or 95th percentile.
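The mass-balance bookkeeping performed by HUDTOX (inputs, outputs and internal sources and sinks for the water column and sediment bed) can be caricatured with a two-box sketch. All coefficients below are invented for illustration and bear no relation to HUDTOX's calibrated values:

```python
# Two-box (water column / sediment bed) PCB mass-balance sketch. Processes:
# upstream load, downstream outflow, settling from water to sediment,
# resuspension from sediment to water, and deep burial. Daily Euler steps.
# Coefficients are illustrative only, not HUDTOX's calibrated values.

def simulate(days, load=1.0, outflow=0.05, settling=0.02,
             resuspension=0.001, burial=0.0005):
    water, sediment = 0.0, 0.0           # PCB mass in each box (kg)
    for _ in range(days):
        settle = settling * water        # water -> sediment flux
        resus = resuspension * sediment  # sediment -> water flux
        water += load - outflow * water - settle + resus
        sediment += settle - resus - burial * sediment
    return water, sediment

w, s = simulate(days=10_000)
print(round(w, 1), round(s, 1))   # long-run (steady-state) inventories
```

At steady state the water-column inventory balances the load against outflow and net settling, and most of the mass accumulates in the sediment box, mirroring the role the contaminated sediment bed plays in the Hudson.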
Modelling input variables can be described by distributions or point estimates, and users can specify whether parameters should be considered as "variable" (e.g., contributing directly to the population distribution of concentrations) or "uncertain" (e.g., contributing to the uncertainty bounds around the population distribution). There is "true" uncertainty (i.e., lack of knowledge) in the estimated concentrations of sediment and water to which aquatic organisms are exposed, and also variability in parameters contributing to contaminant bioaccumulation. Uncertainty and variability should be viewed separately in risk assessment, because they have different implications for regulators and decision-makers (Morgan and Henrion, 1990; Thompson and Graham, 1996). Variability is a population measure, and provides a context for a deterministic point estimate (e.g., average or reasonable maximum exposure). Variability typically cannot be reduced, only better characterised and understood. In contrast, uncertainty represents unknown but often measurable quantities. Typically, obtaining additional measurements can reduce uncertainty. Quantitatively separating uncertainty and variability allows an analyst to determine the fractile of the population for which a specified risk occurs, and the uncertainty bounds or confidence interval around that predicted risk. If uncertainty is large relative to variability (i.e., it is the primary contributor to the range of risk estimates), and if the differences in cost among management
alternatives are high, collecting and evaluating additional information can be recommended before making management decisions regarding risks from exposures to contaminants. Including variability in risk estimates also allows decision-makers to evaluate quantitatively the likelihood of risks both above and below selected reference values or conditions (for example, average risks as compared with 95th percentile risks). Characterising uncertainty and variability in any model parameter requires informed and experienced judgement. Studies have shown that in some cases, based on management goals and data availability, it is appropriate to "parse" input variables as predominantly uncertain or variable (Cullen, 1995; Kelly and Campbell, 2000; von Stackelberg et al., 2002a, 2002b). Figure 1.5 provides a schematic of the nested modelling framework.

The spatial submodel of FISHRAND is described in detail elsewhere (Linkov et al., 2002; von Stackelberg et al., 2002a). It uses variables that describe fish foraging behaviours to calculate the probability that a fish will be exposed to a chemical concentration in water or sediment. The spatial submodel uses temporally variable sediment and water chemical concentrations, the size of the site and of hotspots (using GIS-based inputs), attraction factors, migration habits of the fish, fish foraging area sizes and habitat sizes to calculate the probability that a fish will be exposed to chemicals in the site. The management area or site is divided into background areas and up to 10 hotspots. The model requires that all hotspots be located within the management area, and that they do not overlap. Each area is defined by minimum and maximum x and y coordinates, and the user provides the water and sediment chemical concentrations, organic carbon content of sediments, and water temperature for each area. These inputs can be point estimates or, preferably, distributions.
Different areas may have the same sediment and water concentration, organic carbon content, and temperature, or one or all of these values may differ among areas. When a fish is not located within the site, the water and sediment exposures as well as exposure from diet are assumed to be zero. Essentially, FISHRAND starts with a large number of fish (e.g., the number of simulations in the variability loop of the model) and scatters them randomly over the modelling grid. The modelling grid is defined by a GIS-based map of the management area with spatially defined exposure concentrations in sediment and water. These can be defined in as much spatial and temporal detail as is available, including hotspots, background concentrations, and changes in concentrations over different time periods. These fish then move and forage according to their user-specified feeding preferences and foraging areas over the time interval specified in the model (typically one week, although it could be as little as one day or as much as a season). As the fish engage in these individual behaviours, they are exposed to sediment, water and benthic invertebrate concentrations relative to the underlying modelling grid. Figure 1.6 presents a schematic of the modelling equations and the mathematical connections that link model components. In addition to capturing the impact of migratory behaviours on exposure, the spatial submodel also incorporates the impact of heterogeneous chemical distribution across the site.
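The spirit of the spatial submodel, in which fish are scattered over a gridded site and exposed to whatever concentrations they forage across, can be conveyed with a toy example. The grid dimensions, hotspot location and concentrations below are all invented:

```python
import random

# Toy spatial exposure sketch: fish forage at random over a 10 x 10 grid in
# which one hotspot region carries an elevated sediment concentration; each
# fish's exposure point concentration (EPC) is the average over the cells it
# visits. Grid size, concentrations and visit counts are invented.

def cell_conc(x, y):
    return 100.0 if (x < 3 and y < 3) else 5.0   # hotspot vs background (ng/g)

def fish_epc(n_visits, rng):
    visits = [(rng.randrange(10), rng.randrange(10)) for _ in range(n_visits)]
    return sum(cell_conc(x, y) for x, y in visits) / n_visits

rng = random.Random(1)
epcs = [fish_epc(n_visits=52, rng=rng) for _ in range(1000)]  # weekly moves, 1000 fish
print(round(sum(epcs) / len(epcs), 1))   # close to the area-weighted mean of 13.55
```

With enough fish and visits, the mean EPC approaches the area-weighted average concentration, while individual fish that linger near the hotspot carry much higher exposures, which is exactly the population spread FISHRAND is designed to capture.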
Figure 1.5: Nested Monte Carlo approach schematic for bioaccumulation modelling. The schematic shows the following steps:

1. Identify all uncertain and variable parameters (except for spatial coordinates).
2. Uncertainty loop: simulate values for all "uncertainty" parameters.
3. Variability loop: simulate values for all "variability" parameters.
4. Simulate fish location within the site (accounting for hotspot attraction).
5. Simulate local water and sediment concentrations.
6. Expose fish to local water and sediment and to the local biota diet (in equilibrium): invertebrate and/or phytoplankton.
7. Diet loop: simulate a random tissue concentration in prey-fish (sampling from the whole prey-fish population, not the local population only).
8. Expose fish to the prey tissue concentration.

Output: concentration distributions for the fish population.
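The loop structure shown in Figure 1.5, an outer uncertainty loop wrapping an inner variability loop, is a standard two-dimensional Monte Carlo and can be sketched as follows. The distributions are placeholders standing in for the real model inputs, not FISHRAND's actual distributions:

```python
import random

# Two-dimensional (nested) Monte Carlo: the outer loop samples "uncertain"
# parameters (lack of knowledge), the inner loop samples "variable" parameters
# (population heterogeneity). The result is a family of population
# distributions, one per outer draw, from which uncertainty bounds on any
# population percentile follow. Distributions below are invented placeholders.

rng = random.Random(42)

def outer_draw():
    # Uncertain: the true mean sediment concentration (arbitrary units).
    return rng.lognormvariate(1.0, 0.3)

def inner_draw(sed_conc):
    # Variable: an individual fish body burden given that sediment level.
    return sed_conc * rng.lognormvariate(0.0, 0.5)

p95_samples = []
for _ in range(200):                                       # uncertainty loop
    sed = outer_draw()
    burdens = sorted(inner_draw(sed) for _ in range(500))  # variability loop
    p95_samples.append(burdens[int(0.95 * 500)])           # population 95th percentile

p95_samples.sort()
lo, hi = p95_samples[10], p95_samples[189]                 # ~90% uncertainty band
print(round(lo, 2), round(hi, 2))
```

Reporting lo and hi as an uncertainty interval around the population 95th percentile is exactly the kind of output the nested framework produces for each time period and species.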
Model output is presented in various ways, including tabular and graphical forms. The basic forms of the model results are individual percentiles of variability (concentrations across the population) and associated uncertainty for each population percentile, for each time period and species. The user can select individual percentiles for plotting, or can average the data in different ways (e.g., seasonal or annual averages with associated uncertainty). Figure 1.7 presents an example of predicted PCB concentrations for the mean and 95% uncertainty confidence limit (UCL). These concentrations can then be used to develop exposure point concentrations for human health risk assessment, or be incorporated directly as continuous functions for joint probability estimates of ecological hazard (US EPA, 2000a).

Figure 1.6: Schematic of FISHRAND modelling equations and the mathematical connections that link model components. The schematic comprises four panels, as follows.

Spatial submodel input parameters: (1) size of management area, MA, and disposal site, DS (km^2); (2) seasonal abundance (no./hectare, monthly point estimates); (3) attraction factor, AF (fish abundance in management area / fish abundance outside management area); (4) foraging area, FA, or dispersion coefficient (hectare, from tagging surveys); (5) subpopulation habitat size (km^2, based on human consumption); (6) concentrations of organic chemical in sediment, Cs (ng/g dw); (7) log Kow and TOC for the equilibrium partitioning (EqP) model.

Random walk analysis: (1) divide the habitat into 1 m × 1 m cells; (2) assign each site cell a probabilistic distribution for the chemical concentration in sediment, assuming zero concentration in the surrounding area; (3) at specific intervals, each fish (1000 fish used) is modelled foraging in randomly selected areas; (4) the EPC is the average concentration across the cells that a fish hits within its FA during one month. The governing relationships (as reconstructed from the schematic) are:

Prob(MA, this month) = N(MA, this month) / Σ_month=1..12 N(MA, month)
Prob(DS) = AF × DS^2/MA^2
Prob(MA) = 1 − Prob(DS)
FH^2 = intersection of FA^2 and DS^2

Cs and Cw (the concentration in water) are randomly selected from the distribution:

ProbDensity(Cs, Cw | this month) = Prob(FH^2) × PDF(Cs, Cw | DS) + (1 − FH^2/FA^2) × PDF(Cs, Cw | MA), where Prob(FH^2) = AF × FH^2 / [(AF − 1) × FH^2 + FA^2]

Bioaccumulation model input parameters: (1) freely dissolved concentration of organic chemical in water, Cwd (ng/L); (2) lipid content (%) and BSAF (kg/L) of invertebrates, for Cdiet; (3) lipid content (%) and body weight (g) of fish; (4) Gobas model inputs: gill uptake rate k1 (L kg^-1 d^-1), dietary uptake rate kd (d^-1), gill elimination rate k2 (d^-1), faecal egestion rate ke (d^-1), metabolic rate km (d^-1), and growth rate kg (d^-1). The FISHRAND equation is:

Cf = (k1 × Cwd + kd × Cdiet) / (k2 + ke + km + kg)

Human health risk submodel input parameters: (1) concentrations of organic chemicals in each fish species, EPCsp (µg kg^-1), together with the species-specific factors AFsp and Fsp; (2) fish consumption: meal size, MS (g meal^-1), meal frequency, EF (meals year^-1), and exposure duration, ED (years); (3) toxicity factors: RfD (mg kg^-1 day^-1) and CSF (kg day mg^-1); (4) averaging time, AT (days); (5) body weight, BW (kg). The risk calculations are:

Cancer risk = Σ_sp=1..x (EPCsp × AFsp × Fsp) × MS × EF × ED × CSF / (AT × BW × 10^6)
Non-cancer hazard = Σ_sp=1..x (EPCsp × AFsp × Fsp) × MS × EF × ED / (AT × BW × RfD × 10^6)
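Once an exposure point concentration is in hand, the human health risk arithmetic summarised in Figure 1.6 is straightforward. The sketch below follows the form of those equations for a single species; all input values are invented for illustration, although the toxicity factors are of the order commonly cited for PCBs:

```python
# Fish-consumption risk arithmetic following the form of the Figure 1.6
# equations (single species, so the sum over species collapses to one term).
# EPC in ug/kg wet weight, MS in g/meal, EF in meals/year, ED in years,
# AT in days, BW in kg, CSF in (mg/kg/day)^-1, RfD in mg/kg/day; the 1e-6
# factor handles the ug-to-mg and g-to-kg unit conversions.

def daily_intake(epc, ms, ef, ed, at, bw):
    """Chronic daily intake in mg per kg body weight per day."""
    return epc * ms * ef * ed / (at * bw) * 1e-6

def cancer_risk(epc, ms, ef, ed, at, bw, csf):
    return daily_intake(epc, ms, ef, ed, at, bw) * csf

def hazard_quotient(epc, ms, ef, ed, at, bw, rfd):
    return daily_intake(epc, ms, ef, ed, at, bw) / rfd

# One 227 g fish meal per week at 1000 ug/kg PCB for 30 years, 70 kg adult;
# 70-year averaging time for cancer, 30-year for non-cancer (invented values):
risk = cancer_risk(epc=1000, ms=227, ef=52, ed=30, at=70 * 365, bw=70, csf=2.0)
hq = hazard_quotient(epc=1000, ms=227, ef=52, ed=30, at=30 * 365, bw=70, rfd=2e-5)
print(f"{risk:.1e}", round(hq, 1))
```

Risks in the 10^-4 range and hazard quotients well above 1, as this invented scenario produces, are the kind of results that drive fish consumption advisories.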
1.6
USE OF EXPERT ELICITATION IN MODELLING ASSESSMENTS
Expert elicitation is an approach for using expert judgements to provide alternate inputs to modelling systems. In general, the expert elicitation process asks experts to synthesise the available body of evidence for some domain of knowledge with experience and judgement to derive probabilities that describe their confidence in that knowledge. This approach, which uses structured interviews to quantify expert beliefs (typically probabilistic), provides processes that may be useful to address two of the major challenges in environmental modelling: estimating uncertain quantities or events, and characterising uncertainty of quantities or events for quantitative assessments. Since its origin with the decision-analytic community in the 1950s, expert elicitation has been used for many different types of application, and has seen a recent increase in usage. The foundation of expert elicitation is the idea that individuals’ beliefs about the likelihood of something can be expressed meaningfully in terms of probabilities, and can be used by modellers and analysts as part of quantitative analyses. Consequently, in a situation where data to support a model are unavailable (because of cost or time constraints), or where it is impossible to corroborate the output of a model because it is forecasting unique future events, expert elicitation provides a means to obtain quantities that would otherwise be unavailable, or would be impossible to estimate by other methods. In the field of environmental modelling
Figure 1.7: Predicted PCB concentrations in the Hudson River using FISHRAND for the mean and 95% uncertainty confidence limit (UCL). [Time series, Jan 07 to Jan 08; y-axis, PCB concentration (ppm), ranging from 0 to 3.5 × 10^-7.]
such needs are frequent. Hence recent years have seen an increased interest in the use of expert elicitation as part of environmental modelling systems. The potential benefits of using expert elicitation have been reflected in recent statements by the National Academy of Sciences (NAS). In its 2002 report Estimating the Public Health Benefits of Proposed Air Pollution Regulations, the NAS recommended that the US EPA:

should begin to move the assessment of uncertainties from its ancillary analyses into its primary analyses by conducting probabilistic, multiple-source uncertainty analyses. This shift will require specifications of probability distributions for major sources of uncertainty. These distributions should be based on available data and expert judgement.
In addition, the US Office of Management and Budget (OMB) has recognised the utility of expert elicitation methods, and encourages their use in probabilistic uncertainty analyses that support regulatory decisions. In its Circular A-4 (2003), OMB stated:

In formal probabilistic assessments, expert solicitation is a useful way to fill key gaps in your ability to assess uncertainty. In general, experts can be used to quantify the probability distributions of key parameters and relationships. These solicitations, combined with other sources of data, can be combined in Monte Carlo simulations to derive a probability distribution of benefits and costs.
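The mechanics the OMB passage describes, sampling expert-elicited distributions and propagating them through a benefits calculation by Monte Carlo, can be illustrated with a stripped-down sketch. The triangular distribution standing in for an elicited concentration-response slope, and all other values, are invented:

```python
import random
import statistics

# Monte Carlo propagation of an expert-elicited parameter through a simple
# benefits calculation. The triangular distribution stands in for one
# expert's elicited concentration-response slope; the PM2.5 reduction and
# baseline mortality figures are invented illustration values.

rng = random.Random(7)

def one_draw():
    slope = rng.triangular(0.001, 0.015, 0.006)   # fractional mortality reduction per ug/m3
    delta_pm = rng.uniform(1.0, 3.0)              # modelled PM2.5 reduction (ug/m3)
    baseline_deaths = 50_000                      # deaths/year in the population
    return baseline_deaths * slope * delta_pm     # avoided deaths/year

draws = [one_draw() for _ in range(10_000)]
med = statistics.median(draws)
p95 = statistics.quantiles(draws, n=20)[18]       # 95th percentile cut point
print(round(med), round(p95))
```

The output is not a single benefits number but a distribution, from which a median and an upper percentile can be reported side by side, which is precisely the probabilistic presentation OMB asks for.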
At the US EPA, experience with expert elicitation to support the development of air quality standards goes back to the late 1970s (Feagans and Biller, 1981). The Agency anticipates that expert elicitation will continue to be an important approach for characterising uncertainty and filling data gaps. Because it recognises the challenges of applying expert elicitation findings to regulatory and other policy decisions, the US EPA has established an Expert Elicitation Task Force. The purpose of this Task Force has been to "initiate a dialogue within the Agency about the conduct and use of expert elicitation and then to facilitate future development and appropriate use of expert elicitation methods". In advancing this objective, the US EPA's Expert Elicitation Task Force has written a Draft Expert Elicitation Task Force White Paper that discusses and presents issues that are pertinent to expert elicitation, including a definition of expert elicitation, when to consider it, how it should be conducted, and how the results should be presented and used (US EPA, 2009b).
1.6.1
Example application for environmental fate and transport
Trichloroethene (trichloroethylene, TCE) is a common solvent, a carcinogen, and a frequent groundwater pollutant. The fate and transport of TCE in groundwater is highly dependent on the occurrence of natural biodegradation. When it is active, naturally occurring biodegradation can stop or limit the spread of TCE in groundwater. In the 1990s, the phenomena governing this biodegradation were not adequately understood, including the identification of the active bacteria, the optimal environmental conditions for its occurrence, and the presence of degradation by-products. During this time there were several attempts by the US EPA and other researchers to simulate this biodegradation process as it was then understood, estimating when adequate natural biodegradation was occurring, and when it might be enhanced by the addition of nutrients to support bacterial growth (US EPA, 1998).

To advance these existing approaches, Stiber et al. (1999) built a model for this biodegradation process, and used expert elicitation to provide probabilistic relationships for many of its uncertain steps. This research obtained the beliefs of 22 experts through an elicitation protocol that asked the experts to estimate 94 separate probabilities. It showed that these probabilities could be used in a phenomenological model to make predictions about the occurrence of natural biodegradation at real locations of TCE-contaminated groundwater. More generally, the authors (Stiber et al., 1999) concluded that the use of expert knowledge is desirable because: (1) by acquiring expert knowledge, non-experts can make better-quality decisions; (2) by breaking up the decision process into discrete components, experts can systematically specify and integrate their knowledge; and (3) by combining the discrete elements of a decision and analysing the outcome, it is possible to identify which components are most critical to the final evaluation, identify significant differences among experts, and determine the value of additional information.
1.6.2 Example application at the US EPA to quantify uncertainty
As part of the development of its NAAQS for fine particles (PM2.5) in 2006, the US EPA conducted an RIA to inform the public about the potential costs and benefits of implementing these proposed air quality standards. This RIA used expert elicitation of non-EPA experts to better characterise the uncertainty associated with reductions in exposure to PM2.5 pollution. In its final report for the US EPA, Industrial Economics, Inc. (Industrial Economics, 2006) described why expert elicitation was used to quantify the health benefits of reductions in PM2.5 concentrations:

The effect of changes in ambient fine particulate matter (PM2.5) levels on mortality constitutes a key component of the EPA's approach for assessing potential health benefits associated with air quality regulations targeting emissions of PM2.5 and its precursors . . . Because it [avoided premature deaths] is such a large component of benefits, obtaining a good characterisation of uncertainties regarding the mortality effects of changes in PM2.5 exposure could capture the largest portion of uncertainty characterisation of an entire benefit analysis (aside from unquantified or unmeasurable benefit endpoints).
Industrial Economics, Inc. used "carefully structured interviews to elicit from each expert his best estimate of the true value for an outcome or variable of interest as well as his uncertainty about the true value". For each of the 12 experts who participated in this study, Industrial Economics, Inc. developed "a subjective probabilistic distribution of values, reflecting each expert's interpretation of theory and empirical evidence from relevant disciplines and ultimately his beliefs about what is known and not known about the subject of the study". These findings were included in the US EPA's RIA to better characterise the uncertainty in the estimated health benefits.
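The idea of a per-expert subjective distribution, pooled into one overall uncertainty characterisation, can be sketched as a Monte Carlo mixture. Everything below is an invented placeholder: the three "experts", their normal distributions, and the coefficient values are illustrative only and are not the distributions elicited for the PM2.5 analysis.

```python
# Pool expert-specific subjective distributions by Monte Carlo: each draw
# first picks an expert at random, then samples that expert's distribution.
import random

random.seed(1)

# Hypothetical per-expert beliefs about a concentration-response coefficient
# (% mortality change per microgram/m3 PM2.5), as (mean, sd) of a normal.
experts = [(1.0, 0.3), (0.6, 0.2), (1.4, 0.5)]

def pooled_samples(experts, n=10000):
    """Sample the equal-weight mixture of the experts' distributions."""
    draws = []
    for _ in range(n):
        mu, sd = random.choice(experts)   # equal weight on each expert
        draws.append(random.gauss(mu, sd))
    return sorted(draws)

samples = pooled_samples(experts)
median = samples[len(samples) // 2]
p05 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
print(f"median {median:.2f}, 90% interval ({p05:.2f}, {p95:.2f})")
```

Percentiles of the pooled sample are the kind of quantity an RIA would report as an uncertainty interval around a central benefit estimate.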
1.7 SUMMARY: REGULATORY APPLICATIONS OF FATE AND BIOACCUMULATION MODELS
The US EPA’s Quality Assurance Plan for modelling applications recommends a tiered approach to model evaluation activities, depending on the importance of modelling results for informing a particular decision and the cost/significance of the environmental decision to be made (US EPA, 2002). For models that are used to inform significant decisions (as defined by the OMB), a variety of practices help ensure the rigour of such applications if subjected to legal challenges. For example, McGarity and Wagner (2003) noted that: an EPA modeling exercise . . . should not suffer reversal . . . if the Agency is careful to describe the model in some detail; identify the assumptions upon which the model relies; explain why those assumptions are valid in the particular context in which it is applying the model and specifically request comments on the validity of the assumptions and their use in the modeling exercise.
To assist in communication among stakeholders, Beck (2002) recommends that model evaluation in the most general sense should answer several straightforward questions:

• Is the model based on generally accepted science and computational methods?
• Does it work, that is, does it fulfil its designated task or serve its intended purpose?
• Does its behaviour approximate that observed in the system being modelled?
Developing confidence in environmental modelling for policy applications requires effective communication of the underlying science and its inherent uncertainties. The US EPA uses a wide variety of fate and bioaccumulation models to inform regulatory actions. For these models to withstand the judicial review process, both the models themselves and their site-specific applications must be transparent, well documented and peer-reviewed. The US EPA has released a "best practices" document for the development, evaluation and application of environmental models (US EPA, 2009a). One of the major recommendations of this guidance document is a philosophical move away from "validating and verifying" models towards a framework for life cycle model evaluation (i.e., continued updating and testing of a model throughout all applications). Environmental models can never be "verified", in the sense of being established as true, because they are by definition imperfect representations of the real world (Oreskes et al., 1994). Life cycle model evaluation
Table 1.2: Example of differences in perspective between scientists and managers when considering the use of climate information.

Identifying a critical issue
  Scientist's perspective: based on a broad understanding of the nature of water management
  Water manager's perspective: based on experience of a particular system

Time frame
  Scientist's perspective: variable
  Water manager's perspective: immediate (operations); long-term (infrastructure)

Spatial resolution
  Scientist's perspective: defined by data availability, funding, modelling capabilities
  Water manager's perspective: defined by institutional boundaries, authorities

Goals
  Scientist's perspective: prediction; explanation; understanding of natural system
  Water manager's perspective: optimisation of multiple conditions and minimisation of risk

Basis for decisions
  Scientist's perspective: generalising multiple facts and observations; use of scientific procedures, methods; availability of research funding; disciplinary perspective
  Water manager's perspective: tradition; procedure; professional judgement; training; economics; politics; job risks; formal and informal networks

Expectation
  Scientist's perspective: understanding; prediction; ongoing improvement (project never actually complete); statistical significance of results; innovations in methods/theory
  Water manager's perspective: accuracy of information; appropriate methodology; precision; save money, time; protect the public; protect their job, agenda or institution

Product
  Scientist's perspective: complex characteristics; scientifically defensible
  Water manager's perspective: as simple as possible without losing accuracy

Frame
  Scientist's perspective: physical (atmospheric, hydrologic, economic, etc.) and societal conditions as drivers; dependent on scientific discipline
  Water manager's perspective: safety, well-being; profit; consistency with institutional culture, policy, etc.

Nature of use
  Scientist's perspective: conceptual
  Water manager's perspective: applied

Source: From Jacobs, K. and Pulwarty, R. (2003). Water resource management: science, planning, and decision-making. In Lawford, R.G., Fort, D.D., Hartmann, H.C. and Eden, S. (Eds), Water: Science, Policy, and Management. American Geophysical Union, Washington, DC, pp. 177–204.
approaches recognise the pervasive nature of such uncertainties. Thus a key component of the use of fate and bioaccumulation models in regulatory applications is appropriately conveying their uncertainties in a context that is relevant to and understandable by decision-makers. Table 1.2 summarises some of the differences in perspective between environmental managers and modellers (scientific staff) when dealing with environmental information. Recognising these differences is essential for effective communication, and therefore for the success of modelling applications in informing management decisions and enhancing environmental protection.
ENDNOTES
1. Criteria air pollutants include: ozone (O3), fine (PM2.5) and coarse (PM10) particulate matter, lead (Pb), sulfur dioxide (SO2), carbon monoxide (CO), and nitrogen oxides (NOx).
2. For more information see: http://www.epa.gov/scram001/guidance_permit.htm.
3. All models, a history of their past applications and extensive documentation are available online from the US EPA at http://www.epa.gov/scram001/dispersion_prefrec.htm.
4. http://www.blueskyrains.org
5. Most of these models are publicly available, and can be downloaded from http://www.epa.gov/ceampubl/.
6. http://www.epa.gov/athens/wwqtsc/
7. See: http://www.epa.gov/athens/wwqtsc.
8. http://www.epa.gov/region4/mercury/TMDLs.htm
9. Available at: http://www.rem.sfu.ca/toxicology/models/models.htm
10. http://www.epa.gov/ceampubl/
REFERENCES

Ambrose, R.B. (1987). Modeling volatile organics in the Delaware Estuary. Journal of Environmental Engineering. 113: 703–721.
Ambrose, R.B., Wool, T., Connolly, J.P. and Schanz, R.W. (1988). WASP4, A Hydrodynamic and Water Quality Model: Model Theory, User's Manual, and Programmer's Guide. EPA/600/3-87-039. US Environmental Protection Agency, Athens, GA.
Arnot, J. and Gobas, F. (2004). A food web bioaccumulation model for organic chemicals in aquatic ecosystems. Environmental Toxicology and Chemistry. 23: 2343–2355.
Arnot, J.A. and Gobas, F. (2006). A review of bioconcentration factor (BCF) and bioaccumulation factor (BAF) assessments for organic chemicals in aquatic organisms. Environmental Reviews. 14: 257–297.
Barber, M. (2003). A review and comparison of models for predicting dynamic chemical bioconcentration in fish. Environmental Toxicology and Chemistry. 22: 1963–1992.
Barber, M.C. (2001). Bioaccumulation and Aquatic System Simulator (BASS) User's Manual Beta Test Version 2.1. EPA/600/R-01/035. US Environmental Protection Agency, Office of Research and Development, Athens, GA.
Barber, M.C. (2006). Bioaccumulation and Aquatic System Simulator (BASS) User's Manual Version 2.2. EPA/600/R-01/035 update 2.2. US Environmental Protection Agency, National Exposure Research Laboratory, Ecosystems Research Division, Athens, GA.
Barnwell, T.O., Brown, L.C. and Whittemore, R.C. (2004). Importance of field data in stream water quality modeling using QUAL2E-UNCAS. Journal of Environmental Engineering. 130: 643–647.
Beck, B. (2002). Model evaluation and performance. In: El-Shaarawi, A.H. and Piegorsch, W.W. (Eds). Encyclopedia of Environmetrics. John Wiley & Sons, Chichester, pp. 1275–1279.
Bingli, L., Shengbiao, H., Min, Q., Tianyun, L. and Zijian, W. (2008). Prediction of the environmental fate and aquatic ecological impact of nitrobenzene in the Songhua River using the modified AQUATOX model. Journal of Environmental Sciences – China. 20: 769–777.
Binkowski, F.S. and Roselle, S.J. (2003). Models-3 Community Multiscale Air Quality (CMAQ) model aerosol component 1. Model description. Journal of Geophysical Research – Atmospheres. 108: 4183.
Brown, L. and Barnwell, T. (1987). Computer Program Documentation for the Enhanced Stream Water Quality Model QUAL2E-UNCAS. Edited by E.R. Laboratory, US Environmental Protection Agency, Athens, GA.
Brown, M.P., Werner, M.B., Sloan, R.J. and Simpson, K.W. (1985). Polychlorinated biphenyls in the Hudson River. Environmental Science and Technology. 19: 656–667.
Bullock, R.O. and Brehme, K. (2002). Atmospheric mercury simulation using the CMAQ model: formulation description and analysis of wet deposition results. Atmospheric Environment. 36: 2135–2146.
Burns, L.A. (2004). Exposure Analysis Modeling System (EXAMS): User Manual and System Documentation. EPA/600/R-00/081, September 2000, Revision G (May 2004), 201 pp. Office of Research and Development, US Environmental Protection Agency, Athens, GA.
Burns, L.A., Cline, D.M. and Lassiter, R.R. (1982). The Exposure Analysis Modeling System. EPA-600/3-82-023. US Environmental Protection Agency, Athens, GA.
Byun, D.W. and Ching, J.K.S. (1999). Science Algorithms of the EPA Models-3 Community Multiscale Air Quality (CMAQ) Modeling System. EPA-600/R-99/030. Edited by Office of Air and Radiation, US Environmental Protection Agency, Washington, DC.
Caruso, B.S. (2003). Water quality simulation for planning restoration of a mined watershed. Water, Air, and Soil Pollution. 150: 359–382.
Chapra, S.C., Pelletier, G.J. and Tao, H. (2007). QUAL2K: A Modeling Framework for Simulating River and Stream Water Quality, Version 2.07: Documentation and Users Manual. Civil and Environmental Engineering Department, Tufts University, Medford, MA.
Charnley, G. and Elliott, E.D. (2002). Risk versus precaution: environmental law and public health protection. The Environmental Law Reporter: News and Analysis. 33: 10363–10366.
Cohen, M., Draxler, R.R., Artz, R., Commoner, B., Bartlett, P., Cooney, P., Couchot, K., Dickar, A., Eisl, H., Hill, C., Quigley, J., Rosenthal, J.E., Niemi, D., Ratté, D., Deslauriers, M., Laurin, R., Mathewson-Brake, L. and McDonald, J. (2002). Modeling the atmospheric transport and deposition of PCDD/F to the Great Lakes. Environmental Science and Technology. 36: 4831–4845.
Cohen, M., Artz, R., Draxler, R., Miller, P., Poissant, L., Niemi, D., Ratté, D., Deslauriers, M., Duval, R., Laurin, R., Slotnick, J., Nettesheim, T. and McDonald, J. (2004). Modeling the atmospheric transport and deposition of mercury to the Great Lakes. Environmental Research. 95: 247–265.
Connolly, J. and Thomann, R.V. (1985). WASTOX, A Framework for Modeling the Fate of Toxic Chemicals in Aquatic Environments. Part 2: Food Chain. US Environmental Protection Agency, Gulf Breeze, FL and Duluth, MN.
Connolly, J.P. and Winfield, R. (1984). A User's Guide for WASTOX, a Framework for Modeling the Fate of Toxic Chemicals in Aquatic Environments. Part 1: Exposure Concentration. EPA-600/3-84-077. US Environmental Protection Agency, Gulf Breeze, FL.
Cooter, E.J. and Hutzell, W.T. (2002). A regional atmospheric fate and transport model for atrazine. 1. Development and implementation. Environmental Science and Technology. 36: 4091–4098.
Cullen, A.C. (1995). The sensitivity of probabilistic risk assessment results to alternative model structures: a case study of municipal waste incineration. Journal of the Air and Waste Management Association. 45: 538–546.
DePinto, J.V., Freedman, P.L., Dilks, D.M. and Larson, W.M. (2004). Models quantify the total maximum daily load process. Journal of Environmental Engineering. 130: 703–713.
Di Toro, D.M., Fitzpatrick, J.J. and Thomann, R.V. (1983). Water Quality Analysis Simulation Program (WASP) and Model Verification Program (MVP): Documentation. Hydroscience, Inc., Westwood, NY, for the US EPA, Duluth, MN, Contract No. 68-01-3872.
Draxler, R.R. and Hess, G.D. (1998). An overview of the HYSPLIT_4 modelling system for trajectories, dispersion and deposition. Australian Meteorological Magazine. 47: 295–308.
Feagans, T.B. and Biller, W.F. (1981). Risk assessment: describing the protection provided by ambient air quality standards. The Environmental Professional. 3: 235–247.
Gobas, F.A.P.C. (1993). A model for predicting bioaccumulation of hydrophobic organic chemicals in aquatic food-webs: application to Lake Ontario. Ecological Modelling. 69: 1–17.
Gobas, F.A.P.C., Z'Graggen, M.N. and Zhang, X. (1995). Time response of the Lake Ontario ecosystem to virtual elimination of PCBs. Environmental Science and Technology. 29: 2038–2046.
Hahn, R.W., Burnett, J., Chan, Y.H., Mader, E.A. and Moyle, P.R. (2000). Assessing regulatory impact analyses: the failure of agencies to comply with Executive Order 12,866. Harvard Journal of Law and Public Policy. 23: 859–885.
Hamrick, J.M. (1996). User's Manual for the Environmental Fluid Dynamics Computer Code. Special Report No. 331 in Applied Marine Science and Ocean Engineering. The College of William and Mary, School of Marine Science, Virginia Institute of Marine Science, Gloucester Point, VA.
Han, K.M., Song, C.H., Ahn, H.J., Lee, C.K., Richter, A., Burrows, J.P., Kim, J.Y., Woo, J.H. and Hong, J.H. (2009). Investigation of NOx emissions and NOx-related chemistry in East Asia using CMAQ-predicted and GOME-derived NO2 columns. Atmospheric Chemistry and Physics. 9: 1017–1036.
Industrial Economics (2006). Expanded Expert Judgment Assessment of the Concentration–Response Relationship Between PM2.5 Exposure and Mortality. Final Report, September 21, 2006. http://www.epa.gov/ttn/ecas/regdata/Uncertainty/pm_ee_report.pdf
Jacob, D.J. and Winner, D.A. (2009). Effect of climate change on air quality. Atmospheric Environment. 43: 51–63.
Jacobs, K. and Pulwarty, R. (2003). Water resource management: science, planning, and decision-making. In: Lawford, R.G., Fort, D.D., Hartmann, H.C. and Eden, S. (Eds), Water: Science, Policy, and Management. American Geophysical Union, Washington, DC, pp. 177–204.
James, R.T., Martin, J., Wool, T. and Wang, P.F. (1997). A sediment resuspension and water quality model of Lake Okeechobee. Journal of the American Water Resources Association. 33: 661–680.
Kelly, B.C. and Gobas, F.A.P.C. (2001). Bioaccumulation of persistent organic pollutants in lichen–caribou–wolf food chains of Canada's central and western Arctic. Environmental Science and Technology. 35: 325–334.
Kelly, B.C., Ikonomou, M.G., Blair, J.D., Morin, A.E. and Gobas, F. (2007). Food web-specific biomagnification of persistent organic pollutants. Science. 317: 236–239.
Kelly, E.J. and Campbell, K. (2000). Separating variability and uncertainty in environmental risk assessment: making choices. Human and Ecological Risk Assessment. 6: 1–13.
Knightes, C. (2008). Development and test application of SERAFM: a screening-level mercury fate model and tool for evaluating wildlife exposure risk for surface waters with mercury-contaminated sediments. Environmental Software and Modelling. 23: 495–510.
Knightes, C., Sunderland, E., Barber, M., Johnston, J. and Ambrose, R.J. (2009). Application of ecosystem scale fate and bioaccumulation models to predict fish mercury response times to changes in atmospheric deposition. Environmental Toxicology and Chemistry. 28: 881–893.
Kopacz, M., Jacob, D.J., Henze, D.K., Heald, C.L., Streets, D.G. and Zhang, Q. (2009). Comparison of adjoint and analytical Bayesian inversion methods for constraining Asian sources of carbon monoxide using satellite (MOPITT) measurements of CO columns. Journal of Geophysical Research – Atmospheres. 114: D04305, doi:10.1029/2007JD009264.
Limburg, K.E. (1984). Environmental impact assessment of the PCB problem: a review. Northeastern Environmental Science. 3: 122–136.
Lin, C.J. and Pehkonen, S.O. (1999). The chemistry of atmospheric mercury: a review. Atmospheric Environment. 33: 2067–2079.
Linkov, I., Burmistrov, D., Cura, J. and Bridges, T.S. (2002). Risk-based management of contaminated sediments: consideration of spatial and temporal patterns in exposure modeling. Environmental Science and Technology. 36: 238–246.
Mackay, D. and Fraser, A. (2000). Bioaccumulation of persistent organic chemicals: mechanisms and models. Environmental Pollution. 110: 375–391.
Mahaffey, K.R., Clickner, R.P. and Bodurow, C.C. (2004). Blood organic mercury and dietary mercury intake: National Health and Nutrition Examination Survey, 1999 and 2000. Environmental Health Perspectives. 112: 562–570.
McGarity, T.O. and Wagner, W.E. (2003). Legal aspects of the regulatory use of environmental modeling. The Environmental Law Reporter: News and Analysis. 33: 10751–10774.
Millet, D.B., Jacob, D.J., Custer, T.G., de Gouw, J.A., Goldstein, A.H., Karl, T., Singh, H.B., Sive, B.C., Talbot, R.W., Warneke, C. and Williams, J. (2008). New constraints on terrestrial and oceanic sources of atmospheric methanol. Atmospheric Chemistry and Physics. 8: 6887–6905.
Moore, G.E. (1965). Cramming more components onto integrated circuits. Electronics. 38: 114–117.
Morgan, M.G. and Henrion, M. (1990). Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge University Press, New York.
Nolte, C.G., Bhave, P.V., Arnold, J.R., Dennis, R.L., Zhang, K.M. and Wexler, A.S. (2008). Modeling urban and regional aerosols: application of the CMAQ-UCD aerosol model to Tampa, a coastal urban site. Atmospheric Environment. 42: 3179–3191.
NRC (2007). Models in Environmental Regulatory Decision Making. National Academies Press, Washington, DC.
O'Connor, D.J., Mueller, J.A. and Farley, K.J. (1983). Distribution of kepone in the James River Estuary. Journal of Environmental Engineering. 109: 396–413.
Oreskes, N., Shrader-Frechette, K. and Belitz, K. (1994). Verification, validation, and confirmation of numerical models in the earth sciences. Science. 263: 641–646.
Palmer, P.I., Suntharalingam, P., Jones, D.B.A., Jacob, D.J., Streets, D.G., Fu, Q., Vay, S.A. and Sachse, G.W. (2006). Using CO2:CO correlations to improve inverse analyses of carbon fluxes. Journal of Geophysical Research – Atmospheres. 111: 11.
Park, R.A., Clough, J.S. and Wellman, M.C. (2008). AQUATOX: modeling environmental fate and ecological effects in aquatic ecosystems. Ecological Modelling. 213: 1–15.
Rashleigh, B. (2003). Application of AQUATOX, a process-based model for ecological assessment, to Contentnea Creek in North Carolina. Journal of Freshwater Ecology. 18: 515–522.
Rashleigh, B., Barber, M.C. and Walters, D.M. (2009). Foodweb modeling for polychlorinated biphenyls (PCBs) in the Twelvemile Creek arm of Lake Hartwell, South Carolina, USA. Ecological Modelling. 220: 254–264.
Sahu, S.K., Yip, S. and Holland, D.M. (2009). Improved space–time forecasting of next day ozone concentrations in the eastern US. Atmospheric Environment. 43: 494–501.
Seigneur, C., Vijayaraghavan, K., Lohman, K., Karamchandani, P. and Scott, C. (2004). Global source attribution for mercury deposition in the United States. Environmental Science and Technology. 38: 555–569.
Selin, N., Jacob, D., Yantosca, R., Strode, S., Jaegle, L. and Sunderland, E. (2008). Global 3-D land–ocean–atmosphere model for mercury: present-day versus preindustrial cycles and anthropogenic enrichment factors for deposition. Global Biogeochemical Cycles. 22: GB2011.
Selin, N.E., Jacob, D.J., Park, R.J., Yantosca, R.M., Strode, S., Jaegle, L. and Jaffe, D. (2007). Chemical cycling and deposition of atmospheric mercury: global constraints from observations. Journal of Geophysical Research – Atmospheres. 112: D02308.
Sofaer, A.D. (1976). Interim Opinion and Order in the Matter of Alleged Violations of the Environmental Conservation Law of the State of New York by General Electric Co., Respondent, File No. 2833, February 9, 1976. NYSDEC, Albany, New York.
Stiber, N.A., Pantazidou, M. and Small, M.J. (1999). Expert system methodology for evaluating reductive dechlorination at TCE sites. Environmental Science and Technology. 33: 3012–3020.
Sunderland, E.M. (2007). Mercury exposure from domestic and imported estuarine and marine fish in the US seafood market. Environmental Health Perspectives. 115: 235–242.
Sunderland, E.M. and Mason, R. (2007). Human impacts on open ocean mercury concentrations. Global Biogeochemical Cycles. 21: GB4022.
Suntharalingam, P., Sarmiento, J.L. and Toggweiler, J.R. (2000). Global significance of nitrous-oxide production and transport from oceanic low-oxygen zones: a modeling study. Global Biogeochemical Cycles. 14: 1353–1370.
Suntharalingam, P., Spivakovsky, C.M., Logan, J.A. and McElroy, M.B. (2003). Estimating the distribution of terrestrial CO2 sources and sinks from atmospheric measurements: sensitivity to configuration of the observation network. Journal of Geophysical Research – Atmospheres. 108: 4452.
Suntharalingam, P., Randerson, J.T., Krakauer, N., Logan, J.A. and Jacob, D.J. (2005). Influence of reduced carbon emissions and oxidation on the distribution of atmospheric CO2: implications for inversion analyses. Global Biogeochemical Cycles. 19: GB4003.
Suntharalingam, P., Kettle, A.J., Montzka, S.M. and Jacob, D.J. (2008). Global 3-D model analysis of the seasonal cycle of atmospheric carbonyl sulfide: implications for vegetation uptake. Geophysical Research Letters. 35: L19801.
Suter, G.W.I. (2007). Ecological Risk Assessment, 2nd edn. CRC Press, Boca Raton, FL.
Thompson, K.M. and Graham, J.D. (1996). Going beyond the single number: using probabilistic risk assessment to improve risk management. Human and Ecological Risk Assessment. 2: 1008–1034.
Tufford, D.L. and McKellar, H.N. (1999). Spatial and temporal hydrodynamic and water quality modeling analysis of a large reservoir on the South Carolina (USA) coastal plain. Ecological Modelling. 114: 137–173.
United States Court of Appeals (2008a). State of New Jersey, et al. v. Environmental Protection Agency. Washington, DC.
United States Court of Appeals (2008b). State of North Carolina v. Environmental Protection Agency. District of Columbia Circuit, Washington, DC.
US EPA (1989). Risk Assessment Guidance for Superfund, Volume 1 – Human Health Evaluation Manual. Part A: Final. Office of Emergency and Remedial Response, US Environmental Protection Agency, Washington, DC.
US EPA (1991a). Risk Assessment Guidance for Superfund: Volume 1 – Human Health Evaluation Manual. Part C: Risk Evaluation of Remedial Alternatives. PB92-963334, Publication 9285.17-01C. US Environmental Protection Agency, Washington, DC.
US EPA (1991b). Risk Assessment Guidance for Superfund, Volume 1 – Human Health Evaluation Manual. Part B: Development of Risk-Based Preliminary Remediation Goals. Office of Emergency and Remedial Response, US Environmental Protection Agency, Washington, DC.
US EPA (1995). Great Lakes Water Quality Initiative Technical Support Document for the Procedure to Determine Bioaccumulation Factors. Office of Water, US Environmental Protection Agency, Washington, DC.
US EPA (1996). Risk Assessment Guidance for Superfund: Human Health Evaluation Manual. Part D: Standardized Planning, Reporting and Review of Superfund Risk Assessments, Final. Office of Solid Waste and Emergency Response, US Environmental Protection Agency, Washington, DC.
US EPA (1997). Ecological Risk Assessment Guidance for Superfund: Process for Designing and Conducting Ecological Risk Assessments (interim final). Environmental Response Team, US Environmental Protection Agency, Edison, NJ.
US EPA (1998). Technical Protocol for Evaluating Natural Attenuation of Chlorinated Solvents in Groundwater. US Environmental Protection Agency, Washington, DC.
US EPA (1999). Draft Guidance for Water Quality-based Decisions: The TMDL Process, 2nd edn. Office of Water, US Environmental Protection Agency, Washington, DC.
US EPA (2000a). Further Site Characterization and Analysis, Revised Baseline Ecological Risk Assessment, Hudson River PCBs Reassessment RI/FS. Prepared for the US EPA Region 2 and USACE, Kansas City District, by K. von Stackelberg, S.B. Kane Driscoll and C. Menzie at Menzie-Cura & Associates, Inc., and TAMS Consultants, Inc., December 2000. Available from www.epa.gov/hudson (under "reassessment reports"). Region 2, US Environmental Protection Agency, New York, NY.
US EPA (2000b). AQUATOX for Windows: A Modular Fate and Effects Model for Aquatic Ecosystems. Release 1, Volume 3: Model Validation Reports Addendum. Office of Water, US Environmental Protection Agency, Washington, DC.
US EPA (2000c). Risk Assessment Guidance for Superfund, Volume I – Human Health Evaluation Manual. Part E: Supplemental Guidance for Dermal Risk Assessment, Interim Guidance. Office of Emergency and Remedial Response, US Environmental Protection Agency, Washington, DC.
US EPA (2001a). AQUATOX for Windows: A Modular Fate and Effects Model for Aquatic Ecosystems. Release 1.1, Volume 3: Model Validation Reports Addendum. Office of Water, US Environmental Protection Agency, Washington, DC.
US EPA (2001b). Total Maximum Daily Load (TMDL) for Total Mercury in Fish Tissue Residue in the Middle and Lower Savannah River Watershed. US Environmental Protection Agency, Athens, GA.
US EPA (2002). Quality Assurance Project Plans for Modeling. Office of Environmental Information, US Environmental Protection Agency, Washington, DC.
US EPA (2004a). Total Maximum Daily Load (TMDL) Development for Total Mercury in Fish Tissue Residue in the Canoochee River (Canoochee Watershed). US EPA Region 4, Athens, GA.
US EPA (2004b). Total Maximum Daily Load (TMDL) Development for Total Mercury Fish Tissue in Brier Creek, GA. US Environmental Protection Agency, Region IV, Athens, GA.
US EPA (2005a). Technical Support Document for the Final Clean Air Mercury Rule: Air Quality Modeling, 24 pp. Office of Air Quality Planning and Standards, US Environmental Protection Agency, Research Triangle Park, NC.
US EPA (2005b). Technical Support Document for the Final Clean Air Interstate Rule, 285 pp. Office of Air Quality Planning and Standards, US Environmental Protection Agency, Research Triangle Park, NC.
US EPA (2005c). Regulatory Impact Analysis of the Clean Air Mercury Rule, Final Report. Office of Air Quality Planning and Standards, US Environmental Protection Agency, Research Triangle Park, NC.
US EPA (2008). Methodology for Deriving Ambient Water Quality Criteria for the Protection of Human Health, 369 pp. Office of Water, Office of Science and Technology, US Environmental Protection Agency, Washington, DC.
US EPA (2009a). Guidance on the Development, Evaluation and Application of Environmental Models. Office of the Science Advisor, Council for Regulatory Environmental Modeling, US Environmental Protection Agency, Washington, DC.
US EPA (2009b). Draft Expert Elicitation White Paper, External Review Draft. Office of the Science Advisor, US Environmental Protection Agency, Washington, DC.
von Stackelberg, K., Burmistrov, D., Linkov, I., Cura, J. and Bridges, T.S. (2002a). The use of spatial modeling in an aquatic food web to estimate exposure and risk. Science of the Total Environment. 288: 97–110.
von Stackelberg, K., Burmistrov, D., Vorhess, D.J., Bridges, T.S. and Linkov, I. (2002b). Importance of uncertainty and variability to predicted risks from trophic transfer of PCBs in dredged sediments. Risk Analysis. 22: 499–512.
Wang, P.F., Martin, J. and Morrison, G. (1999). Water quality and eutrophication in Tampa Bay, Florida. Estuarine Coastal and Shelf Science. 49: 1–20.
Wool, T.A., Davie, S.R. and Rodriquez, H.N. (2003). Development of 3-D hydrodynamic and water quality models to support total maximum daily load decision process for the Neuse River Estuary, North Carolina. Journal of Water Resource Planning and Management. 129: 295–306.
Wu, S.L., Mickley, L.J., Jacob, D.J., Rind, D. and Streets, D.G. (2008). Effects of 2000–2050 changes in climate and emissions on global tropospheric ozone and the policy-relevant background surface ozone in the United States. Journal of Geophysical Research – Atmospheres. 113(D18).
Yang, C.P., Kuo, J.T., Lung, W.S., Lai, J.S. and Wu, J.T. (2007). Water quality and ecosystem modeling of tidal wetlands. Journal of Environmental Engineering. 133: 711–721.
Yu, S.C., Mathur, R., Kang, D.W., Schere, K. and Tong, D. (2009). A study of the ozone formation by ensemble back trajectory-process analysis using the Eta-CMAQ forecast model over the northeastern US during the 2004 ICARTT period. Atmospheric Environment. 43: 355–363.
Zhang, X., Rygwelski, K.R., Rossmann, R., Pauer, J.J. and Kreis, R.G. (2008). Model construct and calibration of an integrated water quality model (LM2-Toxic) for the Lake Michigan Mass Balance Project. Ecological Modelling. 219: 92–106.
CHAPTER 2

Contaminated Land Decision Support: A Review of Concepts, Methods and Systems

Aisha Bello-Dambatta and Akbar A. Javadi
2.1 INTRODUCTION
Land contamination, caused by industrial and commercial activities, is a major environmental and infrastructural problem in industrialised countries (Figures 2.1, 2.2 – see colour section). Contaminated lands pose potential risks to human health and the wider environment, and as a result there has been a concerted effort among policymakers, landowners, practitioners and other stakeholders to clean up these lands, to minimise or eliminate the risks posed, and to redevelop the lands for beneficial use. National estimates from European Environment Agency (EEA) member countries show that, on average, approximately 8% of land is contaminated and needs to be remediated, with an average annual management expenditure of €12 per capita. Of the total expenditure, 40% is used for site assessment and 60% for remediation (Figure 2.3). As not all sites are ultimately classified as contaminated, at least legally, properly conducted risk assessments could prevent the further costs of unnecessary remediation. At present, the management of contaminated land represents on average about 2% of the estimated overall management costs of the countries for which estimates are available. The EEA has forecast that the number of identified contaminated sites will increase by 50% by the year 2025 as a result of increased levels of awareness of, and commitment to, the identification and characterisation of contaminated lands (Figure 2.4). Although most of these sites represent historical contamination, current activities are still causing contamination: for example, approximately 2% of the total contaminated sites in the UK alone are classified as newly created. This is reflected in the ever-growing contaminated land management industry, patent applications and

Modelling of Pollutants in Complex Environmental Systems, Volume II, edited by Grady Hanrahan. © 2010 ILM Publications, a trading division of International Labmate Limited.
Figure 2.3: Annual expenditure on the management of contaminated sites, as a percentage of GDP. [Bar chart of annual expenditure (% of GDP) by country: Czech Republic, Croatia, Italy, Hungary, Netherlands, France, Denmark, Bulgaria, Romania, Slovakia, Switzerland, Belgium, Finland, Austria, Sweden, Estonia, FYR of Macedonia, Norway and Spain.]
Source: EEA (2007). Progress in Management of Contaminated Sites. European Environment Agency, Copenhagen.
Figure 2.4: Status of investigation and clean-up of contaminated sites in Europe (number of sites in 2006, in thousands):
• Number of remediated sites: 80.7
• Estimated number of contaminated sites: 245.9
• Number of identified potentially contaminated sites: 1823.6
• Estimated number of potentially polluting activity sites: 2965.5
Source: EEA (2007). Progress in Management of Contaminated Sites. European Environment Agency, Copenhagen.
research initiatives. The contaminated land management problem is not restricted to historical contamination, and systems therefore need to be in place to account for new contamination as well. Like most environmental decision-making, contaminated land decision-making is a complex, multidisciplinary, non-linear and time-intensive process, often subject to high cost overruns. It depends on a diverse range of inputs from policymakers, regulators, practitioners and researchers across multiple disciplines, and from industry, with multiple objectives weighed against multiple criteria, preferences and resource constraints. Collating, aggregating, processing and using complex information in several formats for decision-making requires not only expertise but also analytical and simulation models and techniques, and collaboration within a group structure. Differences arise not only from geographically dispersed expertise or discipline-specific techniques and processes, but also from the linguistic expressions of decision-makers' preferences and opinions. This complexity calls for understanding and incorporating the various multidisciplinary processes and techniques into an integrated, holistic system for more informed, rational, consistent, justifiable and effective decision-making. In response, several computer-based support systems have been developed for contaminated land management over the past decades. Decision support systems (DSS) and artificial intelligence (AI)-based systems, such as expert systems (ES) (also sometimes referred to as knowledge-based systems, KBS), have been used for this purpose. AI in its broadest definition is the concept of applying human reasoning to computer systems, which makes it applicable to virtually any problem domain that involves decision-making, especially under conditions of complexity and uncertainty. DSS and tools have long been used to assist with effective, affordable and feasible decision-making.
DSS are tools that help decision-makers to arrive at effective decisions between alternatives, without compromising any underlying heuristic human reasoning and/or expertise. DSS make use of powerful decision analysis (DA) techniques and processes, by incorporating analytical methods and tools. Because of the multidisciplinary nature of contaminated land management, much decision-making is done across distributed disciplines. ES have been successfully applied in business and healthcare for decades, and the experience is being applied to other disciplines. Knowledge-based, probabilistic and learning techniques have had a profound impact on the evolution of environmental decision-making, and decision support ES have been used to encapsulate high-level expertise, which cannot easily be transferred, for problem-solving purposes. DSS and ES/KBS have been successfully developed in many organisations at different organisational levels (for example, see Aiken, 1993; Jankowski and Stasik, 1997; Lu et al., 2005; Segrera et al., 2003). Hybrid systems have also been developed in which AI techniques are used to enhance the potential of DSS. Knowledge-based techniques, in particular, have proven to deal effectively with the multiple dimensions of decision-making within a reasonable time frame (Avouris, 1995; Ceccaroni et al., 2004; Xuan et al., 1998). Cortés et al. (2001) have done some work in this field, with a brief review of interesting and successful environmental applications developed using these techniques. Using relevant examples, Avouris (1995)
surveys and discusses the applicability of AI and knowledge-based DSS techniques in environmental DSS. In this work, we review contaminated land decision support from both a historical and a contemporary perspective, discussing recent and future trends in policy and research, and their implications. In this section, we have introduced the contaminated land problem, its scale, and the computer-based support systems developed for it. Section 2.2 gives an overview of contaminated land management processes, Section 2.3 reviews computer-based contaminated land management methods and systems, and Section 2.4 reviews contaminated land decision support. Section 2.5 reviews DSS for contaminated land management, and Section 2.6 discusses trends in contaminated land DSS. We conclude the chapter (Section 2.7) by highlighting the key points in the evolution of contaminated land management, and the current trends in computer-based support systems for contaminated land management.
2.2
CONTAMINATED LAND MANAGEMENT
Natural resources such as land and water are finite, and are being depleted at an accelerating rate as a result of rapid urbanisation and population growth, the changing climate, and the dereliction of former industrial sites. These finite resources need to be protected and managed appropriately in order for them to perform their intended functions and roles. In the UK, for example, increasing numbers of greenfield sites that should be preserved and protected are being threatened or lost to development, although there are extensive abandoned and derelict lands that could be sustainably regenerated for this purpose. These lands are mostly contaminated as a result of past activities, and pose unacceptable risks to human health and the wider environment, which need to be minimised or eliminated. Land is made up of soil and groundwater, both of which are protected and essential natural resources. Not only does soil provide the food we eat through agriculture, it also helps clean our water and the air we breathe. For the soil to perform all these functions, the land must be free from contaminants, and must be managed efficiently. Soil covers most of the Earth's land surface, varying in depth from a few centimetres to several metres, interacting with the air and groundwater by exchanging essential gases and regulating the flow of groundwater, and thus performing its role as a key natural resource. Groundwater is any water located beneath the ground surface, in soil pore spaces or in the fractures of geological formations, which may eventually discharge at the surface as surface water. Groundwater makes up about 20% of the world's freshwater, making it the largest available reservoir of freshwater supply. As groundwater can be a long-term water reservoir, in which the natural water cycle takes anything from days to millennia to complete, it is vulnerable to contamination by leaching or runoff from contaminated soils (Figure 2.5).
European legislation for contaminated land is covered in the Integrated Pollution Prevention and Control (IPPC) Directive, which sets out obligations with which polluting industries must comply with regard to the release of pollutants into the environment. The Directive aims to reduce pollution to air, water and soil to ensure
Figure 2.5: Conceptual model of different exposure pathways to receptors. [Diagram of a nuclear plant releasing gaseous and liquid effluents; pathways shown include deposition to crops and to ground, inhalation and transpiration, air submersion, exposure to deposited material, irrigation, direct radiation, drinking groundwater and surface water, eating crops and meat, drinking milk, and ingestion.]
Source: ATSDR (n.d.). Conceptual model of different exposure pathways to receptors. Available online from http://www.atsdr.cdc.gov/HAC/pha/paducah2/pgd_p2.html. Retrieved 9 July 2009.
maximum environmental protection. Most EU member states already have an environmental management policy, and most have national and regional policies for the assessment and management of contaminated lands, which differ according to each country's needs. For example, UK legislation is based on the principle of "fitness for use", where land is remediated according to its future use. By contrast, in the Netherlands the legislation is based on the principle of "multifunctionality", where land is improved for any possible use. In the USA, contaminated land legislation is covered by the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), a Federal remediation programme for hazardous and waste sites. CERCLA made provisions for the taxation of the chemical and petroleum industries, which goes into a trust fund for the clean-up of hazardous substances that might endanger public health and the environment. The Canadian government's response to the contaminated land problem is the Federal Contaminated Sites Action Plan (FCSAP), which addresses the reduction of risks to human health and the environment through risk assessment and remediation of contaminated lands.
2.3
COMPUTER-BASED CONTAMINATED LAND MANAGEMENT
Approaches to decision-making for contaminated land management have changed rapidly over the past three decades, evolving from simple paper-based hazard assessments, through cost-centred approaches, remedial technological feasibility and efficacy, and risk-based approaches, to sustainability appraisals in the new millennium (Pollard et al., 2004). Management approaches are still broadly categorised as:
• cost centred, using CBA (cost–benefit analysis), CRA (cost–risk analysis) or SWOT (strengths, weaknesses, opportunities and threats) analysis;
• environmentally centred, such as ERA (environmental risk assessment), EIA (environmental impact assessment) and LCA (life cycle assessment); or
• socio-politically centred, such as SIA (social impact assessment).
Increasingly, sustainability appraisals and adaptive management are used to integrate the many different facets of contaminated land management into a holistic, systemsbased approach that addresses the overall problem. All these methods have their advantages and limitations. Many problems require more than one technique, and as a result hybrid methods that integrate two or more techniques are widely used now. Although the methods listed above have been used extensively, they deal only with specific aspects of contaminated land management in isolation, as they are not able to deal with the multidisciplinary and multidimensional nature of contaminated land management, which requires a more integrated and holistic approach for effective decision-making. Also, most of them lack structure for a rational and consistent decision-making framework. It is widely accepted that taking decisions in isolation is no longer sufficient, and that there is a need for a more robust and integrated decision-making framework (Pollard et al., 2004). Recent holistic approaches such as sustainability appraisals, too, can only do so much, and need to be coupled with a multiple criteria decision analytical method to help balance the many conflicting trade-offs between criteria and alternatives. Contaminated land management is a multicriteria problem: most projects are site-specific and subject not only to local geological, hydrological and climatic conditions but also to previous land uses, intended future uses, and neighbouring land uses. Also, all decisions have to be made from a policy and regulatory point of view, with decision-makers increasingly facing complex issues of financial liability, reduced funding, multiple regulatory frameworks, the interpretation of sophisticated analytical data and risk assessments, the relative capabilities of remediation technologies, and the maintenance of public confidence in remediation projects. 
All this impacts on the decision-making process and its outcome (Pollard et al., 2004). Contaminated land models are used to simulate and optimise the management process to eliminate and control the risks posed. Simulation models are used to represent and quantify processes and their effects, and could be used for prediction purposes. Optimisation models are used to find the most favourable solution among alternatives by minimising or maximising model parameters. Several simulation and optimisation models have been built to help in understanding contaminant behaviour and transport in the subsurface environment. These models have been used by regulators to shape policy, and by practitioners for decision-making purposes. The past two decades have seen the development of physical, analytical and numerical models for contaminant transport in soils (Javadi and Al-Najjar, 2007). Groundwater flow and contaminant fate and transport models are used to determine
contaminant behaviour in the subsurface. Groundwater flow models are used to determine the rate and direction of groundwater flow, and fate and transport models are used to determine the movement and chemical reactions of contaminants. These models are governed by equations that can be solved by various methods – numerical, analytical, geostatistical or stochastic, for example. Analytical methods are used mostly for small-scale or simple cases, and give exact solutions. Numerical methods, on the other hand, give approximate solutions, but numerical methods such as the finite element and finite difference methods are preferred by practitioners because they can be applied to highly complex cases. Sometimes a hybrid of two or more of these methods is used. Much work has been done in recent years to apply these methods and techniques to modelling real-life situations. Sheu and Chen (2002) developed a finite element model for predicting groundwater contaminant concentrations governed by advection and dispersion, taking into account first-order degradation of the contaminant, to model the transport phenomenon in groundwater. A similar model was developed by Mohamed et al. (2006) to simulate advection, dispersion and biological reactions of contaminants. Brouyère (2006) modelled the migration of contaminants through variably saturated dual-porosity, dual-permeability chalk. Javadi and Al-Najjar (2007) developed a coupled transient finite element model for simulating the flow of water and air, and contaminant transport, in unsaturated soils, considering the effects of temperature, air flow, chemical reactions, and mobile and immobile water in the soil. Analytical and numerical hybrid methods, such as the analytical element method and the boundary element method, have also been used for modelling contaminant transport (Liao and Aral, 2000; McDermott et al., 2007). Li et al.
(2005) used a hybrid approach to simulate contaminant transport in the three-dimensional subsurface. Liao and Aral (2000) developed a semi-analytical solution for two-dimensional equations governing transport of light non-aqueous phase liquids (LNAPL) in unconfined aquifers. McDermott et al. (2007) used a hybrid numerical and analytical approach to model the efficiency of removal of organic contaminants from water using a microporous polypropylene hollow fibre membrane module. Wang et al. (1999) developed another numerical–analytical hybrid model for solving equations describing the transport of contaminants in two-dimensional homogeneous porous media. Contaminant fate and transport models make use of site-specific geological and hydrogeological information, together with information on site-specific contamination – the contaminants present, their concentrations and locations, and their physical, biological and chemical properties, etc. The transport of contaminants in the soil results in chemical, physical and biological reactions between contaminant and soil constituents (Javadi and Al-Najjar, 2007). The type of hydrogeological information needed for modelling includes soil porosity, pore water and air pressures, hydraulic conductivity, and soil water content. Other hydrogeological information used includes historical and current land use maps, which help with forecasting future contaminant behaviour. In solving contaminant transport problems, groundwater and air flow equations, usually derived from Darcy’s law, are generally solved together with governing equations for the fate and transport of contaminants. Darcy’s law describes the flow of
fluid through an aquifer, given the fluid viscosity and the hydraulic head loss over a given distance. The groundwater flow equation is used to determine flow in both transient and steady-state conditions. Both analytical and numerical methods have been used to solve the groundwater flow equation (Craig and Rabideau, 2006; Holder et al., 2000; Javadi and Al-Najjar, 2007; Sheu and Chen, 2002; Therrien and Sudicky, 2000), especially during the last two decades, in which both have been used to develop models for contaminant transport in soils that consider the effects of different transport mechanisms. Analytical models are usually steady-state and one-dimensional, and do not account for spatio-temporal changes in field conditions. Numerical methods require the spatial and time domains to be discretised; approximate solutions do not imply inaccurate solutions, however, and the accuracy of these models depends on the quality of the data used. There are also hybrid analytical and numerical methods, such as the analytical element method (Liao and Aral, 2000; McDermott et al., 2007). Stochastic methods are used to model the spatial variability of contaminant flow and solute transport (Yeh, 1992), and to quantify uncertainty (Aguirre and Haghighi, 2003). Geostatistical methods are used to model other (more common) patterns of contamination, accounting for continuity and uncertainty in estimates by incorporating the spatial and temporal coordinates of observations in data processing (Goovaerts, 1999).
Mackay and Morakinyo (2006) argued that neither of these approaches effectively accounts for the contamination heterogeneity that lies between these two limits, and therefore developed a stochastic model for simulating spatially uncertain and discontinuous contaminant releases. Aguirre and Haghighi (2003) used a hybrid stochastic finite element methodology to model transient contaminant transport. Although models are successfully used to help understand contaminant behaviour and fate in the subsurface environment under different scenarios, they are not used for decision-making purposes per se. As a result, DSS have been developed for this purpose. There are several contaminated land DSS, the most common of which are coupled model and support systems, known as model-based DSS. Model-based DSS extend simulation or optimisation models to provide decision-making support; information-based DSS, by contrast, make use of available information for decision-making purposes.
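As a minimal illustration of the analytical solutions discussed in this section, the sketch below implements the classical Ogata–Banks solution of the one-dimensional advection–dispersion equation for a continuous source on a semi-infinite column. The first-order degradation term and the multi-dimensional effects handled by the numerical models above are omitted, and all parameter values are hypothetical; the seepage velocity is derived from Darcy's law.

```python
from math import erfc, exp, sqrt

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks analytical solution of the 1-D advection-dispersion
    equation for a continuous source C(0, t) = c0 on a semi-infinite
    column, without degradation.

    x  : distance from the source (m)
    t  : elapsed time (s)
    v  : average linear (seepage) groundwater velocity (m/s)
    D  : longitudinal dispersion coefficient (m^2/s)
    c0 : source concentration
    """
    denom = 2.0 * sqrt(D * t)
    term1 = erfc((x - v * t) / denom)
    # The second term is negligible far down-gradient; guard exp()
    # against overflow for very large v*x/D.
    arg = v * x / D
    term2 = exp(arg) * erfc((x + v * t) / denom) if arg < 700 else 0.0
    return 0.5 * c0 * (term1 + term2)

# Hypothetical site parameters: Darcy's law gives the seepage velocity
# from hydraulic conductivity K, hydraulic gradient i and porosity n.
K, i, n = 1e-4, 0.01, 0.3          # m/s, dimensionless, dimensionless
v = K * i / n                      # average linear velocity (m/s)
D = 1e-6                           # dispersion coefficient (m^2/s)

t = 30 * 24 * 3600                 # 30 days in seconds
print(ogata_banks(0.0, t, v, D))   # at the source: equals c0
print(ogata_banks(5.0, t, v, D))   # 5 m down-gradient: lower
```

At the inlet the solution reproduces the source concentration, and concentrations decline with distance down-gradient. As noted above, real applications usually turn to numerical methods once heterogeneity, reactions or multiple dimensions matter.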
2.4
CONTAMINATED LAND DECISION SUPPORT
The concept of decision support is very broad, and DSS have been defined generically in several ways, depending on who or what its end-users are. At its broadest, a DSS is any methodology that helps a decision-maker to resolve issues of trade-offs through the synthesis of information. It is therefore not necessary for a DSS
to be computer-based. Within the scope of this work, we define DSS as interactive, knowledge- or expert-based computer information systems that are intended to help managers make decisions more easily. DSS make use of powerful DA techniques and processes, by incorporating qualitative, quantitative or progressive analytical methods, and sometimes a hybrid of these. DA is a multidisciplinary field that deals with matching the most suitable tools, methodologies, techniques and theories to the decisions at hand. In contaminated land management DA, a broad range of techniques is used. Decision-making under uncertainty has long been addressed by DA, which uses mathematical techniques to solve scientific and engineering problems. DA is the practical application of decision theory to structure and formulate decision-making under risk and uncertainty. DA methods and techniques have been used to handle complex problems such as contaminated land management. The inherent complexity of contaminated land management dictates that problems be decomposed into smaller ones, where the different groups of stakeholders determine how each sub-problem individually affects the overall problem (Saaty, 1990). DA in itself is an emerging discipline, 'having existed in name only' since its creation in 1966 (Chou et al., 2007). Its application to geotechnical engineering is therefore in its infancy, and although there are works available in this area in the literature, much of the discussion to date has been rhetorical. Multi-criteria decision analysis (MCDA) is a type of DA that is used to help with complex decision-making, mostly under uncertainty, where there are multiple and often conflicting criteria and preferences, with trade-offs to be made between these and their alternatives. MCDA is used to formalise decision-making by clarifying the advantages and disadvantages of options under risk and uncertainty (Saaty, 1990).
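The mechanics of one widely used MCDA technique, TOPSIS, can be made concrete with a small sketch. The implementation below follows the standard vector-normalisation form of the method; the three remediation alternatives, their criterion scores and the weights are all invented for illustration, not taken from any cited study.

```python
from math import sqrt

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    matrix  : rows = alternatives, columns = criterion scores
    weights : one weight per criterion (should sum to 1)
    benefit : True if higher is better for that criterion, else False
    """
    n_crit = len(weights)
    # 1. Vector-normalise each criterion column, then apply weights.
    norms = [sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)]
         for row in matrix]
    # 2. Ideal and anti-ideal points per criterion.
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    # 3. Euclidean distances to both, then relative closeness in [0, 1].
    scores = []
    for row in v:
        d_pos = sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical example: three remediation options scored on
# risk reduction (benefit), cost (cost) and duration in months (cost).
matrix = [[9.0, 350.0, 24.0],   # excavation and disposal
          [7.0, 180.0, 36.0],   # in-situ bioremediation
          [5.0,  90.0, 60.0]]   # monitored natural attenuation
weights = [0.5, 0.3, 0.2]
benefit = [True, False, False]

closeness = topsis(matrix, weights, benefit)
ranking = sorted(range(len(closeness)), key=lambda i: -closeness[i])
```

The closeness score balances the heavily weighted risk-reduction criterion against cost and duration; changing the weights changes the ranking, which is exactly the kind of trade-off analysis MCDA is meant to expose.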
Although MCDA has been used extensively in the broad areas of environmental management and stakeholder involvement, few efforts have been made to apply it to contaminated land management and risk analysis (Linkov et al., 2005). The literature covers detailed analysis of the different theories and methods that have evolved from traditional MCDA: some techniques rank options; some identify a single optimal alternative; some provide an incomplete ranking; and some differentiate between acceptable and unacceptable alternatives (Linkov et al., 2006). Different problems require different solutions, and therefore different kinds of DA method have been successfully applied to different kinds of contaminated land problem over the years. One of the most common advantages of MCDA is its ability to draw out similarities and potential areas of conflict between stakeholders in group decision-making, which results in a more complete understanding of the values held by others (Linkov et al., 2005). The current methods of MCDA used specifically for environmental science DA include:
• utilitarian methods, such as MAUT/VT (Multi-Attribute Utility/Value Theory), SMART (Simple Multi-Attribute Rating Technique) and TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution);
• hierarchical methods, such as AHP (Analytical Hierarchy Process);
• probabilistic and statistical methods, such as Bayesian analysis; and
• outranking methods, such as ELECTRE (Elimination and Choice Expressing Reality) and PROMETHEE (Preference Ranking Organisation METHod for Enrichment Evaluations).

2.5
DECISION SUPPORT SYSTEMS FOR CONTAMINATED LAND MANAGEMENT
Contaminated land practitioners and stakeholders make important decisions that ultimately not only affect the environment, but also have social, economic, legal and ethical repercussions, with trade-offs between these and other diverse criteria and objectives. All of these types of decision are to be found in contaminated land management decision problems, and decision theory is applied to such problems to help arrive at optimal decisions. These decisions are often very complex, and are made under conditions of considerable risk and uncertainty, often because of incomplete and noisy datasets, and the unique nature of each contaminated land problem. Also, because of the diversity of decision-makers involved with each project, there are often competing and conflicting decisions and alternatives, each representing an optimal rational decision alternative to a particular group of stakeholders. As a result, DSS have been used to bridge these multiple and conflicting decisions. The history of DSS is not always clear-cut, and many people have different perspectives on what happened when, and what was important. Power has presented documentation of this history over several versions. In it he posits that, although the DSS concept evolved from the theoretical studies of organisational decision-making at the Carnegie Institute of Technology and the work on interactive computer systems at the Massachusetts Institute of Technology (MIT), there are differences of opinion spanning about four decades; chronicling the history of DSS is therefore neither neat nor linear (Power, 2007). Modern decision support had its beginnings in the US military during World War II, followed by adoption by the business sector as management information systems (MIS). Interest in the theory and practice of computer-based DSS and their applications resulted in their rapid adoption in other fields, especially operational research (OR) and the management sciences (MS).
The first-generation DSS then evolved into computer-based decision support systems and tools developed for different applications, such as clinical diagnostics, supply chain management and financial services. Research activities in universities soon expanded the field of DSS and their applications beyond the original management and business domains (Power, 2007), with definitions evolving as application potential increased. Rizzoli (2002) presents a chronology of definitions of DSS, from the 1970s up to the early 1990s. Although OR and MS were originally two different and disparate terms, they are now used interchangeably in the literature (Lawrence and Pasternack, 1998). October 2002 saw the 50th anniversary of OR (and of MS a year later, in 2003). The October issue of OR/MS Today exhaustively covered the history and legacy of OR/MS (Hess, 2002; Horner, 2002; Vazsonyi, 2002a; Woolsey, 2002), timelines (Gass, 2002) and milestones (Vazsonyi, 2002b), and problems, solutions and achievements (Horner, 2002) from past,
present and beyond. The next generation of DSS evolved mostly from the fields of distributed computing and AI, out of the need to provide an integrated support network for different types of professional to manage organisations and to make more rational decisions (Power, 2003) that would save businesses' and industries' resources by preventing wrong decisions from being made. One of the first classifications of DSS was by Alter (1980). He concluded, from a study of 56 systems, that DSS fall into seven categories, each defined in terms of the generic operations that systems in that category can perform (Power, 2007). More details of these classifications, and a timeline of other classifications, can be found in Power's work (Power, 2007). Rizzoli and Young (1997) proposed a classification of environmental DSS into two clearly separate categories:
• problem-specific environmental DSS, which are tailored to narrow problems, but are applicable to a wide range of different locations; and
• situation- and problem-specific environmental DSS, which are tailored to specific problems in specific locations.
Hättenschwiler (1999) has categorised DSS as passive, active or cooperative. According to Hättenschwiler, a passive DSS merely assists the decision-making process, without coming up with suggestions or solutions, whereas an active DSS can come up with decisions and suggestions. A cooperative DSS is like a feedback system: it allows the decisions suggested by the system to be modified or refined and then fed back to the system for validation. Power (2002) splits DSS into five categories: communication-driven, data-driven, knowledge-driven, model-driven and document-driven DSS. Communication-driven DSS are usually internal systems for use within an organisation. Data-driven DSS are used for querying databases to seek specific answers for specific purposes. Knowledge-driven DSS are knowledge bases that cover a broad range of systems: for example, all the other categories put together could make up a knowledge-driven DSS that served many different purposes. Model-driven DSS are complex systems that help in analysing decisions or choosing between different options. Document-driven DSS manage, retrieve and manipulate unstructured information in a variety of formats. Although practitioners have long used DSS for business decision-making (Sauter, 1997; Thomas, 2002; Turban, 1995) and for other decision-making processes in other fields, the use of DSS as a solution to geo-environmental problems is relatively new. It is an emerging field that is rapidly evolving, giving rise to different types of DSS classification, each tailored as a solution to a different type of geo-environmental problem. As DSS have evolved, so their architecture, mode of implementation and functionality, and the incorporation of new computational techniques, have advanced (Segrera et al., 2003). Several researchers have proposed frameworks for developing contaminated land DSS, and several DSS have been developed for different aspects of contaminated land over the years.
For example, Jankowski and Stasik (1997) developed a prototype collaborative, spatial DSS, SUDSS, for site selection and land use allocation. It was aimed at accommodating public participation in all decision-making processes across
space and time. Birkin et al. (2005) developed another spatial DSS, HYDRA, for providing access to data for spatial analysis in order to make better decisions for the development and redevelopment of brownfield sites in urban areas. Bonniface et al. (2005) outlined an integrated approach that demonstrated the significant advantages that can be gained by using data management and visualisation tools. They developed a live system that provides an accessible and dynamic management tool as a DSS for the management of decommissioned contaminated UK nuclear sites. Chen et al. (2002) developed a spatial DSS, GISSIM, for the effective management of petroleum-contaminated land. The system contained two main components: an advanced three-dimensional, numerical, multicomponent simulation model for contaminant fate and transport and biodiversity, and a geographical information system (GIS) component for managing and displaying modelling results. SmartPlaces, a prototype GIS-based DSS, was developed by Thomas (2002) as an integrated expert system for city governments to apply in making urban siting decisions, preventing redevelopment of potentially contaminated land. The system provides access to state and regional geospatial databases using several visualisation tools to provide a better understanding of issues, options and alternatives in redeveloping brownfield sites.
2.6 CONTAMINATED LAND DSS TRENDS
The Internet is where DSS action is today (Power, 2002), as is evident from many recently developed DSS (Ceccaroni et al., 2004; Lago et al., 2007; Rydahl et al., 2003; Sokolov, 2002; Wang and Cheng, 2006). With the expansion of industries and companies across borders, brought about by globalisation, and the growing societal awareness of the effects of climate change, organisations have had to modify the way they behave, mostly by adopting technological innovations (Lago et al., 2007). The evolution and growth of the Internet have allowed innovative Web-based DSS to be developed. Web-based DSS are computerised systems that deliver decision support information or tools using a web browser as a front end (Power, 1998). As the Internet increasingly becomes the primary source of information, people are increasingly relying on the Web for decision-making (Jarupathirun and Zahedi, 2007). Also, environmental problems, and the legislation governing their management, tend to be distributed geographically, so there is an increasing demand for DSS with unlimited access across distributed time and space domains. Decision-making and decision support activities tend to be ubiquitous in nature, which makes the Web an ideal platform for systems with decision support capabilities. Although there are Web-based DSS for contaminated land, most of them are commercial (Hämäläinen et al., 2001); the subject is still relatively underexplored in academic research. Whereas traditional DSS require software to be installed on individual workstations or computers, Web-based DSS are available to anyone with Internet access, and are therefore being utilised by many different industries (Molenaar and Songer, 2001). Apart from bridging technological barriers and saving costs (e.g., licensing, support, end-user training, centralised data and information repositories), other benefits of using Web-based DSS include extensibility, accessibility and information distribution, with cross-platform flexibility and independence. Web-based DSS also provide a portal for group decision support. With the rapid growth and evolution of users on the Internet and the Web, this platform provides a greater distribution of DSS (Molenaar and Songer, 2001).

With the current interest in the effects of human activities on the earth, research interest in sustainable alternatives is increasing. The implications of these effects, their causes and their impacts call for a sustainable framework that not only immediately minimises these impacts, but also progressively provides long-term sustainable alternatives. Environmental sustainability, as defined by the Brundtland Commission (UN, 1987), is geared towards dealing with current problems in such a way that future generations are not compromised. Sustainable development of contaminated land ensures the long-term efficacy of the overall management process, using the methods and processes that feasibly pose the least environmental impact. DSS offer a platform for incorporating sustainability appraisal into contaminated land management decisions. The marked shift towards the integration of sustainability criteria in contaminated land projects can be seen in the varying policies of different countries and regions. For example, the UK government set a target that, by 2008, 60% of all new developments should be on previously developed land (PDL) or involve the conversion of existing buildings, in an attempt to reduce the need for greenbelt development; this target was met eight years early. PDLs are mostly abandoned, derelict or contaminated lands that need to be cleaned up before development to ensure that they do not pose unacceptable risks to human health.
This initiative is aimed at reducing the pressure on greenbelt development, and at regenerating blighted and derelict neighbourhoods, thereby providing opportunities for job creation, economic growth and recovery, and environmental sustainability.

Another trend in contaminated land management decision-making is the consideration of the effects of projected climate change on contaminant fate and transport. Our understanding of the behaviour of conventional contaminants is well developed after years of study (Hübner, 2008). However, with the increasing pressures on the availability of land from drivers such as the changing climate, rapid population growth, migration, urbanisation and ever-changing land use, there are significant implications for contaminated land management, and for the risks posed by contaminants to human health and the environment. Effective and sustainable management of contaminated land therefore requires an understanding of the magnitude of the current and future risks arising from pollutant linkages between sources and potential receptors. There are many interactions between the climate system and the land surface. The land surface has an important influence on the flow of air over it, on the absorption of solar energy, and on the soil and water cycles. Also, the interactions between climate variables and soil properties, such as soil pH, soil organic carbon content, soil moisture and soil redox potential, are key to assessing contaminant fate and transport mechanisms. There is much empirical evidence in the literature that the terrestrial component of the carbon cycle is responding to climatic variations and
trends. The change that is already occurring has significant implications for contaminated land management. Temperature and precipitation changes due to global warming are affecting the input of chemicals into the environment, and their fate and transport in aquatic systems: there is therefore a need to identify the potential impact of climate change on contaminant fate and behaviour. The latest IPCC report (the AR4) projects a range of possible global warming temperature increases of between 1.6°C and 6.4°C by 2100, based on different emissions scenarios and economic policies (IPCC, 2007). This is a very wide range, and there is no indication of relative probability, as the planet is now outside its normal operating regime, making prediction difficult. Climate sensitivity to a doubling of CO2 remains uncertain (Murphy et al., 2004), but future projections (e.g., the IPCC AR4, the Stern Review, UKCIP reports) using various climate models are not optimistic. There is a risk that changes in environmental conditions and processes will affect the standards of remediation required to ensure that receptors are not significantly impacted in the future (Al-Tabbaa et al., 2007a). Very little experimental evidence has been published on the direct impact of climate change on contaminated land and remediation systems (Al-Tabbaa et al., 2007b). More frequent floods or droughts will have significant impacts on contaminant mobility, and will strongly influence contaminant fate and transport mechanisms, raising doubts about the technical efficacy of remedial methods and techniques.
2.7 CONCLUSION
Approaches to decision-making for contaminated land management have changed rapidly over the past few decades, evolving from simple hazard assessments, through the cost-centred approaches of the 1970s, the technology feasibility studies of the 1980s, and the risk-based approaches of the 1990s, to sustainability appraisals in the new millennium. Policy has also had to change with increasing scientific and technical understanding of the processes that govern contaminant transport and behaviour, and the effect of these on human health, protected resources and the wider environment. Over the past few decades, computer-based solutions for contaminated land management have been steadily evolving alongside the evolution of hardware and software. The computer revolution has made possible the development of complex and powerful systems that are able not only to simulate contaminant fate and behavioural processes, but also to help with decision-making and decision support. Systems developed for contaminated land management take the form of decision support systems for decision-making, or AI-based systems, such as expert systems, for problem-solving. Computer-based systems provide a portal for rational, justifiable and scientifically based decision-making and decision support. DSS make use of powerful DA techniques and processes by incorporating analytical methods and tools. Owing to the multidisciplinary nature of contaminated land management, much decision-making is done across distributed disciplines. Also, specific knowledge is easily encapsulated in computer-based systems, where it can be made readily accessible to other decision-makers and stakeholders.
The current trend in contaminated land management is towards developing Web applications for decision-making and decision support, with a number of the DSS developed now being Web-based. With environmental problems, and the legislation governing their management, tending to be distributed geographically and across multiple disciplines, the Web makes possible the development of systems that are accessible to all the stakeholders involved. Decision-making and decision support activities tend to be ubiquitous in nature, which makes the Web an ideal platform for systems with decision support capabilities. Also, recent trends in contaminated land policy and practice have seen a marked shift towards the integration of sustainability criteria. Sustainable development and management make it possible to consider the implications of the management process not only in terms of cost or technical feasibility, but also in terms of the long-term effectiveness of management decisions and their implications for the wider environment, such as emissions, energy use and waste by-products. Practitioners and policymakers also have to consider the potential effects of projected climate change on contaminant fate and transport, and the resulting effects on receptors and the wider environment, in contaminated land management projects.
REFERENCES

Aguirre, C.G. and Haghighi, K. (2003). Stochastic modelling of transient contaminant transport. Journal of Hydrology. 27: 224–239.
Aiken, M.W. (1993). Advantages of group decision support systems. Interpersonal Computing and Technology: An Electronic Journal for the 21st Century. Center for Teaching and Technology, Academic Computer Center, Georgetown University, Washington, DC. Available online from http://www.helsinki.fi/science/optek/1993/n3/aiken.txt.
Al-Tabbaa, A., Smith, S.E., Duru, U.E., Iyengar, S., De Munck, C., Moffatt, A.J., Hutchings, T., Dixon, T., Doak, J., Garvin, S., Ridal, J., Raco, M. and Henderson, S. (2007a). Climate Change, Pollutant Linkage and Brownfield Regeneration. SUBR:IM Bulletin No. 3. CL:AIRE, London.
Al-Tabbaa, A., Smith, S.E., Duru, U.E., Iyengar, S., De Munck, C., Hutchings, T., Moffat, A.J., Garvin, S., Ridal, J., Dixon, T., Doak, J., Raco, M. and Henderson, S. (2007b). Impact of and response to climate change in UK brownfield remediation. Proceedings of International Conference on Climate Change: Act Now on Climate Change – Now or Never, Hong Kong.
Alter, S.L. (1980). Decision Support Systems: Current Practice and Continuing Challenge. Addison-Wesley, Reading, MA.
ATSDR (n.d.). Conceptual model of different exposure pathways to receptors. Available online from http://www.atsdr.cdc.gov/HAC/pha/paducah2/pgd_p2.html. Retrieved 9 July 2009.
Avouris, N.M. (1995). Cooperating knowledge-based systems for environmental decision support. Knowledge-Based Systems. 8: 39–54.
Birkin, M., Dew, P.M., Macfarland, O. and Hodrien, J. (2005). Hydra: a prototype grid-enabled spatial decision support system. Proceedings of the First International Conference on e-Social Science, National Centre for e-Social Science, Manchester. Available online from http://www.ncess.ac.uk/research/nodes/MoSeS/publications/20050623_birkin_Hydra.pdf.
Bonniface, J.P., Coppins, G.J., Hitchins, G.D. and Hitchins, G.R. (2005). Contaminated land management: defining and presenting a robust prioritised programme using integrated data management and geographical information systems. Proceedings of the Waste Management Conference 2005, Tucson, AZ.
Brouyere, S. (2006). Modeling the migration of contaminants through variably saturated dual-porosity, dual-permeability chalk. Journal of Contaminant Hydrology. 82: 195–219.
Ceccaroni, L., Cortés, U. and Sànchez-Marrè, M. (2004). OntoWEDSS: augmenting environmental decision-support systems with ontologies. Environmental Modelling & Software. 19: 785–797.
Chen, Z., Chakma, G.H.H. and Li, J. (2002). Application of a GIS-based modeling system for effective management of petroleum-contaminated land. Environmental Engineering Science. 19: 291–303.
Chou, J.J., Chen, C.P. and Yeh, J.T. (2007). Crop identification with wavelet packet analysis and weighted Bayesian distance. Computers and Electronics in Agriculture. 57: 88–98.
Cortés, U., Sànchez-Marrè, M., Sangüesa, R., Comas, J., Roda, I.R., Poch, M. and Riaño, D. (2001). Knowledge management in environmental decision support systems. AI Communications. 14: 3–12.
Craig, J.R. and Rabideau, A.J. (2006). Finite difference modeling of contaminant transport using analytic element flow solutions. Advances in Water Resources. 29: 1075–1087.
EEA (2007). Progress in Management of Contaminated Sites. European Environment Agency, Copenhagen.
Gass, S.I. (2002). Side story: great moments in HistORy. OR/MS Today. Special issue, October. Available online from http://www.lionhrtpub.com/orms/orms-10-02/frhistorysb1.html. Retrieved 21 May 2007.
Goovaerts, P. (1999). Geostatistics in soil science: state-of-the-art and perspectives. Geoderma. 89: 1–45.
Hämäläinen, R., Kettunen, E., Marttunen, M. and Ehtamo, H. (2001). Evaluating a framework for multi-stakeholder decision support in water resources management. Group Decision and Negotiation. 10: 331–353.
Hättenschwiler, P. (1999). Neues anwenderfreundliches Konzept der Entscheidungsunterstützung. In Gutes Entscheiden in Wirtschaft, Politik und Gesellschaft. vdf Hochschulverlag AG, Zurich, pp. 189–208.
Hess, S. (2002). Reminiscences and reflections: my first taste of OR. "I had never heard of OR." OR/MS Today. Special issue, October.
Holder, A.W., Bedient, P.B. and Dawson, C.N. (2000). FLOTRAN, a three-dimensional ground water model, with comparisons to analytical solutions and other models. Advances in Water Resources. 23: 517–530.
Horner, P. (2002). History in the making: INFORMS celebrates 50 years of problems, solutions, anecdotes and achievement. OR/MS Today. Special issue, October.
Hübner, Y. (2008). Royal Society of Chemistry Response to the Royal Commission on Environmental Pollution Study on "Adapting the UK to Climate Change". Royal Society of Chemistry, London.
IPCC (2007). Fourth Assessment Report of the Intergovernmental Panel on Climate Change (AR4). IPCC, Geneva. Available online from http://www.ipcc.ch.
Jankowski, P. and Stasik, M. (1997). Spatial understanding and decision support system: a prototype for public GIS. Transactions in GIS. 2: 73–84.
Jarupathirun, S. and Zahedi, F.M. (2007). Exploring the influence of perceptual factors in the success of web-based spatial DSS. Decision Support Systems. 43: 933–951.
Javadi, A.A. and Al-Najjar, M.M. (2007). Finite element modeling of contaminant transport in soils including the effect of chemical reactions. Journal of Hazardous Materials. 143: 690–701.
Lago, P.P., Beruvides, M.G., Jian, J.-Y., Canto, A.M., Sandoval, A. and Taraban, R. (2007). Structuring group decision making in a web-based environment by using the nominal group technique. Computers & Industrial Engineering. 52: 277–295.
Lawrence, J.A. Jr and Pasternack, B.A. (1998). Applied Management Science: A Computer-Integrated Approach for Decision Making. Wiley, New York.
Li, M.-H., Cheng, H.-P. and Yeh, G.-T. (2005). An adaptive multigrid approach for the simulation of contaminant transport in the 3D subsurface. Computers & Geosciences. 31: 1028–1041.
Liao, B. and Aral, M.M. (2000). Semi-analytical solution of two-dimensional sharp interface LNAPL transport models. Journal of Contaminant Hydrology. 44: 203–221.
Linkov, I., Sahay, S., Kiker, G., Bridges, T. and Seager, T.P. (2005). Multi-criteria decision analysis: a framework for managing contaminated sediments. In Levner, E., Linkov, I. and Proth, J.M. (Eds). Strategic Management of Marine Ecosystems. Springer, pp. 271–297.
Linkov, I., Satterstrom, F.K., Kiker, G., Batchelor, C., Bridges, T. and Ferguson, E. (2006). From comparative risk assessment to multi-criteria decision analysis and adaptive management: recent developments and applications. Environment International. 32: 1072–1093.
Lu, J., Zhang, G. and Wu, F. (2005). Web-based multi-criteria group decision support system with linguistic term processing function. IEEE Intelligent Informatics Bulletin. 5: 35–43.
Mackay, R. and Morakinyo, J.A. (2006). A stochastic model of surface contaminant releases to support assessment of site contamination at a former industrial site. Stochastic Environmental Research and Risk Assessment. 20: 213–222.
McDermott, C.I., Tarafder, S.A., Kolditz, O. and Schuth, C. (2007). Vacuum assisted removal of volatile to semi-volatile organic contaminants from water using hollow fiber membrane contactors: II: A hybrid numerical–analytical modeling approach. Journal of Membrane Science. 292: 17–28.
Mohamed, M.M.A., Hatfield, K. and Hassan, A.E. (2006). Monte Carlo evaluation of microbial-mediated contaminant reactions in heterogeneous aquifers. Advances in Water Resources. 29: 1123–1139.
Molenaar, K.R. and Songer, A.D. (2001). Web-based decision support systems: case study in project delivery. Journal of Computing in Civil Engineering. 15: 259–267.
Murphy, J.M., Sexton, D.M.H., Barnett, D.N., Jones, G.S., Webb, M.J., Collins, M. and Stainforth, D.A. (2004). Quantification of modelling uncertainties in a large ensemble of climate change simulations. Nature. 430: 768–772.
Pollard, S.J.T., Brookes, A., Earl, N., Lowe, J., Kearney, T. and Nathanail, C.P. (2004). Integrating decision tools for the sustainable management of land contamination. Science of the Total Environment. 325: 15–28.
Power, D.J. (1998). Web-based decision support systems. DS*Star. Available online from http://dssresources.com/papers/webdss/.
Power, D.J. (2002). Web-based and model-driven decision support systems: concepts and issues. Proceedings of the Americas Conference on Information Systems, Long Beach, CA.
Power, D.J. (2003). A Brief History of Decision Support Systems, v. 2.8. Available online from http://dssresources.com/history/dsshistoryv28.html. Retrieved 26 October 2006.
Power, D.J. (2007). A Brief History of Decision Support Systems, v. 4.0. Available online from http://dssresources.com/history/dsshistory.html. Retrieved 26 October 2006.
Rizzoli, A.E. (2002). Decision Support Systems. IDSIA, Manno, Switzerland.
Rizzoli, A.E. and Young, W.Y. (1997). Delivering environmental decision support systems: software tools and techniques. Environmental Modelling & Software. 12: 237–249.
Rydahl, P., Hagelskaer, L., Pedersen, L. and Bøjer, O.Q. (2003). User interfaces and system architecture of a web-based decision support system for integrated pest management in cereals. EPPO/OEPP Bulletin. 33: 473–481.
Saaty, T.L. (1990). How to make a decision: the analytic hierarchy process. European Journal of Operational Research. 48: 9–26.
Sauter, V.I. (1997). Decision Support Systems. John Wiley & Sons, New York.
Segrera, S., Ponce-Hernández, R. and Arcia, J. (2003). Evolution of decision support system architectures: applications for land planning and management in Cuba. Journal of Computer Science and Technology. 3: 40–47.
Sheu, T.W.H. and Chen, Y.H. (2002). Finite element analysis of contaminant transport in groundwater. Applied Mathematics and Computation. 127: 23–43.
Sokolov, A. (2002). Information Environment and Architecture of Decision Support System for Nutrient Reduction in the Baltic Sea. Stockholm University.
Therrien, R. and Sudicky, E.A. (2000). Well bore boundary conditions for variably saturated flow modeling. Advances in Water Resources. 24: 195–201.
Thomas, M.R. (2002). A GIS-based decision support system for brownfield redevelopment. Landscape and Urban Planning. 58: 7–23.
Turban, E. (1995). Decision Support and Expert Systems: Management Support Systems. Prentice Hall, Englewood Cliffs, NJ.
UN (1987). Report of the World Commission on Environment and Development. United Nations, New York.
Vazsonyi, A. (2002a). Reminiscences and reflections: my first taste of OR. "I had a helluva big assignment." OR/MS Today. Special issue, October.
Vazsonyi, A. (2002b). Side story: milestone manifesto. OR/MS Today. Special issue, October.
Wang, G.T., Li, B.Q. and Chen, S. (1999). A semianalytical method for solving equations describing the transport of contaminants in two-dimensional homogeneous porous media. Mathematical and Computer Modelling. 30: 63–74.
Wang, L. and Cheng, Q. (2006). Web-based collaborative decision support services: concept, challenges and application. Proceedings of the ISPRS Technical Commission II Symposium, Vienna, Vol. 36, Part 2.
Woolsey, G. (2002). Reminiscences and reflections: my first taste of OR. "Frank Parker Fowler Jr. made me do it." OR/MS Today. Special issue, October.
Xuan, Z., Richard, G.H. and Richard, J.A. (1998). A knowledge-based systems approach to design of spatial decision support systems for environmental management. Environmental Management. 22: 35–48.
Yeh, T.C.J. (1992). Stochastic modeling of groundwater flow and solute transport in aquifers. Hydrological Processes. 6: 369–395.
CHAPTER 3

An Overview of Exposure Assessment Models Used by the US Environmental Protection Agency

Pamela R.D. Williams, Bryan J. Hubbell, Eric Weber, Cathy Fehrenbacher, David Hrdy and Valerie Zartarian
3.1 INTRODUCTION
Models are often used in addition to or in lieu of monitoring data to estimate environmental concentrations and exposures for use in risk assessments or epidemiological studies, and to support regulatory standards and voluntary programmes (Jayjock et al., 2007; US EPA, 1989, 1992). The purpose of this chapter is to provide an overview of 35 models currently supported and used by the US EPA to assess exposures to human or ecological receptors (see Table 3.1 for a list of abbreviations). These models differ in their purpose, and in the level and scope of their analysis. For example, some of the exposure assessment models refer to a single pollutant or exposure pathway, while others assess multiple pollutants and pathways. Additionally, most of these models pertain to either human or ecological receptors, although a few are applicable to both receptor groups. These models may target the general population, subgroups within the population, or individuals (e.g., workers, consumers). In regard to temporal and spatial scale, some of these models predict acute, subchronic and/or chronic exposures at the local, urban, regional and/or national level. These models are frequently used by, and have sometimes been developed in collaboration with, researchers and practitioners in academia, consulting, private industry, state and local governments, and internationally.

The models included in this chapter can be used during different stages of the exposure assessment process. Specifically, these models represent the first half of the source-to-outcome continuum (see Figure 3.1), and include selected fate/transport models, exposure models, and integrated fate/transport and exposure models. According to the literature, exposure is defined as contact between an agent and a target that takes place at an exposure surface over an exposure period, whereas dose is defined as the amount of an agent that enters a target after crossing a contact boundary (WHO, 2004; Zartarian et al., 1997, 2005a). In general, the fate/transport models assess the movement and transformation of pollutants in the environment, and yield predicted ambient pollutant concentrations (in units of mg m⁻³, mg L⁻¹ or mg kg⁻¹) in different environmental media (e.g., air, water, soil, food). The outputs of these models therefore represent concentrations to which receptors have the potential for exposure, although these estimates are often used as a proxy or surrogate for actual exposure, or can serve as an input to exposure models. The exposure models incorporate information on exposure factors and time–activity patterns, and yield predicted exposures or doses (in units of mg m⁻³ or mg kg⁻¹ day⁻¹) based on actual (or assumed) contact between a receptor and the general environment or specific microenvironments. The outputs of these models are therefore the most representative of actual human or ecological exposures. For the purposes of this chapter, the integrated fate/transport and exposure models include those models that yield both predicted ambient pollutant concentrations (based on algorithms for assessing a pollutant's fate and transport in the environment) and predicted exposures or doses (based on exposure factor information). However, these models do not incorporate information on time–activity patterns: model outputs are therefore based on assumed contact between a receptor and an agent.

[Modelling of Pollutants in Complex Environmental Systems, Volume II, edited by Grady Hanrahan. Published 2010 by ILM Publications, a trading division of International Labmate Limited. This chapter is a US Government work and is in the public domain in the United States of America.]

Table 3.1: List of abbreviations.(a)

ARS: Agricultural Research Service
ADD: average daily dose
BART: best available retrofit technology
CO: carbon monoxide
CEAM: Center for Exposure Assessment Modeling
CAMR: Clean Air Mercury Rule
CERCLA: Comprehensive Environmental Response, Compensation, and Liability Act
CHAD: Consolidated Human Activity Database
CSFII: Continuing Survey of Food Intakes by Individuals
CASAC: Clean Air Scientific Advisory Committee
CREM: Council for Regulatory Environmental Modeling
EEC: estimated environmental concentration
FIFRA: Federal Insecticide, Fungicide, and Rodenticide Act
FQPA: Food Quality Protection Act
GIS: geographical information system
HAP: hazardous air pollutant
HWIR: Hazardous Waste Identification Rule
LADD: lifetime average daily dose
MOE: margin of exposure
MSAT: Mobile Source Air Toxics
NAS: National Academy of Sciences
NAAQS: National Ambient Air Quality Standards
NERL: National Exposure Research Laboratory
NHANES: National Health and Nutrition Examination Survey
NATA: National-Scale Air Toxics Assessment
NSR: New Source Review
OAQPS: Office of Air Quality Planning and Standards
OAR: Office of Air and Radiation
OPP: Office of Pesticide Program
OPPT: Office of Pollution Prevention and Toxics
ORD: Office of Research and Development
OSWER: Office of Solid Waste and Emergency Response
OW: Office of Water
OP: organophosphate pesticide
PM: particulate matter
PCA: percent crop area
PBPK: physiologically based pharmacokinetic
PMN: pre-manufacturing notification
PSD: prevention of significant deterioration
RfD: reference dose
Reg Reviews: Registration Reviews
REDs: re-registration eligibility decisions
RTP: Research Triangle Park
RCRA: Resource Conservation and Recovery Act
SAB: Science Advisory Board
SAP: Science Advisory Panel
SOPs: standard operating procedures
SCRAM: Support Center for Regulatory Atmospheric Modeling
TMDL: Total Maximum Daily Load
TSCA: Toxic Substances Control Act
USDA: United States Department of Agriculture
US EPA: United States Environmental Protection Agency
VCCEP: Voluntary Children's Chemical Evaluation Program
WMU: Waste Management Unit

(a) Does not include acronyms for exposure assessment models (see Table 3.2).
Note that although many of the exposure assessment models provide estimates of potential or absorbed dose, the primary focus of these models is on using algorithms and data to assess human or ecological exposures (i.e., the dose estimates from these models are based on simple calculations and assumptions about absorption rates, rather than kinetic or PBPK modelling). A review of stand-alone or more sophisticated PBPK dose models is outside the scope of this assessment.
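To make the "simple calculations" concrete: although the chapter does not give the equations its models use, a generic intake equation of the form ADD = (C × IR × EF × ED)/(BW × AT) is a common way to turn exposure-factor inputs into an average daily dose (ADD, Table 3.1) in mg kg⁻¹ day⁻¹. The sketch below uses this generic form with invented parameter values; it is illustrative only and is not taken from any specific US EPA model:

```python
def average_daily_dose(c, ir, ef, ed, bw, at):
    """ADD = (C * IR * EF * ED) / (BW * AT), in mg per kg body weight per day.

    c  : concentration in the contact medium (mg/kg)
    ir : intake rate of that medium (kg/day)
    ef : exposure frequency (days/year)
    ed : exposure duration (years)
    bw : body weight (kg)
    at : averaging time (days); for an ADD, AT spans the exposure duration
    """
    return (c * ir * ef * ed) / (bw * at)

# Hypothetical child soil-ingestion scenario (all values invented)
add = average_daily_dose(
    c=100.0,     # mg contaminant per kg soil
    ir=1e-4,     # 100 mg soil ingested per day
    ef=350,      # days per year on site
    ed=6,        # years of exposure
    bw=15.0,     # kg body weight
    at=6 * 365,  # days: averaged over the exposure duration
)
print(f"ADD = {add:.2e} mg/kg-day")  # → ADD = 6.39e-04 mg/kg-day
```

A lifetime average daily dose (LADD, Table 3.1) uses the same numerator but averages over a full lifetime, so AT would be set to roughly 70 × 365 days instead.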
Figure 3.1: Source-to-outcome continuum. [Figure: in the stressor domain, source/stressor formation leads, via fate and transport, to an environmental concentration; in the receptor domain, exposure leads, via target dose and a biological event, to an effect/outcome.]
An important distinction among the various exposure assessment models used by the US EPA is that they are typically used for either "screening-level" or "higher-tiered" applications. The screening-level models included in this chapter do not incorporate time–activity patterns, and generally overpredict receptor exposures, because they are based on conservative (e.g., high-end) default scenarios and assumptions. These models are frequently used to obtain a first approximation, or to screen out exposures that are not likely to be of concern (US EPA, 1992). However, the default values in these models can sometimes be modified or changed if other values are deemed more suitable for the specific exposure scenario being evaluated. A small number of models used by the US EPA (which are highlighted below) are also sometimes referred to as "screening-level" because of their limited spatial and temporal scope, rather than because they are based on conservative exposure assumptions. On the other hand, the higher-tiered models used by the US EPA typically include time–activity patterns, are based on more realistic scenarios and assumptions, and have broader spatial and temporal resolution. These models require much more data, of higher quality, than the screening-level models, and are intended to produce more refined estimates of exposure (US EPA, 1992).

Although all of the exposure assessment models included here have been internally peer-reviewed to ensure consistency with Agency-wide policies and procedures, many of these models have also undergone independent external peer review by outside experts. In this context, external peer review can take several different forms, depending on the intended use and complexity of the model. For example, models developed for basic research, planning or screening purposes will often undergo external letter peer reviews. For these types of review, four or five outside experts provide an independent review of the model and prepare a written response to a set of specific charge questions. Other models, such as those designed for regulatory or advanced research purposes, may undergo more extensive external peer reviews by the US EPA's scientific advisory boards, panels or committees (e.g., SAB, SAP, CASAC). During these reviews, a panel of experts holds one or more meetings to discuss and evaluate the model structure and predictions, and the panel prepares a group report that attempts to reach consensus in response to a set of specific charge questions. Many of the models included here have also been published in the peer-reviewed literature in regard to their structure, potential applications and evaluation. These internal and external peer review efforts have resulted in continuous updates and improvements to the exposure assessment models used by the US EPA.

The models included in this chapter have also undergone varying degrees of model evaluation. The NAS (2007) defines model evaluation as the process of deciding whether and when a model is suitable for its intended purpose, and makes it clear that this process is not a strict validation or verification procedure, but rather one that builds confidence in model applications, and increases the understanding of model strengths and limitations. Similarly, the US EPA (2008a) defines model evaluation as the process used to generate information to determine whether a model and its analytical results are of a quality sufficient to serve as the basis for a decision. Model evaluation is a multifaceted activity involving peer review, corroboration of results with data and other information, quality assurance and quality control checks, uncertainty and sensitivity analyses, and other activities (NAS, 2007).
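The screening-tier reasoning described above (a conservative estimate that still falls below a level of concern can be screened out; otherwise a higher-tiered model is warranted) can be sketched as follows. The hazard-quotient comparison against a reference dose (RfD, Table 3.1) is a common convention rather than a procedure specified in this chapter, and every numeric value below is invented:

```python
RFD = 3e-3  # hypothetical reference dose (mg/kg-day); illustrative only

def screening_decision(screening_dose, rfd=RFD):
    """Tiered logic: if a conservative screening estimate is already below
    the reference dose (hazard quotient < 1), the exposure can be screened
    out; otherwise move to a higher-tiered model with site-specific inputs."""
    hq = screening_dose / rfd
    if hq < 1.0:
        return "screen out: not likely to be of concern"
    return "refine: run a higher-tiered model with site-specific data"

print(screening_decision(2e-4))  # HQ ≈ 0.07, below 1 even with high-end defaults
print(screening_decision(5e-3))  # HQ ≈ 1.7, conservative estimate cannot rule out concern
```

The asymmetry is deliberate: because the screening tier overpredicts by design, a "screen out" result is robust, whereas an exceedance only triggers refinement, not a conclusion of harm.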
Some of the approaches that have been used to evaluate the exposure assessment models used by the US EPA include comparing the structure, model inputs, and results of one model to another (i.e., model-to-model comparisons); comparing modelled estimates with measured or field data; and comparing modelled estimates with biomonitoring data (e.g., urine, blood). Our overview of exposure assessment models currently supported and used by the US EPA complements and expands prior efforts to identify and summarise exposure assessment tools and models used by selected US EPA programme offices (Furtaw, 2001; Daniels et al., 2003). This work also supports recent and ongoing efforts at the US EPA to characterise and evaluate its models (see Section 3.2 below). The current chapter serves as a readily available, concise reference that should provide a useful resource for modellers and practitioners of exposure and risk, as well as experimental scientists, regulators and other professionals responsible for addressing the implications of the use of these models.
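Comparisons of modelled estimates with measured or field data are often summarised with simple performance statistics. The sketch below is an illustration rather than an Agency procedure: the two statistics (fractional bias and normalised mean square error) are standard in the air quality model evaluation literature, and the paired concentration arrays are hypothetical.

```python
def fractional_bias(modelled, observed):
    """Fractional bias: 0 for perfect agreement; positive means overprediction."""
    mm = sum(modelled) / len(modelled)
    mo = sum(observed) / len(observed)
    return 2.0 * (mm - mo) / (mm + mo)

def nmse(modelled, observed):
    """Normalised mean square error: 0 for perfect agreement."""
    mm = sum(modelled) / len(modelled)
    mo = sum(observed) / len(observed)
    mse = sum((m - o) ** 2 for m, o in zip(modelled, observed)) / len(observed)
    return mse / (mm * mo)

# Hypothetical paired annual-average concentrations (ug/m3) at four monitors
modelled = [12.0, 8.5, 15.2, 6.8]
observed = [10.0, 9.0, 14.0, 7.5]

print(f"FB   = {fractional_bias(modelled, observed):+.3f}")
print(f"NMSE = {nmse(modelled, observed):.3f}")
```

A positive fractional bias here would be consistent with the conservative tendency of screening-level models to overpredict.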
3.2 BACKGROUND INFORMATION
A number of Agency guidelines, programmes and policies have shaped or provided the underlying basis for the development and refinement of the exposure assessment models used by the US EPA. For example, the US EPA’s (1992) Guidelines for Exposure Assessment provides a broad discussion of the use of mathematical modelling in the absence of field data when evaluating human exposures. Specifically, the following five general aspects of a modelling strategy are discussed in the 1992 Guidelines:
• setting objectives;
• model selection;
• obtaining and installing the code;
• calibrating and running the computer model; and
• validation and verification.
In a subsequent consultation by the US EPA's SAB (2006a), it was recommended that the 1992 Guidelines be updated to cover advances and changes in the theory and practice of exposure assessment (e.g., in areas such as probabilistic analyses, exposure factors, aggregate exposure and cumulative risk, community-based research, and consideration for susceptible populations and life stages), and include a greater emphasis on specific issues related to exposure modelling, such as model evaluation, spatial aspects, and timing. Efforts to update the 1992 Guidelines are currently under way within the US EPA's Risk Assessment Forum (expected to be completed in 2009), and will include an expanded discussion of the role of mathematical modelling in conducting exposure assessments, as well as some specific examples of the US EPA's exposure models (Bangs, 2008, personal communication).1 The US EPA also established CREM in 2000 to promote consistency and consensus among environmental model developers and users, with a focus on establishing, documenting and implementing criteria and best management practices within the US EPA. In their initial efforts, CREM developed a preliminary online database, called the Models Knowledge Base, that includes an inventory and descriptions of more than 100 environmental models currently used by the US EPA. A report entitled Draft Guidance on the Development, Evaluation, and Application of Regulatory Environmental Models was also prepared by CREM: this discusses the role of modelling in the public policy process, and provides guidance to those who develop, evaluate and apply environmental models (Pascual et al., 2003). This report recommended that best practices be used to help determine when a model (despite its uncertainties) can be appropriately used to inform a decision, such as subjecting the model to peer review, assessing the quality of the data used, corroborating the model with reality, and performing sensitivity and uncertainty analyses.
In its external peer review of CREM's draft report, the US EPA's SAB (2006b) concluded that it provides a comprehensive overview of modelling principles and best practices, but recommended a number of ways in which the report and the Models Knowledge Base could be improved, including the need to gather or develop additional information about the framework and limitations of various models. The NAS also reviewed evolving scientific and technical issues related to the development, selection and use of computational and statistical models in the regulatory process at the US EPA. In its report Models in Environmental Regulatory Decision Making, the NAS (2007) recommended the following 10 principles to improve the US EPA's model development and evaluation process:
• evaluate a model through its entire life cycle;
• ensure appropriate peer review of a model;
• perform an adequate (quantitative) uncertainty analysis;
• use adaptive strategies to coordinate data collection and modelling efforts;
• conduct retrospective reviews of regulatory models;
• ensure model parsimony;
• carefully extrapolate from models;
• create an open evaluation process during rule-making;
• document the origin and history of each model; and
• improve model accessibility to stakeholders and others.
Efforts are currently under way by CREM to update the Models Knowledge Base and to respond to the NAS’s recommendations to perform life cycle evaluations and retrospective analyses of existing US EPA models (Gaber, 2008, personal communication).2 CREM also recently updated its draft guidance document in response to comments and recommendations from the SAB and NAS (US EPA, 2008a). A number of generic Agency-wide guidance documents and policies have also influenced the US EPA’s modelling efforts and underlying databases. For example, the US EPA’s (2002a) Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by the Environmental Protection Agency contains policy and procedural guidance related to the quality of information that is used and disseminated by the US EPA. The mandatory Agency-wide Quality System also provides policy and programme requirements that help ensure that the US EPA organisations maximise the quality of environmental information by using a graded approach to establish quality criteria that are appropriate for the intended use of the information and resources available (US EPA, 2000a). Similarly, the US EPA’s (2006a) System Life Cycle Management Policy helps to ensure continued enhancements and improvements to the life cycle management of the US EPA’s data and databases. Additionally, the US EPA’s Peer Review Policy and accompanying Peer Review Handbook encourage the peer review of all scientific and technical information and work products intended to inform or support Agency decisions, including technical and regulatory models published by the Agency (US EPA, 2006b). These and other ongoing activities support the Agency’s continued efforts to ensure the quality, transparency and reproducibility of the information in underlying databases and models used and disseminated by the US EPA.
3.3 METHODS
The models included in this chapter were identified primarily by searching the US EPA's website (www.epa.gov) using search terms such as "exposure assessment", "exposure model" and "environmental modelling". Materials contained within individual web pages developed by selected US EPA programme offices and research laboratories and centres related to exposure modelling and tools were also reviewed, including those by OPPT,3 OAR's Technology Transfer Network SCRAM,4 ORD's CEAM 5 and ORD's NERL.6 Additionally, all of the entries listed in the CREM Models Knowledge Base were reviewed and used to identify relevant exposure assessment models. Finally, scientists and managers within different US EPA programme offices, including ORD, OAR, OPPT, OPP, OW and OSWER, were contacted about their knowledge of other relevant models not already identified.
Several criteria were used to determine which models were ultimately included in or excluded from our overview. First, any model identified whose primary purpose is to estimate human or ecological exposures, and where the model outputs are expressed in units of exposure or dose, was included, as this was the primary focus of our review. These models were later categorised as either exposure models or integrated fate/transport and exposure models. One exception was the All-Ages Lead Model, which was excluded from this chapter because it was developed for a specific purpose (i.e., risk assessment for lead NAAQS), and is not readily transferable to other applications. Second, only a subset of those models whose primary purpose is to assess the fate and transport of pollutants in the environment, and where the modelled outputs are expressed in units of ambient media concentration, were included, with an emphasis on those fate/transport models that are the most frequently used by the US EPA for conducting exposure assessments, or which have linkages to the other exposure models included here. A complete listing of all fate/transport models was therefore considered to be outside the scope of the current assessment. Third, exposure assessment models no longer actively supported by the US EPA (i.e., the US EPA no longer provides technical support related to these models, and these models have not been updated) were excluded, although these models may still be acceptable for use outside the Agency and continue to be made available to the public. Finally, a detailed characterisation of broader modelling frameworks (i.e., computerised modelling systems that integrate or allow for interchanging various models and model components) was outside the scope of this chapter.
Information about each of the exposure assessment models was obtained from several sources. First, the available user and technical manuals, website summaries, workshop or technical presentations, Federal Register notices, US EPA staff papers, external peer-reviewed reports, and published papers were reviewed. Second, extensive discussions and interviews were conducted with each of the model developers or project managers at the US EPA in order to fill in information gaps and obtain a better perspective on the history, development, applications, and strengths and limitations of each model. Third, a number of the models were run using real or hypothetical data by at least one of the co-authors (either prior to or during this review) to obtain a better understanding of the model inputs and outputs, key assumptions, default options, and interactions with other models. Fourth, a search of the published literature, selected journals and recent conference proceedings was conducted to help identify peer-reviewed publications and presentations that discuss how these models have been evaluated or applied in different settings.
3.4 RESULTS
A total of 35 exposure assessment models that are currently supported and used by the US EPA were identified for inclusion in this chapter. These include selected fate/transport models (N = 12), exposure models (N = 15) and integrated fate/transport and exposure models (N = 8). Table 3.2 provides a complete listing of these models, including the full model name, the supporting US EPA programme office, and the website URL.

Table 3.2: Exposure assessment models currently used by the US EPA, by model category.

Model | Full model name | Supporting programme office | Website URL

Fate/transport models:
ISC | Industrial Source Complex | OAQPS | http://www.epa.gov/scram001/dispersion_alt.htm#isc3
AERMOD | AMS/EPA Regulatory Model | OAQPS | http://www.epa.gov/scram001/dispersion_prefrec.htm#aermod
ASPEN | Assessment System for Population Exposure | OAQPS | http://www.epa.gov/ttn/atw/nata1999/aspen99.html
CMAQ | Community Multiscale Air Quality | NERL | http://www.epa.gov/AMD/CMAQ/cmaq_model.html
CAMx | Comprehensive Air quality Model with extensions | OAQPS | http://www.camx.com
FIRST | FQPA Index Reservoir Screening Tool | OPP | http://www.epa.gov/oppefed1/models/water/first_description.htm
GENEEC | GENeric Estimated Environmental Concentration | OPP | http://www.epa.gov/oppefed1/models/water/geneec2_description.htm
SCIGROW | Screening Concentration in GROund Water | OPP | http://www.epa.gov/oppefed1/models/water/scigrow_description.htm
PRZM | Pesticide Root Zone Model | NERL | http://www.epa.gov/ceampubl/gwater/przm3
EXAMS | Exposure Analysis Modeling System | NERL | http://www.epa.gov/ceampubl/swater/exams/index.htm
WASP | Water Quality Analysis Simulation Program | NERL | http://www.epa.gov/athens/wwqtsc/html/wasp.html
EPANET | – | NRMRL | http://www.epa.gov/nrmrl/wswrd/dw/epanet.html

Exposure models:
HAPEM | Hazardous Air Pollutant Exposure Model | OAQPS | http://www.epa.gov/ttn/fera/human_hapem.html
APEX | Air Pollutants Exposure Model | OAQPS | http://www.epa.gov/ttn/fera/human_apex.html
SHEDS-Air Toxics | Stochastic Human Exposure and Dose Simulation for Air Toxics | NERL | http://www.epa.gov/AMD/AirToxics/humanExposure.html
SHEDS-PM | Stochastic Human Exposure and Dose Simulation for Particulate Matter | NERL | http://www.epa.gov/heasd
MCCEM | Multi-Chamber Concentration and Exposure Model | OPPT | http://www.epa.gov/oppt/exposure/pubs/mccem.htm
WPEM | Wall Paint Exposure Assessment Model | OPPT | http://www.epa.gov/oppt/exposure/pubs/wpem.htm
IAQX | Indoor Air Quality and Inhalation Exposure – Simulation Tool Kit | NRMRL | http://www.epa.gov/appcdwww/iemb/iaqx.htm
SWIMODEL | Swimmer Exposure Assessment Model | OPP | http://www.epa.gov/oppad001/swimodel.htm
PIRAT | Pesticide Inert Risk Assessment Tool | OPP | http://www.epa.gov/oppt/exposure/pubs/pirat.htm
DEEM™ | Dietary Exposure Evaluation Model | OPP | http://www.exponent.com/deem_software
Calendex™ | Calendex™ | OPP | http://www.exponent.com/calendex_software/#tab_overview
CARES™ | Cumulative and Aggregate Risk Evaluation System | OPP | http://cares.ilsi.org
LifeLine™ | LifeLine™ | OPP | http://www.thelifelinegroup.org
SHEDS-Multimedia | Stochastic Human Exposure and Dose Simulation Model for Multimedia, Multipathway Chemicals | NERL | http://www.epa.gov/scipoly/sap/meetings/2007/081407_mtg.htm
SHEDS-Wood | Stochastic Human Exposure and Dose Simulation for Wood Preservatives | NERL | http://www.epa.gov/heasd/sheds/cca_treated.htm

Integrated fate/transport and exposure models:
HEM3 | Human Exposure Model | OAQPS | http://www.epa.gov/ttn/fera/human_hem.html
PERFUM | Probabilistic Exposure and Risk Model for FUMigants | OPP | http://www.exponent.com/ProjectDetail.aspx?project=450
EPA's Vapor Intrusion Model | EPA's Johnson and Ettinger (1991) Model for Subsurface Vapor Intrusion into Buildings | OSWER | http://www.epa.gov/oswer/riskassessment/airmodel/johnson_ettinger.htm
ChemSTEER | Chemical Screening Tool for Exposures & Environmental Releases | OPPT | http://www.epa.gov/oppt/exposure/pubs/chemsteer.htm
E-FAST | Exposure and Fate Assessment Screening Tool Version 2.0 | OPPT | http://www.epa.gov/oppt/exposure/pubs/efast.htm
IGEMS | Internet Geographical Exposure Modeling System | OPPT | http://www.epa.gov/oppt/exposure/pubs/gems.htm
TRIM | Total Risk Integrated Methodology | OAQPS | http://www.epa.gov/ttn/fera/trim_gen.html
3MRA | Multimedia, Multi-pathway, Multi-receptor Exposure and Risk Assessment | NERL | http://www.epa.gov/ceampubl/mmedia/3mra/index.htm
3.4.1 Fate/transport models
The fate/transport models included here are all media-specific, and include selected air quality and dispersion models, and surface water and drinking water models (see Table 3.3). These models are discussed in more detail in the subsections below.

Air quality and dispersion models

The identified air quality and dispersion models are similar in that they predict ambient air concentrations of a pollutant at a given location for a specified duration of time. However, these models differ in regard to their complexity, and their spatial and temporal level of analysis. These models are currently used by the US EPA and other stakeholders in a variety of applications, and many were originally developed to meet statutory requirements, such as for attainment demonstrations or to satisfy permitting requirements. Recommended or preferred air quality and dispersion models are listed in the US EPA's (2005a) Guideline on Air Quality Models, which was first published in 1978 to provide guidance on model use, and ensure consistency and standardisation of model applications. ISC (US EPA, 1995) and AERMOD (US EPA, 2004a, 2008b) represent steady-state Gaussian plume dispersion models (i.e., source-based dispersion models) that simulate the fate of chemically stable airborne pollutants based on local emission sources, and estimate airborne concentrations at different grid locations (Isakov et al., 2006). ISC (long-term, ISCLT, or short-term, ISCST) is a long-standing US EPA model that was historically the preferred point source model for use in simple terrain owing to its past use, public familiarity and availability. This model has been used extensively to analyse impacts from a single or a few facilities, as well as for urban-wide air quality modelling of air toxic pollutants.
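The steady-state Gaussian plume formulation underlying source-based models such as ISC can be sketched in a few lines. This is the textbook plume equation with ground reflection, not the EPA source code; the power-law dispersion coefficients are hypothetical stand-ins for the stability-class-dependent curves a real model would use.

```python
import math

def plume_concentration(q, u, x, y, z, h):
    """Ground-reflected Gaussian plume concentration (g/m3).

    q: emission rate (g/s); u: wind speed (m/s); x, y, z: downwind,
    crosswind and vertical receptor coordinates (m); h: effective
    stack height (m).
    """
    sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)   # hypothetical fit
    sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)   # hypothetical fit
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# 10 g/s source, 5 m/s wind, 50 m effective stack height;
# receptor 1 km downwind on the plume centreline at ground level
c = plume_concentration(q=10.0, u=5.0, x=1000.0, y=0.0, z=0.0, h=50.0)
print(f"{c * 1e6:.1f} ug/m3")
```

The formulation is steady-state: for fixed meteorology, each receptor gets a single time-invariant concentration, which is why these models are run hour by hour against a meteorological record.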
However, the US EPA recently recommended that AERMOD be used as a replacement for ISC because of its more advanced technical formulation, and this model is now the preferred Agency model for assessing compliance with regulatory requirements such as NSR/PSD regulations. Both ISC and AERMOD are currently used as the atmospheric dispersion model in a number of other exposure assessment models. ASPEN (US EPA, 2000b) is a population-based air dispersion model that couples receptor point concentrations (estimated from ISCLT) with a GIS application to yield average airborne concentration at the census tract (block) level. This model has historically been used to evaluate ambient air concentrations of HAPs under the US EPA’s (2006c) NATA programme. CMAQ (Byun and Ching, 1999; US EPA, 2007a) and CAMx (ENVIRON, 2006) represent dynamic photochemical air quality dispersion models (i.e., grid-based chemical transport models) that incorporate complex atmospheric chemistry to simulate the fate of chemically reactive airborne pollutants in the general background on a regional scale (Isakov et al., 2006). CMAQ is an advanced model that can be used to assess multiple air quality issues, including tropospheric ozone, fine particles,
toxics, acid deposition and visibility degradation. It has been used to support various Agency regulations, such as the US EPA's (2005b) CAMR. CAMx is another advanced model that is typically used to evaluate gases and particulates (e.g., ozone, PM, air toxics, mercury). This model has been widely used by the US EPA (2006d) to support assessments related to the NAAQS for ozone. Typical inputs to the air quality and dispersion models include source data (e.g., emission rates, stack parameters), meteorological data (e.g., temperature, wind speed and direction, atmospheric stability class), physical and chemical properties (e.g., reactive decay, deposition, secondary transformations), and receptor data (e.g., grid coordinates, census tract locations). Outputs from these models include estimates of maximum or average airborne concentrations (e.g., ppm or µg/m3) over a certain duration (e.g., hourly, daily, annually) at a specified receptor level (e.g., grid, census tract). All of the air quality and dispersion models included here are deterministic models that rely on a single value for each input parameter, and yield point estimates for predicted concentrations. Although none of these models contain stochastic processes to address variability or uncertainty, some of them allow the user to vary specific input parameter values, such as source emission rates. AERMOD also has the ability to perform source contribution analyses. All of the identified models are considered to be higher-tiered models, because they provide average or best estimates of ambient pollutant concentrations. ISC and AERMOD also have screening-level versions that can be used to produce conservative (upper-bound) estimates. The models included here have undergone extensive internal peer reviews (e.g., quality assurance and quality control checks of model code, databases) and varying degrees of external peer review.
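Because these dispersion models are deterministic and, for chemically stable pollutants, linear in the emission rate, varying a source's emission rate simply rescales that source's share of the predicted concentration; this linearity is what makes source contribution analyses straightforward. A schematic illustration (the per-source transfer coefficients and emission rates below are hypothetical, standing in for values a dispersion model would supply):

```python
# Each source's contribution at a receptor = emission rate (g/s) times a
# transfer coefficient (s/m3) that a dispersion model would compute.
transfer = {"stack_A": 2.4e-6, "stack_B": 0.9e-6, "area_C": 1.5e-6}  # hypothetical
emissions = {"stack_A": 10.0, "stack_B": 25.0, "area_C": 4.0}        # hypothetical

contributions = {s: emissions[s] * transfer[s] for s in transfer}
total = sum(contributions.values())

for source, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{source}: {c * 1e6:8.2f} ug/m3  ({100 * c / total:.0f}% of total)")

# Linearity: doubling one source's emission rate doubles only its contribution
emissions["stack_B"] *= 2
assert abs(emissions["stack_B"] * transfer["stack_B"]
           - 2 * contributions["stack_B"]) < 1e-12
```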
In particular, the ISC models have been extensively reviewed, updated and revised over time (e.g., the ISC2 models were developed as replacements for previous model versions, and ISC3 models were based on revisions to algorithms contained in ISC2 models). AERMOD also underwent an external peer review of its formulation, documentation, and evaluation and performance in 1998 (US EPA, 2002b), and was found to be an appropriate replacement for ISC3 for regulatory modelling applications in 2005. ASPEN has been peer-reviewed by the US EPA's SAB as part of the peer review process for the Cumulative Exposure Project (SAB, 1996) and the 1996 NATA programme (SAB, 2001). CMAQ also underwent three formal external peer reviews from 2003 to 2007 (Aiyyer et al., 2007; Amar et al., 2004, 2005), and CAMx underwent an external peer review in 1997 (Kumar and Lurmann, 1997). A number of efforts have also been made to evaluate the air quality and dispersion models. For example, model inter-comparisons have been conducted between ISC and AERMOD (Silverman et al., 2007). AERMOD was also chosen as a replacement for ISC3 for regulatory modelling applications, in part because its model performance was found to be better than that of ISC3 (Paine et al., 1998). The performance of AERMOD has been extensively evaluated by using one set of databases during model formulation and development and another set of databases during final evaluation, which included four short-term tracer studies and six conventional long-term SO2 monitoring databases in a variety of settings (Paine et al., 1998). Evaluations of the ASPEN model have also included assessing how well ambient
Table 3.3: Selected fate/transport models used by the US EPA. Model
Model purpose
Model type
Model inputs
Model outputs
ISC a
Estimates airborne pollutant concentrations from multiple sources (point, volume, area, or open pit) at urban or rural level (distances less than 50 km).
Steady-state Gaussian plume dispersion model.
Source data (emission rate, stack parameters), meteorological data (temperature, wind speed, wind direction, stability), receptor data (height, grid locations), building downwash parameters, terrain parameters (complex, simple), settling and removal processes (dry and wet deposition, precipitation scavenging – shortterm model only).
Maximum concentration (mg m3 ), annual (or seasonal) average concentrations (mg m3 ).
AERMODa Estimates airborne pollutant concentrations from multiple sources (point, volume, area, line) at urban or rural level (distances less than 50 km).
Steady-state Gaussian or non-Gaussian plume dispersion model.
Source data (emission rate, stack parameters), meteorological data (temperature, wind speed, wind direction, stability), receptor data (height, grid locations), building downwash parameters, terrain parameters (complex, simple; hill and point elevation).
Maximum concentration (mg m3 ), annual (or seasonal) average concentrations (mg m3 ).
ASPEN
Steady-state, population-based Gaussian plume dispersion model.
Air dispersion model, Annual average meteorological data concentrations (wind speed, wind (mg m3 ). direction), settling, breakdown, and transformation processes (reactive decay, deposition, secondary formations), census track population data.
Estimates toxic air pollutant concentrations and populationweighted pollutant concentrations for a large number of sources (HAPs) at national level.
75
AN OVERVIEW OF EXPOSURE ASSESSMENT MODELS USED BY THE US EPA
Model characterisation
Model peer review and evaluation
Links to other models
Higher-tiered (allows for varying of source emission rates; can perform source contribution analyses; does not have stochastic process to address variability or uncertainty).
Prior external peer reviews of model (ISC2 models were developed as replacements for previous model version, ISC3 models were based on revisions to algorithms contained in ISC2 models); model intercomparisons with AERMOD have been conducted.
Includes a long-term (ISCLT) and short-term (ISCST) component; SCREEN3 is a screeninglevel version of ISC3; used as atmospheric dispersion model in ASPEN (ISCLT2), IGEMS (ISCLT/ISCST), PERFUM (ISCST3), and 3MRA (ISCST3).
Higher-tiered (allows for varying of source emission rates, can perform source contribution analyses; does not have stochastic process to address variability or uncertainty).
External panel peer review of model formulation, documentation, and evaluation and performance in 1998; model has been tested in various types of environment, and modelling results have been compared with monitoring data and other air dispersion models.
AERSCREEN is a screening-level version of AERMOD; used as atmospheric dispersion model for HEM; predicted air concentrations used in APEX.
Higher-tiered (mapping module is used to interpolate estimated concentrations from the grid receptors to census tract centroids; does not have stochastic process to address variability or uncertainty).
External SAB peer review as part of the peer review of the Cumulative Exposure Project in 1996 and as part of the peer review of NATA in 2001; model results have been compared with monitoring data for HAPs in Texas.
Uses ISCLT2 as atmospheric dispersion model; predicted air concentrations used in HAPEM.
( continued)
76
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
Table 3.3: ( continued ) Model
Model purpose
Model type
Model inputs
Model outputs
CMAQ
Estimates airborne pollutant concentrations (or their precursors) for multiple air quality issues at multiple scales including urban and regional levels.
Grid-based, threedimensional, dynamic, Eulerian photochemical air quality simulation and dispersion model.
Meteorological data, Hourly, daily, emissions data weekly or monthly (processed using the average SMOKE emissions concentrations processor), chemistry- (mg m3 ), annual or multiyear transport data and average models (atmospheric concentrations transport, deposition, (mg m3 ). transformation processes, aerosol dynamics and atmospheric chemistry).
CAMx
Estimates airborne pollutant concentrations for gases and particulates (ozone, PM, air toxics, mercury) at suburban, urban, regional and continental levels.
Grid-based, threedimensional, dynamic, Eulerian photochemical air quality simulation and dispersion model.
Photochemical conditions, surface conditions, initial/ boundary conditions, emissions rates, meteorology.
Hourly, daily, or annual average concentrations (mg m3 ).
FIRST
Estimates pesticide Single-event concentrations in process model. untreated drinking water derived from surface water from non-point sources and aerial drift at watershed level.
Chemical properties (basic), pesticide application rate and frequency (maximum assumed), method of application (aerial, air blast, ground spray), environmental fate data (deposition, adsorption/desorption, degradation), PCA.
Daily peak concentration (mg L1 ), 1- and 10-year annual average concentration (mg L1 ).
77
AN OVERVIEW OF EXPOSURE ASSESSMENT MODELS USED BY THE US EPA
Model characterisation
Model peer review and evaluation
Links to other models
Higher-tiered (simulates all atmospheric and land processes that affect the transport, transformation, and deposition of atmospheric pollutants; does not have stochastic process to address variability or uncertainty).
Model has undergone three formal external peer reviews from 2003 to 2007 (this is a community-supported model that undergoes continuous upgrades and improvements); modelling results have been compared with other atmospheric chemistry models (this is a community-supported model that has a formal process for model evaluation).
Used as air quality model for APEX (alone and combined with AERMOD); additional efforts under way to link with other models.
Higher-tiered (simulates the emission, dispersion, chemical reaction, and removal of pollutants in the troposphere; does not have stochastic process to address variability or uncertainty).
External peer review in 1997; model has been evaluated against observed ozone and PM concentrations in a number of studies.
Input/output file formats are based on the Urban Airshed Model; model has been used to provide inputs to the MENTOR/ SHEDS system.
Conservative (Tier I) screening level (assumes maximum application rates and highest exposure scenario; does not have stochastic process to address variability or uncertainty).
External SAP review of index reservoir scenario in 1998 and PCA methodology in 1999; limited model evaluation.
Designed to mimic a PRZM/EXAMS simulation (with fewer inputs).
( continued)
78
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
Table 3.3: ( continued ) Model
Model purpose
Model type
GENEEC
Estimates pesticide Single-event concentrations in process model. surface water (aquatic organisms) from non-point sources and aerial drift at farm pond level.
Model inputs
Model outputs
Chemical properties (basic), pesticide application rate and frequency (maximum assumed), method of application (aerial, air blast, ground spray), environmental fate data (deposition, adsorption/desorption, degradation).
Daily peak concentration (mg L1 ), 4-, 21-, 60- and 90-day average concentration (mg L1 ).
SCIGROW Estimates pesticide Regression concentrations in model. vulnerable groundwater (shallow aquifers, sandy, permeable soils, substantial rainfall/irrigation) from non-point sources at local level.
Chemical properties Highest average (basic), pesticide concentration application rate (mg L1 ). and frequency (maximum assumed), environmental fate data (adsorption/desorption, degradation), groundwater monitoring data.
PRZM
Chemical and soil Daily loadings properties, pesticide over 36-year application rate and period (mg L1 ). frequency (maximum assumed), method of application (aerial, air blast, ground spray), environmental fate data (deposition, adsorption/desorption, degradation), PCA, site-specific information (daily weather patterns, rainfall, hydrology, management practices).
Estimates pesticide Dynamic, oneand organic dimensional, chemical loadings finite-difference, to surface water or compartmental groundwater from (box) model. point or non-point sources, dry fallout or aerial drift, atmospheric washout, or groundwater seepage at local or regional level.
AN OVERVIEW OF EXPOSURE ASSESSMENT MODELS USED BY THE US EPA

Model characterisation: Conservative (Tier I) screening level (assumes maximum application rates and highest exposure scenario; does not have stochastic process to address variability or uncertainty).
Model peer review and evaluation: No external peer review; limited model evaluation.
Links to other models: Designed to mimic a PRZM/EXAMS simulation (with fewer inputs).

SCIGROW
Model characterisation: Conservative (Tier I) screening level (assumes maximum application rates and based on vulnerable groundwater sites; does not have stochastic process to address variability or uncertainty).
Model peer review and evaluation: External SAP review in 1997; modelled estimates have been compared with groundwater monitoring datasets and other models (PRZM).

PRZM
Model characterisation: Higher-tiered (Tier II) screening level (accounts for impact of daily weather, but uses maximum pesticide application rates and frequencies; stochastic processes address the variability in natural systems, populations, and processes as well as the uncertainty in input parameters using one-stage Monte Carlo).
Model peer review and evaluation: External FIFRA SAP peer review; components of model have been evaluated (e.g., performance of soil temperature simulation algorithms); model calibration using sensitivity analyses and comparisons with other models.
Links to other models: Links subordinate PRZM and VADOFT models; receives data from and inputs data to EXPRESS; can be linked with other models.
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS

Table 3.3: (continued)

EXAMS
Model purpose: Estimates pesticide and organic chemical concentrations in surface water or groundwater from point or non-point sources, dry fallout or aerial drift, atmospheric washout, or groundwater seepage at local or regional level.
Model type: Steady-state, one-dimensional, compartmental (box) model.
Model inputs: Chemical loadings (runoff, spray drift), chemical properties, transport and transformation processes (volatilisation, sorption, hydrolysis, biodegradation, oxidation, photolysis), system geometry and hydrology (volumes, areas, depths, rainfall, evaporation rates, groundwater flows).
Model outputs: Annual daily peak concentration (mg L⁻¹); maximum annual 96-hour, 21-day and 60-day average concentrations (mg L⁻¹); annual average concentration (mg L⁻¹).

WASP
Model purpose: Estimates chemical concentrations (organics, simple metals, mercury) in surface water from point and non-point sources and loadings from runoff, atmosphere, or groundwater for complex water bodies at regional level.
Model type: Dynamic, three-dimensional, compartmental (box) model.
Model inputs: Chemical loadings (runoff, deposition, reaeration), chemical properties, flows (tributary, wastewater, watershed runoff, evaporation), transport and transformation processes (dispersion, advection, volatilisation).
Model outputs: Average concentration per water segment (mg L⁻¹).
EXAMS
Model characterisation: Higher-tiered (Tier II) screening level (accounts for impact of daily weather, but uses maximum pesticide application rates and frequencies; problem-specific sensitivity analysis is provided as a standard model output).
Model peer review and evaluation: External FIFRA SAP peer review; model system evaluation tests have compared model performance against measured data in either a calibration or validation mode.
Links to other models: Used as surface water quality model in 3MRA; includes file-transfer interfaces to PRZM3 terrestrial model and FGETS and BASS bioaccumulation models.

WASP
Model characterisation: Higher-tiered (generates predicted best estimate values for each segment/time period; does not have stochastic process to address variability or uncertainty).
Model peer review and evaluation: No formal peer review, but model applications and components have been published in peer-reviewed literature.
Links to other models: Uses chemical fate processes from EXAMS; can be linked with loading models, hydrodynamic and sediment transport models, and bioaccumulation models; can be linked to FRAMES; under development for inclusion within BASINS framework.
Table 3.3: (continued)

EPANET
Model purpose: Estimates chemical concentrations in treated drinking water delivered to consumers throughout a water distribution piping system over an extended period of time.
Model type: Dynamic, Lagrangian transport model with chemical reactions both in the bulk water and at the pipe wall.
Model inputs: Pipe network characteristics (network topology; pipe sizes, lengths, and roughness; storage tank dimensions; pump characteristics), water usage rates, source locations, chemical input rates, and chemical reaction coefficients.
Model outputs: Chemical concentration (mg L⁻¹) at each time step and water withdrawal point within the piping network (also estimates water age and source contribution).

a AERMOD was promulgated as a replacement to ISC3 on 9 November 2005.
modelling compares with central site monitoring. For example, a recent study found the ASPEN model results to be in substantial agreement with monitored ambient HAP concentrations in a large heterogeneous area in Texas, although poor agreement was found for a few HAPs that were not as well characterised (Lupo and Symanski, 2008). CMAQ has also been compared with other atmospheric chemistry models in a recent study assessing trends in atmospheric mercury deposition (Sunderland et al., 2008), and CAMx modelling results have been evaluated against observed ozone and PM concentrations in a number of studies (Emery et al., 2004; Morris et al., 2004; Wagstrom et al., 2008).

Surface water and drinking water models

The identified surface water and drinking water models also share some similarities and differences. For example, these models simulate the fate and transport of contaminants in the environment in order to ultimately predict loadings to or ambient concentrations in sediment, surface water bodies, ambient groundwater, or drinking water. However, these models differ in regard to relevant pollutants, receptors, and spatial and temporal scales. These models are currently used by the US EPA or other stakeholders in a variety of applications, including aquatic, drinking water and community exposure assessments. Unlike the air quality and dispersion models, the US EPA has not published any recent guidance on recommended or preferred water quality models for use in different applications. FIRST (US EPA, 2001a) and GENEEC (US EPA, 2001b) are screening-level models that estimate pesticide concentrations in simple surface water bodies for use in drinking water exposure assessments and aquatic exposure assessments, respectively.
EPANET
Model characterisation: Higher-tiered (generates predicted best estimate values for each distribution point/time period; does not have stochastic process to address variability or uncertainty, although other researchers have added Monte Carlo extensions to this model).
Model peer review and evaluation: Model structure and equations have been published in peer-reviewed literature; decay kinetics have been tested and evaluated with data collected from multiple water distribution systems; model predictions have been compared with field studies.
Links to other models: EPANET-MSX is an extensible, multispecies chemical reaction modelling system that has been linked to EPANET; EPANET-DPX is a distributed processing version of EPANET; the EPANET engine has been incorporated into several commercial software packages that are widely used by water utilities.
Both of these models are single-event models: i.e., they assume that one single large rainfall/runoff event occurs and removes a large quantity of pesticide from the field to the water at one time. FIRST is based on a relatively large (172.8 ha) index reservoir watershed that was selected to be representative of a number of reservoirs in the central Midwest that are known to be vulnerable to pesticide contamination. This model also accounts for the percentage of a watershed that is planted with specific crops (PCA), the adsorption of pesticides to soil and sediment, pesticide degradation in soil and water, and flow through the reservoir. GENEEC is based on a much smaller (10 ha) standard agricultural field-farm pond scenario that assumes a static water body. Both of these models were derived from higher-tiered (Tier II) models (see below), but they are simpler in design, and require fewer inputs and less time and effort to use. SCIGROW (US EPA, 2001c) is another screening-level model that estimates pesticide concentrations in shallow, vulnerable groundwater for use in drinking water exposure assessments. FIRST, GENEEC and SCIGROW are all used by OPP’s Environmental Fate and Effects Division to provide conservative, screening-level (Tier I) predictions of pesticide concentrations (i.e., EECs) in order to assess human or ecological risks during the registration or re-registration process for a pesticide. PRZM (Suarez, 2006) and EXAMS (Burns, 2000), which were developed within ORD’s CEAM, are also screening-level models that provide more refined (Tier II) estimates of pesticide concentrations in simple surface water bodies for use in aquatic or drinking water exposure assessments. These models account for chemical-specific characteristics, and include more site-specific information regarding application methods and the impact of daily weather patterns on a treated field over time. 
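The single-event assumption behind these Tier I tools reduces to a simple mass balance: one runoff event delivers a fixed fraction of the applied pesticide mass to the receiving water body. A minimal sketch of that calculation, with purely illustrative (not regulatory) parameter values:

```python
# Hypothetical Tier I single-event EEC calculation in the spirit of the
# GENEEC field/farm-pond scenario: one runoff event moves a fixed fraction
# of the applied pesticide into a static pond. All values are illustrative.

def single_event_eec(app_rate_kg_ha, field_area_ha, runoff_fraction,
                     pond_volume_l):
    """Return the estimated environmental concentration (mg/L) in the pond."""
    mass_applied_mg = app_rate_kg_ha * field_area_ha * 1e6  # kg -> mg
    mass_in_pond_mg = mass_applied_mg * runoff_fraction
    return mass_in_pond_mg / pond_volume_l

# 1 kg/ha applied to a 10 ha field; 5% runs off into a 2e7 L pond:
eec = single_event_eec(1.0, 10.0, 0.05, 2e7)  # -> 0.025 mg/L
```

The real models refine this with sorption, degradation and reservoir flow, but the screening logic is this one division.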
These models also assume that a pesticide is washed off the field into a water body by 20–40
rainfall/runoff events per year. However, like the Tier I models, PRZM and EXAMS assume maximum pesticide application rates and frequencies for a vulnerable drinking water reservoir. PRZM is a dynamic, one-dimensional, finite-difference compartmental model that estimates pesticide and organic chemical loadings to surface water by simulating how a contaminant that leaches into soil (e.g., pesticide applied to a crop) is transported and transformed down through the crop root zone and the unsaturated zone and reaches a surface water body. EXAMS then simulates the movement of the pesticide or organic chemical in surface water under either steady-state or dynamic conditions based on loadings from point or non-point sources, dry fallout or aerial drift, atmospheric washout, or groundwater seepage. PRZM and EXAMS have historically been used by OPP to assess agricultural pesticide runoff in water bodies where the pattern and volume of use are known (Tier II assessments). However, EXAMS has also been used by the US EPA to evaluate the behaviour of relatively field-persistent herbicides and to evaluate dioxin contamination downstream from paper mills, and by manufacturing firms for environmental evaluations of newly synthesised materials. EXPRESS (Burns, 2006) is a separate software system that is sometimes used to link the PRZM and EXAMS models together. WASP (Ambrose et al., 2006; Wool et al., 2001) is a higher-tiered, dynamic, three-dimensional compartmental model that simulates pollutant concentrations in the sediment and surface water (by depth and space) in water bodies of different complexity on a regional level. WASP assesses transport and kinetic processes separately, and includes two types of module: water quality modules (eutrophication, heat) and toxicant modules (simple toxicants, non-ionising toxicants, organic toxicants, mercury). This model is applicable primarily to small rivers, lakes and reservoirs.
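The compartmental (box) structure that these Tier II models share can be illustrated with a single well-mixed compartment that receives a constant chemical loading and loses chemical by first-order processes (degradation plus flushing). This is a simplified stand-in for the models' actual numerics, and every value is hypothetical:

```python
# Illustrative one-compartment (box) water-quality model: a constant loading
# into a well-mixed water body with first-order loss, integrated by Euler's
# method. dC/dt = L/V - k*C, with analytic steady state C_ss = L/(k*V).

def simulate_box(loading_mg_day, volume_l, k_loss_per_day, days, dt=0.1):
    """Return the concentration (mg/L) after `days` of constant loading."""
    c = 0.0
    for _ in range(int(days / dt)):
        c += dt * (loading_mg_day / volume_l - k_loss_per_day * c)
    return c

# 5e4 mg/day into a 1e7 L pond with a 0.1/day loss rate:
c_end = simulate_box(loading_mg_day=5e4, volume_l=1e7,
                     k_loss_per_day=0.1, days=365)
c_ss = 5e4 / (0.1 * 1e7)  # analytic steady state, 0.05 mg/L
```

After a year of constant loading the numerical solution has converged to the analytic steady state, which is the kind of long-term average a Tier II screen reports.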
However, because WASP can evaluate very complex water bodies, and integrates spatially across large networks, it is generally used for more detailed or specialised applications. For example, the US EPA's OW has used WASP to study the effects of phosphorus loading, heavy metal pollution and organic chemical pollution, including kepone, PCBs and volatile organics, on a number of aquatic ecosystems. BASINS (US EPA, 2001d) is a separate multipurpose environmental analysis and decision support system that integrates (using a GIS framework) environmental data, analytical tools and modelling programs. This system was designed for use by regional, state and local agencies in performing watershed and water quality-based studies, and is also used by the US EPA's OW to support the development of TMDLs. WASP is currently under development for inclusion within the BASINS system.

EPANET is a hydraulic and water quality model developed within ORD's NRMRL that tracks the flow of drinking water and its constituent concentrations throughout a water distribution piping system (US EPA, 2000c). EPANET can simulate the behaviour of different chemical species, such as chlorine, trihalomethane and fluoride, within pressurised pipe networks over extended periods. This model can also evaluate a wide range of chemical reactions, including the movement and fate of a reactive material as it grows or decays over time (e.g., using nth order kinetics to model reactions in the bulk flow, and zero- or first-order kinetics to model reactions at the pipe wall). EPANET was developed primarily as a research tool to help water utilities maintain and improve the quality of water delivered to consumers, but it can also be used to design water sampling
programmes (e.g., select compliance monitoring locations under the US EPA's Stage 2 Disinfectants and Disinfection Byproducts Rule), perform hydraulic model calibration, and conduct consumer exposure assessments. For example, the US EPA's National Homeland Security Research Center is currently using EPANET to predict the effects of contaminant incidents at specific locations over time within distribution systems (Morley et al., 2007). This model has also been used in various epidemiological investigations, such as to estimate the contribution of contaminated groundwater from various points of entry to the distribution system to specific residences in Toms River, New Jersey (Maslia et al., 2000), and to estimate the flow and mass of tetrachloroethylene in drinking water through a town's network to specific residences in Cape Cod, Massachusetts (Aschengrau et al., 2008).

Primary inputs to the surface water and drinking water models include pesticide application rates and methods, physical and chemical properties, pollutant loadings, hydrology and flows, soil properties and topography, and environmental fate data and transport and transformation processes (e.g., soil degradation, hydrolysis, ionisation and sorption, advective and dispersive movement, volatilisation). EPANET also includes inputs related to the layout and characteristics of the piping system throughout a drinking water distribution network. Outputs from these models include short-term (e.g., daily peak) or longer-term (e.g., annual average) ambient concentrations in the sediment, surface water column, groundwater or drinking water, or time-varying concentrations at specific locations (e.g., distribution point). Most of these models are deterministic, and report either maximum or average values. The Tier I screening-level models are also expected to overestimate pesticide concentrations in water.
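The bulk-water kinetics mentioned above can be sketched for the common first-order case: a parcel of water decays as C(t) = C0·exp(−kb·t) while it travels down a pipe. The pipe geometry and rate constant below are illustrative, not drawn from any EPANET dataset:

```python
import math

# Sketch of first-order bulk decay applied to a single Lagrangian water
# parcel travelling down one pipe: C(t) = C0 * exp(-kb * t), where t is the
# hydraulic travel time. All parameter values are hypothetical.

def parcel_concentration(c0_mg_l, kb_per_hour, pipe_length_m, velocity_m_s):
    """Return the parcel concentration (mg/L) at the downstream end."""
    travel_time_h = pipe_length_m / velocity_m_s / 3600.0
    return c0_mg_l * math.exp(-kb_per_hour * travel_time_h)

# 1.0 mg/L chlorine entering a 3.6 km pipe at 1 m/s (1 h travel time)
# with a bulk decay constant of 0.5 per hour:
c_out = parcel_concentration(1.0, 0.5, 3600.0, 1.0)  # ~0.61 mg/L
```

A full simulation repeats this for every parcel, adds wall reactions, and blends parcels at junctions; the decay law itself is this one line.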
Although few of these models contain stochastic processes to address variability or uncertainty, PRZM uses one-stage Monte Carlo techniques to address the variability in natural systems, populations and processes, as well as the uncertainty in input parameters. A problem-specific sensitivity analysis is provided as a standard model output for EXAMS. Additionally, although the current version of EPANET does not provide probabilistic estimates, other researchers have developed extensions of this model that have Monte Carlo capabilities (Morley et al., 2007). Each of the identified fate/transport models has been peer-reviewed internally, and many have undergone external peer reviews. For example, aspects of the FIRST and SCIGROW models underwent external FIFRA SAP reviews in the late 1990s, which included a review of the US EPA’s index reservoir scenario and PCA methodology. PRZM and EXAMS have also been subjected to an external peer review by the FIFRA SAP. WASP has not had a formal external peer review, but has been heavily cited in the peer-reviewed literature for its application in water quality and eutrophication assessments (Tufford et al., 1999; Wool et al., 2003). The model structure and equations for EPANET have also been published in the peer-reviewed literature (Rossman et al., 1994; Rossman and Boulos, 1996). Although the screening-level surface water and drinking water models have undergone limited model evaluation, components of some of the other models have been evaluated. For example, the performance of soil temperature simulation algorithms was evaluated for PRZM, in which model results were compared and evaluated based on the ability to predict in situ measured soil temperature profiles in an
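One-stage Monte Carlo propagation of the kind described for PRZM can be sketched generically: sample each variable or uncertain input from a distribution, run the model once per draw, and summarise the resulting output distribution. The toy concentration model and the distributions here are purely illustrative:

```python
import random
import statistics

# One-stage Monte Carlo: draw inputs from distributions, evaluate a (here
# trivial) concentration model per draw, then summarise the output spread.
# The model form and all distribution parameters are hypothetical.

random.seed(1)  # reproducible draws

def concentration(app_rate_kg, runoff_fraction, dilution_volume_l):
    return app_rate_kg * runoff_fraction * 1e6 / dilution_volume_l  # mg/L

draws = []
for _ in range(10_000):
    rate = random.uniform(0.5, 1.0)             # applied mass, variable
    frac = random.triangular(0.01, 0.10, 0.05)  # runoff fraction, uncertain
    draws.append(concentration(rate, frac, 2e7))

median = statistics.median(draws)
p95 = sorted(draws)[int(0.95 * len(draws))]  # upper percentile for screening
```

Reporting an upper percentile rather than a single worst case is exactly what distinguishes this from the deterministic Tier I screens.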
experimental plot during a 3-year monitoring study (Tsiros and Dimopoulos, 2007). Additionally, a sensitivity analysis of PRZM and three other pesticide leaching models was performed using a one-at-a-time approach of varying input parameters, in which it was found that parameters that had the largest influence on pesticide loss were generally those predicting the extent of soil sorption (Dubus et al., 2003). The model performance of EXAMS has also been compared against measured data in a number of model system evaluation tests for different chemicals (e.g., dyes, herbicides, insecticides, phenols, other organic chemicals) and environments (e.g., small streams, rivers, ponds, rice paddies, bays). EPANET’s decay kinetics have been tested and evaluated with data collected from multiple water distribution systems, and model simulation results have been compared with field data, other dynamic water quality models, and analytical solutions (Rossman et al., 1994; Rossman and Boulos, 1996). This model also contains a calibration option that allows the user to compare the results of a simulation against measured field data if such data exist for a particular period of time within a distribution system.
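The one-at-a-time approach used in the Dubus et al. (2003) sensitivity analysis can be sketched as follows; the response function and parameter set are hypothetical stand-ins for a real leaching model:

```python
# One-at-a-time (OAT) sensitivity analysis: perturb each input individually
# from its baseline and record the relative change in the model output.
# The toy model below (loss falls with sorption, rises with rainfall and
# application rate) is illustrative only.

def pesticide_loss(params):
    return params["rain_mm"] * params["app_rate"] / (1.0 + params["koc"])

baseline = {"rain_mm": 100.0, "app_rate": 1.0, "koc": 50.0}
base_out = pesticide_loss(baseline)

sensitivity = {}
for name in baseline:
    perturbed = dict(baseline)
    perturbed[name] *= 1.10  # +10%, one parameter at a time
    sensitivity[name] = (pesticide_loss(perturbed) - base_out) / base_out

most_influential = max(sensitivity, key=lambda n: abs(sensitivity[n]))
```

Ranking parameters by the magnitude of these relative changes is how such a study identifies, for instance, that sorption-related inputs dominate pesticide loss.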
3.4.2 Exposure models
The exposure models included here generally focus on either inhalation or multimedia exposures for human receptors at the individual and/or population level (see Table 3.4). HAPEM (Rosenbaum, 2005; Rosenbaum and Huang, 2007) and APEX (US EPA, 2006e) were developed primarily to support Agency regulations on criteria air pollutants (e.g., CO, PM, ozone) and air toxics (e.g., benzene) from mobile or stationary sources at local, urban or national scales. For example, HAPEM was used as the primary exposure model for assessing inhalation exposures to HAPs under the 2007 MSAT rule (US EPA, 2007b). APEX (formerly called the probabilistic NAAQS Exposure Model for Carbon Monoxide, pNEM/CO) was used to estimate inhalation exposures as part of the ozone NAAQS (US EPA, 2007c). HAPEM has historically been used more often than APEX for regulatory purposes, because it relies on relatively few inputs and is easy to use, but this model is better suited for assessing long-term exposures to pollutants on a national scale (e.g. air toxics) rather than shorter-term exposures to pollutants on a local scale (e.g., criteria pollutants). These models have also been used within and outside the Agency for various applications, such as determining priority pollutants under the US EPA’s (2006c) NATA programme, and estimating NO2 exposures from various emission sources, such as on-road vehicles and indoor gas cooking (Rosenbaum et al., 2008). Although TRIM-Expo (US EPA, 2003a) was developed as the ‘‘next generation’’ multimedia exposure model to support these same types of regulations, to date, this model has relied on APEX only for assessing inhalation exposures (i.e., the ingestion exposure module has not yet been completed). 
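The core calculation these inhalation exposure models perform, time-weighting concentrations across the microenvironments in a person's activity pattern, can be sketched simply; the schedule and concentrations below are illustrative, not drawn from any model's defaults:

```python
# Microenvironmental time-weighting of the kind models such as HAPEM and
# APEX build on: a person's daily average exposure is the time-weighted
# mean of the concentrations in each microenvironment visited.

def time_weighted_exposure(schedule):
    """schedule: list of (hours, concentration) pairs for one day."""
    total_hours = sum(h for h, _ in schedule)
    return sum(h * c for h, c in schedule) / total_hours

day = [
    (14.0, 2.0),   # indoors at home
    (8.0, 3.5),    # indoors at work/school
    (1.0, 12.0),   # in-vehicle commuting
    (1.0, 5.0),    # outdoors
]
avg = time_weighted_exposure(day)
```

The full models layer demography, commuting data and Monte Carlo sampling of activity diaries on top of this weighting, but each simulated person-day reduces to this sum.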
SHEDS-Air Toxics (Isakov et al., 2006; Stallings et al., 2008) and SHEDS-PM (Burke, 2005; Burke et al., 2001) were developed by ORD primarily as internal research models to provide state-of-the-art research tools for evaluating inhalation exposures to toxic and particulate air pollutants, respectively. SHEDS-Air Toxics has been used to assess benzene exposures in case studies linking air quality and exposure
models (Isakov et al., 2006), and SHEDS-PM has been used in case studies to evaluate PM2.5 exposures in Philadelphia (Burke et al., 2001). A number of algorithms and improvements developed for inhalation exposure in these two SHEDS models have also been incorporated into the regulatory models, such as HAPEM and APEX.

MCCEM (US EPA, 2007d) and WPEM (US EPA, 2001e, 2007e) represent indoor exposure models that were developed by OPPTS to provide more detailed exposure assessments of indoor air pollutants from sources such as carpeting and wall paint. Although these models could potentially be used for assessing exposures to new chemicals, the level of specificity provided by these models is typically not required for making routine decisions under Section 5 of TSCA. Currently, these models are being used internally by OPP to assess antimicrobial exposures, and in instances where there is a potential concern about a major commodity chemical that requires a more detailed or accurate assessment of potential consumer uses and exposures (e.g., formaldehyde in homes). The WPEM model was also designed as a support tool for industry to assist in the early design of safer new painting products (i.e., products that would not result in exposures of concern). These models have many potential external applications and users, including states, academic researchers, consulting firms, and other governments. For example, MCCEM has been used to perform chemical-specific indoor air exposure assessments for benzene and toluene under the US EPA's VCCEP (ACC, 2006a, 2006b). IAQX (Guo, 2000a, 2000b) is a similar type of indoor exposure model that was developed within ORD primarily for external users conducting high-end exposure research (not for regulatory or internal research purposes).
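Indoor models of this kind rest on a single-zone, well-mixed mass balance, dC/dt = E/V − a·C, where E is the emission rate, V the zone volume and a the air-exchange rate. A sketch of its steady-state solution follows; the room parameters are hypothetical and this simplifies what MCCEM or IAQX actually compute (multiple zones, time-varying emissions, sinks):

```python
# Steady state of the single-zone, well-mixed indoor mass balance
# dC/dt = E/V - a*C, i.e. C_ss = E / (a * V). Values are illustrative.

def steady_state_indoor(emission_mg_h, volume_m3, ach_per_h):
    """Steady-state indoor concentration (mg/m3)."""
    return emission_mg_h / (ach_per_h * volume_m3)

# 50 mg/h emitted into a 30 m3 room ventilated at 0.5 air changes per hour:
c_ss = steady_state_indoor(50.0, 30.0, 0.5)  # ~3.33 mg/m3
```

Doubling the air-exchange rate halves the steady-state concentration, which is why ventilation assumptions dominate these assessments.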
The IAQX model implements over 30 source models and five sink models, and is often used as a teaching tool by various universities (Guo, 2008, personal communication).7 This model has also been used to perform chemical-specific indoor air exposure assessments for acetone (i.e., to estimate exposure concentrations for the nail tip removal scenario) under VCCEP (ACC, 2003). The SWIMODEL was developed by OPP’s Antimicrobial Division as a screening tool to assess exposures to pool chemicals and breakdown products found in swimming pools and spas (US EPA, 2003b). This model is typically used to assess three primary exposure routes (incidental ingestion, dermal absorption, and inhalation), but can also be used to assess other less significant exposure routes. The SWIMODEL generally relies on conservative equations and default input parameter values to calculate worst-case exposures for either competitive or non-competitive swimmers. Although this model can be used to assess exposures in either indoor or outdoor settings, predicted outdoor exposures will tend to be more conservative than predicted indoor exposures. OPP is in the process of updating some of the model input parameters (e.g., recreational exposure time and frequency), but these updates have not yet been incorporated into the model itself. This model is used primarily by OPP to support the registration and re-registration of pesticides, and has been used by pesticide registrants and researchers to assess potential swimmer exposures. PIRAT (US EPA, 2007f) is another common screening-level exposure model used internally by OPP to support the US EPA’s evaluation of new inert ingredients as part of the pesticide risk assessment process. This model generally relies on conservative assumptions, default scenarios and model equations that are codified in the
Table 3.4: Selected exposure models used by the US EPA.

HAPEM
Model purpose: Estimates population-level exposures to criteria pollutants (CO, PM) and air toxics (HAPs) for general population or subgroups at urban and national levels.
Exposure routes: Inhalation (indoors, outdoors, in-vehicle commuting).
Model inputs: Ambient air concentrations (measured or modelled) based on mobile, area or point sources, near-road factor, indoor/outdoor data (ratio), census tract population and demographic data, commuting data, exposure factors, activity patterns.
Model outputs: Monthly, seasonal or annual average concentrations (mg m⁻³).

APEX
Model purpose: Estimates population-level exposures and doses to criteria pollutants (CO, ozone) and air toxics (benzene, chromium) for general population or subgroups at local, urban and consolidated metropolitan levels.
Exposure routes: Inhalation (indoors, outdoors, in-vehicle commuting).
Model inputs: Ambient air concentrations (measured or modelled) based on mobile, area or point sources, indoor/outdoor data (ratio or mass balance model), census tract population and demographic data, commuting data, exposure factors, activity patterns.
Model outputs: 1-hour, 8-hour, daily, monthly or annual average concentrations (ppm or mg m⁻³); delivered doses (mg kg⁻¹ day⁻¹) for CO only.
HAPEM
Model characterisation: Screening level (appropriate for assessing average long-term exposures at urban/national scale; stochastic process addresses variability in activity patterns and behaviours using one-stage Monte Carlo).
Model peer review and evaluation: External SAB peer review of model in 2001 as part of evaluating NATA 1996 data, and model reviewed as part of 2007 MSAT rule; some model components (activity data, microenvironmental factors, commuting data) have been evaluated; model results compared with ASPEN (>30 HAPs) and APEX (benzene).
Links to other models: Uses predicted air concentrations from ASPEN, AERMOD or CMAQ.

APEX
Model characterisation: Higher-tiered (stochastic process addresses variability in population, and accounts for spatial and temporal variability of microenvironmental parameters using one-stage Monte Carlo; new version will also address uncertainty in input parameters using two-stage Monte Carlo).
Model peer review and evaluation: External CASAC peer review in 2006 as part of evaluating ozone NAAQS; extensive evaluation of computer code; some model components (activity data, microenvironmental factors, commuting data) have been evaluated; model results compared with HAPEM (benzene).
Links to other models: Uses predicted air concentrations from AERMOD or CMAQ; serves as inhalation exposure module for TRIM.a
Table 3.4: (continued)

SHEDS-Air Toxics
Model purpose: Estimates individual or population-level exposures and doses to air toxics (benzene, formaldehyde) for general population or subgroups at local and urban levels.
Exposure routes: Inhalation (indoors, outdoors, in-vehicle commuting), dermal (showering/bathing and residues/spillage during refuelling), and ingestion (dietary).
Model inputs: Ambient air concentrations (measured or modelled) based on ambient (outdoor) and non-ambient (indoor) sources, indoor/outdoor data (ratio, mass balance model, or regression equation), residue data and dietary diaries, census tract population and demographic data, commuting data, exposure factors, activity patterns.
Model outputs: Daily average concentrations (mg m⁻³); total daily average exposure (mg m⁻³) and total daily intake dose (mg kg⁻¹ day⁻¹); contribution of ambient and non-ambient sources to each.

SHEDS-PM b
Model purpose: Estimates individual or population-level exposures and doses to particulate matter for different size fractions (fine, coarse, ultrafine) and species (ions, metals) for general population or subgroups at local and urban levels.
Exposure routes: Inhalation (indoors, outdoors, in-vehicle commuting).
Model inputs: Ambient air concentrations (measured or modelled) based on ambient (outdoor) and non-ambient (indoor) sources, indoor/outdoor data (ratio, mass balance model, or regression equation), census tract population and demographic data, commuting data, exposure factors, activity patterns.
Model outputs: Daily average concentrations (mg m⁻³); total daily average exposure (mg m⁻³), total daily intake dose (mg), and total daily deposited dose (mg); contribution of ambient and non-ambient sources to each.
SHEDS-Air Toxics
Model characterisation: Higher-tiered simulation model (stochastic process addresses variability within and between individuals in a population, and uncertainty in input parameters at event level, using two-stage Monte Carlo).
Model peer review and evaluation: Model has not been externally peer-reviewed because it is an internal research model that is continually modified (but peer-reviewed publications related to its application are expected); model has not been evaluated because of limited access to good air toxics data (but NERL's recently completed exposure study in Detroit will be used to evaluate this model).
Links to other models: Some algorithms have been incorporated into HAPEM and APEX; model combines SHEDS-PM approach for air pollutants with SHEDS-Multimedia algorithms for dietary and dermal exposures; case study applications with CMAQ, AERMOD modelled concentrations.

SHEDS-PM
Model characterisation: Higher-tiered (stochastic process addresses variability across population in exposure factors and uncertainty in input parameters using two-stage Monte Carlo).
Model peer review and evaluation: External peer review is currently under way; model predictions were compared with exposure field studies (community and personal exposures), including data from the NERL PM Panel Study in RTP (new project will compare current model version with same data).
Links to other models: Some algorithms have been incorporated into HAPEM and APEX; model capable of using predicted PM air concentrations from AERMOD or CMAQ.
Table 3.4: (continued)

MCCEM
Model purpose: Estimates individual-level exposures and doses to chemicals released from consumer products or materials in residential setting.
Exposure routes: Inhalation (indoors).
Model inputs: Indoor environment (e.g., type of residence, zone volumes, interzonal air flows, air exchange rate), pollutant emission rate (as a function of time), exposure factors, occupant activity patterns.
Model outputs: Single-event dose (mg); 15-min, 1-hour, and daily average concentration (mg m⁻³); ADD, LADD and acute peak dose (mg kg⁻¹ day⁻¹).

WPEM
Model purpose: Estimates individual-level exposures and doses to chemicals released from wall paint (latex, oil based) applied using a roller or brush for residents or workers in residential setting.
Exposure routes: Inhalation (indoors).
Model inputs: Chemical and paint-specific data (e.g., type of paint, chemical weight fraction), painting scenario (e.g., room size, building type, air exchange rate), exposure factors, occupancy and activity patterns (e.g., weekdays/weekends, during painting event).
Model outputs: Highest instantaneous, 15-min, 8-hour, and lifetime average daily concentration (mg m⁻³); LADD (mg kg⁻¹ day⁻¹).

IAQX
Model purpose: Estimates individual-level exposures to chemicals in residential setting (general simulation program and specific programs for indoor paint, indoor spills, particles, and flooring/carpeting).
Exposure routes: Inhalation (indoors).
Model inputs: Pollutant sources (e.g., number), building features (e.g., volume of zone, air flow rates, number of interior surface types), ventilation systems and interior sinks (adsorption and desorption rates), exposure factors, emission rate.
Model outputs: Concentration (mg m⁻³) or exposure (mg m⁻³ × time) – based only on breathing rate, not body weight.
93
AN OVERVIEW OF EXPOSURE ASSESSMENT MODELS USED BY THE US EPA
Model characterisation
Model peer review and evaluation
Links to other models
Higher-tiered (stochastic process addresses uncertainty in input parameters using one-stage Monte Carlo).
External letter peer review in Uses WPEM for paint 1998; model published in scenarios. peer-reviewed literature; model was evaluated using data from test house and through comparisons against other models (model compared well with monitoring data and other models).
Higher-tiered (does not have stochastic process to address variability or uncertainty).
External letter peer review in 1998; model was evaluated extensively by creating emission rate algorithms based on small-chamber studies and comparing modelled estimates with measured data in the US EPA test home (results were comparable).
Model is subset of MCCEM (i.e., engine and mathematics are from MCCEM, but tailored for specific application of wall paint); model is based on data generated from IAQX.
Higher-tiered (designed mainly for advanced users; requires more knowledge to use than other indoor air quality models; does not have stochastic process to address variability or uncertainty).
External peer review via published papers; limited model evaluation (except for paint model) because model based on existing models.
Model implements over 30 source models and five sink models; complements and supplements existing IAQ simulation programs (e.g., RISK, MCCEM, CONTAMW); data generated from model is used in WPEM.
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
Table 3.4: (continued)
Model
Model purpose
Exposure routes
Model inputs
Model outputs
SWIMODEL
Estimates individual-level exposures and doses to pool chemicals and breakdown products in swimming pools and spas.
Inhalation (indoor, outdoor), dermal, incidental ingestion (other routes include buccal/sublingual, nasal/orbital, aural).
Physico-chemical data, water concentration (expected mean and maximum at the maximum label use rate), air concentration (empirically measured or estimated using Henry’s law or Raoult’s law), exposure factors, dermal permeability values, absorption rates.
Potential dose rate or intake (mg event⁻¹), ADD (mg kg⁻¹ day⁻¹), LADD (mg kg⁻¹ day⁻¹).
PIRAT
Estimates individual-level exposures and doses to pesticide inert ingredients in residential setting.
Inhalation (indoor, outdoor), dermal (product, pets, dust, surface), incidental ingestion (hand, toy, grass, dirt).
Type of product, product formulation, function of inert, application technique and rate, exposure factors, use patterns.
ADD, LADD and acute potential dose (mg kg⁻¹ day⁻¹); MOE.
DEEMTM
Estimates individual or population-level dietary exposures and doses to pesticides for the general population or subgroups in residential settings.
Dietary ingestion (food, water).
Residues (pesticides and analytes), consumption data, CSFII population data, exposure factors, toxicity data.
Average daily intake (mg kg⁻¹); acute dose, ADD and LADD (mg kg⁻¹ day⁻¹); MOE and % RfD.
Model characterisation
Model peer review and evaluation
Links to other models
Conservative screening level (uses default equations and parameter values, and assumes worst-case exposures; does not have stochastic process to address variability or uncertainty).
Limited external peer review (data on exposure duration and frequency published in peer-reviewed literature); limited model evaluation (indirect validation by comparing model results with biomonitoring studies bridged using PBPK models).
Used, along with other OPP models, in support of pesticide registration and re-registration decisions.
Conservative screening level (uses default assumptions and scenarios, and assumes continuous exposures to predicted concentrations; does not have stochastic process to address variability or uncertainty).
External letter (modified) peer review in 2004 (residential SOPs were previously reviewed by an SAP); limited model evaluation because based on default residential SOPs.
Relies on same assumptions and SOPs as active ingredients.
Higher-tiered (stochastic process addresses variability in residue values using one-stage Monte Carlo).
External SAP peer review in 1998; validation of model outputs using other Monte Carlo software when using same inputs; modelling results compared with other dietary and aggregate exposure models (SHEDS-Multimedia) and with biomonitoring data.
Used as dietary exposure model in Calendex.
Table 3.4: (continued)
Model
Model purpose
Exposure routes
Model inputs
Model outputs
CalendexTM
Estimates individual or population-level dietary and residential exposures and doses to pesticides for the general population or subgroups in residential setting.
Inhalation, dermal, dietary ingestion (food, water), and incidental ingestion (dust, surfaces).
Product use, residues and concentrations, consumption data, contact probabilities, degradation rates, CSFII population data, activity patterns, exposure factors, toxicity data.
Average daily intake (mg kg⁻¹); acute dose, 21-day average dose, ADD and LADD (mg kg⁻¹ day⁻¹); MOE and % RfD.
CARESTM
Estimates individual or population-level dietary and residential exposures and doses to pesticides for the general population or subgroups in residential settings.
Inhalation, dermal, dietary ingestion (food, water), and incidental ingestion (dust, surfaces).
Product use, residues and concentrations, consumption data, contact probabilities, degradation rates, US Census (PUMS) population data, activity patterns, exposure factors, toxicity data.
Daily, short-term or intermediate (2–30 days or 1–3 months), and 1-year average dose and toxic equivalent dose (mg kg⁻¹ day⁻¹); MOE, hazard index, or toxicity equivalence factor.
LifeLineTM
Estimates individual or population-level dietary and residential exposures and doses to pesticides for the general population or subgroups in residential settings.
Inhalation (shower/bath), dermal (shower/bath), dietary ingestion (food, water), and incidental ingestion (dust, surfaces, soil, pets, hand-to-mouth).
Product use, residues and concentrations, consumption data, contact probabilities, degradation rates, NCHS Natality population data, activity patterns, exposure factors, absorption.
Maximum daily absorbed dose, average daily absorbed dose, average seasonal absorbed dose, and 1-year average absorbed dose (mg kg⁻¹ day⁻¹); MOE, % RfD, or toxicity equivalence factor.
Model characterisation
Model peer review and evaluation
Links to other models
Higher-tiered (stochastic process addresses variability in residue values and contact levels using one-stage Monte Carlo).
External SAP review in 2000, and high level of external review by stakeholders; extensive QA/QC testing and verification of processes; modelling results compared with other dietary and aggregate exposure models (CARES, LifeLine) and with biomonitoring data for chlorpyrifos.
Uses DEEM as dietary exposure model; uses PRZM/EXAMS to estimate drinking water concentrations.
Higher-tiered (stochastic process addresses variability in residue levels, consumption, and activity patterns using one-stage Monte Carlo; contribution analysis function can be used to conduct sensitivity analyses).
External SAP review in 2002; modelling results compared with other dietary and aggregate exposure models (Calendex, LifeLine) and with biomonitoring data for carbaryl; acute dietary exposure values compared with residue data (monitoring and market survey data) for chlorpyrifos.
Builds off former REX model.
Higher-tiered (stochastic process addresses variability in population by age and season using one-stage Monte Carlo).
External SAP review in 1999 and 2000; modelling results compared with other dietary and aggregate exposure models (Calendex, CARES).
Probabilistic methodologies and basic approaches of the software are used in the Tribal LifeLineTM model.ᶜ
Table 3.4: (continued)
Model
Model purpose
Exposure routes
Model inputs
Model outputs
SHEDS-Multimediaᵈ
Estimates individual or population-level exposures and doses to pesticides and other chemicals (metals, persistent bioaccumulative toxins) for the general population or subgroups in residential settings.
Inhalation, dermal, dietary ingestion (food, water), and incidental handto-mouth and object-to-mouth ingestion (dust, soil, surface residues).
Product use (optional), residues and concentrations (modelled or measured), application/decay rates (optional), consumption data, media contact probabilities, US Census population data (built in), activity patterns (CHAD data built in), exposure factors, absorption rates.
Average daily exposure (mg kg⁻¹), average daily absorbed dose (mg kg⁻¹ day⁻¹).
SHEDS-Wood
Estimates individual or population-level exposures and doses to wood preservatives (arsenic, chromium) for children in frequent contact with CCA-treated decks and playsets.
Dermal and incidental hand-to-mouth ingestion (soil, wood residue).
Residues and concentrations, contact probabilities, transfer efficiencies, US Census population data, activity patterns (longitudinal), exposure factors, absorption rates.
15-day and 90-day average absorbed doses, average daily absorbed dose, and lifetime average daily absorbed dose (mg kg⁻¹ day⁻¹).
a Although there is a TRIM-Expo module within the TRIM Framework, it currently relies solely on APEX for evaluating inhalation exposures.
b SHEDS-ozone model is currently under development.
c New version has been developed for tribal communities and other focused populations (Tribal LifeLineTM) that relies on the same probabilistic methodologies and basic approaches as existing software (changes made to some software operational functions, exposure opportunity modules, and knowledgebases).
d SHEDS-dietary module is currently being incorporated into SHEDS-Multimedia (version 4); additional enhancements will include cumulative algorithms and other changes to address 2007 SAP comments.
US EPA’s (1997a) residential SOPs to assess aggregate residential exposures to pesticide inert ingredients. Prior to August 2006 these residential SOPs were also used by OPP for screening purposes to assess REDs for active pesticides. Reg Reviews have replaced the former RED process, and OPP is in the process of upgrading the
Model characterisation
Model peer review and evaluation
Links to other models
Higher-tiered (stochastic process addresses variability in population and uncertainty in input parameters using bootstrap method and two-stage Monte Carlo; several sensitivity analysis methods applied separately).
External SAP peer review in 2002, 2003 and 2007; model results compared with other dietary and aggregate exposure models (DEEM, CARES, Calendex, LifeLine) and with data collected from NHEXAS, NHANES, US EPA/ORD/NERL and other field measurement studies.
Can be linked with air quality and PBPK models (used in MENTOR Framework; interfaces with ORD’s ERDEM).
Higher-tiered (stochastic process addresses variability in population and uncertainty in input parameters using bootstrap method and two-stage Monte Carlo; sensitivity analysis option).
External SAP peer review in 2002 and 2003; two published papers; US EPA reports; model results compared with other CCA exposure models (produced similar mean estimates).
Uses modified version of LifeLine to calculate body weight and surface area for children.
residential SOPs (e.g., to include distributions) for use in future Reg Reviews. Other models that are used by OPP to support Reg Reviews, and which predict dietary and/or residential aggregate pesticide exposures, include DEEMTM (Kidwell et al., 2000), CalendexTM (Peterson et al., 2000), CARESTM (Farrier and Pandian, 2002; Young et
al., 2006) and LifeLineTM (LifeLine Group, Inc., 2000; Price et al., 2001). These models were designed in response to the 1996 FQPA to assess multimedia pesticide exposures in residential settings, although they can also be used for other chemicals and institutions (e.g., school, office). DEEMTM is the model most frequently used by OPP to assess exposures, because many pesticides require refinement only in dietary assessments. CalendexTM is another model that is often used by OPP to assess both dietary exposures (based on DEEMTM) and multi-pathway residential exposures to pesticides in support of FQPA. For example, CalendexTM was the primary model used to evaluate the 31 OP pesticides under the OP cumulative risk assessment (US EPA, 2006f) and the 10 NMC pesticides under the NMC cumulative risk assessment (US EPA, 2007g). CARESTM, which began as a spreadsheet model called Residential Exposure Assessment (REX), was developed by industry in consultation with the US EPA as a publicly available alternative to CalendexTM that is free of charge (i.e., DEEMTM and CalendexTM have licensing fees). LifeLineTM was also developed as a publicly accessible model as part of a cooperative effort between the US EPA and external researchers to assess aggregate exposures in support of pesticide registration under FQPA. In addition to their regulatory applications, all of these models have been (or can be) used by external users such as academics or communities for broader research purposes (e.g., to determine important exposure pathways and risk drivers).

SHEDS-Multimedia (Stallings et al., 2007), formerly called SHEDS-Pesticides, is ORD’s state-of-the-science aggregate and cumulative exposure model. This model is similar to OPP’s dietary and residential aggregate exposure models, and is also intended to support OPP’s exposure and risk assessments under FQPA.
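At its simplest, the dietary component shared by these models multiplies residue concentrations by food consumption, sums over foods, and normalises by body weight. The sketch below illustrates the arithmetic only; the residue and consumption values, food names and function name are hypothetical, and are not taken from DEEMTM or any other model discussed here.

```python
def dietary_exposure_mg_per_kg(residues_ppm, consumption_g, body_weight_kg):
    """Illustrative dietary exposure calculation: residue (ppm = mg of
    chemical per kg of food) x food consumed (g/day), summed over foods
    and normalised by body weight, giving mg per kg per day."""
    total_mg = sum(residues_ppm[food] * consumption_g[food] / 1000.0
                   for food in consumption_g)
    return total_mg / body_weight_kg

# Hypothetical residues and one day's consumption for a 15 kg child.
residues = {"apple": 0.05, "milk": 0.01}       # mg residue per kg food
consumption = {"apple": 150.0, "milk": 400.0}  # g consumed per day
print(round(dietary_exposure_mg_per_kg(residues, consumption, 15.0), 6))
```

Probabilistic dietary models repeat such a calculation for many simulated person-days, drawing residues and consumption from survey-based distributions rather than point values.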
However, SHEDS-Multimedia has some unique capabilities, such as using a microenvironmental approach to track the movement of individuals throughout the day; relying on a within-day (1–60 min) time step (rather than a daily time step); simulating hand-to-mouth residue ingestion as a function of dermal exposure; accounting for dermal loading and removal processes (e.g., washing, bathing); and using a more sophisticated algorithm for constructing longitudinal activity patterns of simulated individuals. To date, this model has been used to provide supplemental information on children’s exposure (via hand-to-mouth contact) in OPP’s NMC cumulative risk assessment (US EPA, 2007g), was used in OPP’s aldicarb and methomyl RED assessments (US EPA, 2007h), and is being used in OPP’s upcoming pyrethroids cumulative risk assessment. SHEDS-Multimedia has also been applied by various academic institutions and other government agencies for research and regulatory purposes.

SHEDS-Wood (Zartarian et al., 2005b, 2006) is a scenario-specific version of SHEDS-Multimedia developed by ORD specifically to assess children’s exposures to wood preservatives from decks and playsets. This model was used in OPP’s children’s risk assessment for chromated copper arsenate (CCA) (US EPA, 2008c). SHEDS-Wood has also been used outside the Agency by industry and state agencies for CCA and other wood preservative assessments.

All of the identified exposure models couple environmental pollutant concentrations in specific environmental media or microenvironments with estimates of the actual or assumed amount of time individuals spend in contact with these media or microenvironments to provide the most robust characterisation of exposure. These
models simulate and track an individual’s movements through time and space (i.e., microenvironmental approach) and/or apportion the time of day spent in various activities or locations in order to yield a time series or estimate of daily exposure to a pollutant. Similar steps among these models include:
• simulating an individual and their activity patterns (and in some cases simulating longitudinal activity patterns);
• combining activity information, environmental media concentrations, and exposure factors in exposure algorithms; and
• simulating population estimates using probabilistic sampling (see Figure 3.2).
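These shared steps can be sketched in miniature as follows. This is not the code of any model discussed here; the microenvironments, concentrations and time allocations are invented for illustration.

```python
import random

# Hypothetical microenvironment concentrations (mg m-3).
MICROENVS = {"home": 12.0, "office": 8.0, "outdoors": 25.0, "vehicle": 30.0}

def simulate_individual(rng):
    """One simulated person-day: draw an activity pattern (hours per
    microenvironment summing to 24 h), then time-weight the
    concentrations into a daily average exposure concentration."""
    weights = [rng.random() for _ in MICROENVS]
    total = sum(weights)
    hours = {me: 24.0 * w / total for me, w in zip(MICROENVS, weights)}
    return sum(MICROENVS[me] * h for me, h in hours.items()) / 24.0

def simulate_population(n, seed=1):
    """Probabilistic population estimate: simulate n individuals and
    aggregate (here, report the population median daily exposure)."""
    rng = random.Random(seed)
    exposures = sorted(simulate_individual(rng) for _ in range(n))
    return exposures[n // 2]

print(round(simulate_population(1000), 1))
```

Real models replace the random time allocation with CHAD diary data and the fixed concentrations with measured or modelled distributions.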
However, specific aspects of these models may differ for each step, such as using different datasets and information sources for demographic characteristics, environmental media concentrations, activity patterns, and other exposure factors. Assumptions about longitudinal activities – i.e., simulating a person’s activity pattern over a year or longer based on diary data for one day or a few days – can also differ among the models (e.g., models may assume a person has the same activity pattern every day, can draw a random activity pattern each day, or can account for correlated activities from one day to the next). Additionally, exposure algorithms and underlying fate/transport models can differ across the models (e.g., some models track dermal hand loading together with body hand loading, whereas others track these separately, and do not use dermal hand loading as an input to hand-to-mouth ingestion).

Figure 3.2: General SHEDS model approach. [Flow diagram: a population is generated for simulation; longitudinal activity diaries (weekday and weekend patterns for each season across the day of year) are simulated from the US EPA’s Consolidated Human Activity Database of time–location–activity diaries; values are sampled from the input distributions to calculate each individual’s exposure and dose time series; and exposure or dose is then calculated for the simulated population, with uncertainty addressed by sampling N sets of parameter distributions and variability by performing M iterations from each input distribution.]

MCCEM and WPEM differ from the population-based simulation models in that they represent steady-state, box models that rely on mass balance equations to estimate airborne exposures in different locations (i.e., zones) by distributing air within a home or other indoor setting, and apportion the amount of time individuals spend each day within each zone based on published time–activity patterns. IAQX is also an individual-level model, but it is not a steady-state box model (i.e., it calculates the time history of indoor air concentrations or personal exposures). The SWIMODEL also relies on a series of equations that calculate route-specific exposures based on chemical concentrations, physico-chemical data, and exposure times assumed for selected swimmers. With the exception of MCCEM, WPEM, IAQX, SWIMODEL and PIRAT, all of the exposure models summarised here are designed primarily to characterise exposures at the population level (although the SHEDS and OPP models can also be used to characterise exposures at the individual level by using exposure time profiles).

Common inputs to the exposure models include product use, physico-chemical data, residue and concentration data (measured or modelled), consumption data, commuting data, indoor/outdoor relationships (e.g., using a factors approach or a mass balance approach), indoor environment data, degradation and transfer rates, contact probabilities, population and demographic data, activity patterns, and other microenvironment data or exposure factors. Population and demographic data are typically based on the US Census or the USDA’s CSFII, while activity patterns and scenarios are typically based on CHAD (McCurdy et al., 2000; Stallings et al., 2002) and the US EPA’s (1997a) residential SOPs.
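In the simplest single-zone case, the mass-balance idea behind such box models reduces to one differential equation. The sketch below assumes a well-mixed zone, a constant emission rate and no sinks; all input values are hypothetical.

```python
import math

def indoor_concentration(E, Q, V, t):
    """Well-mixed single-zone mass balance, V dC/dt = E - Q C, with
    C(0) = 0, which integrates to C(t) = (E/Q)(1 - exp(-(Q/V) t)).
    E: emission rate (mg/h); Q: ventilation flow (m3/h);
    V: zone volume (m3); t: time (h). Returns concentration in mg m-3."""
    return (E / Q) * (1.0 - math.exp(-(Q / V) * t))

# Hypothetical inputs: a 50 mg/h source in a 300 m3 home with
# 0.5 air changes per hour (Q = 150 m3/h).
E, V = 50.0, 300.0
Q = 0.5 * V
print(round(indoor_concentration(E, Q, V, 2.0), 3))  # concentration after 2 h
print(round(E / Q, 3))                               # steady-state limit
```

A multi-zone model such as MCCEM solves a coupled set of such balances, one per zone, with interzonal air flows; a time-history model such as IAQX retains the transient term rather than the steady-state limit.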
Other exposure factors, such as intake rates and body weight, are generally based on the US EPA’s (1997b) Exposure Factors Handbook, CSFII, or NHANES III for specified population or subpopulation groups. Outputs of these models generally include distributions of ambient concentrations (ppm or mg m⁻³), average daily intake or exposure (mg kg⁻¹), and potential or absorbed doses (mg kg⁻¹ day⁻¹) averaged over short-term (e.g., 15 min, 1 hour, 8 hour, 1 day), intermediate (e.g., annual average concentration or ADD), or longer-term (e.g., lifetime average concentration or LADD) durations. However, because many of these models either generate time-series (hour-by-hour) exposure profiles over the course of a day, or use the calendar day as the basic unit of time for calculating exposures, estimated exposures can be averaged over any specified duration or number of days. Some of these models also compare estimated exposure or dose levels with existing toxicity criteria to evaluate potential human risks (e.g., MOE, % of RfD, hazard index, toxicity equivalence factor).

With the exception of several models used to assess individual and/or screening-level exposures (i.e., MCCEM, WPEM, IAQX, SWIMODEL and PIRAT), all of the exposure models reviewed are probabilistic models that utilise stochastic processes to address the variability and/or uncertainty in population estimates or model input parameters. For these models, the variability in population exposures is generally accounted for by running simulations for many individuals and then aggregating across all individuals. Uncertainty (and sometimes intra- and inter-subject variability) is typically addressed by defining various input variables using a distribution rather
than a point estimate. For example, some of the exposure models specify a probability distribution for the following model input parameters to address uncertainty and/or variability: ambient air concentrations; pesticide residue values in food; consumption rates; microenvironment factors; and different activity patterns or durations.

Most of the exposure models rely on one-stage Monte Carlo techniques to address either variability or uncertainty (or both combined). Among the models reviewed here, the SHEDS models are the only exposure models that currently use two-stage Monte Carlo techniques to address both the variability in input parameters and the uncertainty of the mean of the first distribution (this feature will soon be available for APEX). SHEDS-Multimedia and SHEDS-Wood also rely on a bootstrap method for addressing uncertainty (i.e., so that fewer data result in more uncertainty and more data result in less uncertainty) (Xue et al., 2006), and SHEDS-Wood includes an additional sensitivity analysis feature that uses a percentile scaling approach or multiple stepwise regression (this feature will soon be available for SHEDS-Multimedia). CARESTM also includes a contribution analysis function that can be used to conduct sensitivity analyses.

Of these models, SWIMODEL and PIRAT are the only screening-level models intentionally designed to overestimate exposures, but any of the exposure models can be modified to use conservative input parameters to produce high-end or bounding estimates. Although HAPEM is sometimes referred to as a ‘‘screening-level’’ model, this is due solely to its limited spatial and temporal abilities (i.e., it is most appropriate for assessing average long-term exposures at the national scale). Because most of the exposure models included here are designed to support higher-tiered assessments, these models have generally undergone extensive internal review by the US EPA, and many have also been externally peer-reviewed.
For example, HAPEM underwent an external SAB (2001) peer review as part of the evaluation of the 1996 NATA programme, and was externally reviewed as part of the 2007 MSAT rule (US EPA, 2007b). APEX also underwent an external peer review by CASAC (2006) as part of its evaluation of the ozone NAAQS. SHEDS-PM is currently being peer-reviewed, prior to its public release in 2009, but SHEDS-Air Toxics has not been externally peer-reviewed, because this model was developed primarily as an internal research model, and is continually being modified as part of NERL’s ongoing research projects (although it will be described in subsequent peer-reviewed publications related to various application projects). MCCEM and WPEM both had external letter peer reviews in 1998, and MCCEM was patterned after an earlier DOS version that has been described in the peer-reviewed literature (Koontz and Nagda, 1991). IAQX has also been published in the peer-reviewed literature (Guo, 2002a, 2002b), but has not had a separate external letter peer review. The SWIMODEL has not undergone a formal external peer review, but components of this model (e.g., exposure duration and frequency for competitive swimmers) have been published in the peer-reviewed literature (Reiss et al., 2006). PIRAT underwent a modified external letter peer review in 2004, and the residential SOPs have been reviewed previously by the US EPA’s SAP. Because the US EPA’s OPP requires that their FIFRA SAP review any model being used in exposure assessment for regulatory purposes, all of the OPP and SHEDS models developed to assess dietary and residential aggregate exposures have
undergone such peer reviews, including DEEMTM (SAP, 2000a), CalendexTM (SAP, 2000b), CARESTM (SAP, 2002a), LifeLineTM (SAP, 2001), SHEDS-Multimedia (SAP, 2007) and SHEDS-Wood (SAP, 2002b).

Although it is difficult to evaluate the results of some of these models in their entirety, most of the exposure models reviewed have undergone some degree of model evaluation. For example, despite the limited availability of personal monitoring data to perform direct comparisons with HAPEM, many of the key components of this model (e.g., activity data, microenvironment factors, and commuting data) have been evaluated in the peer-reviewed literature (Özkaynak et al., 2008). HAPEM has also been evaluated relative to ASPEN by comparing modelling results for more than 30 HAPs, which illustrated the importance of accounting for time–activity patterns, commuting patterns, and other factors that can result in lower or higher estimated exposures (Özkaynak et al., 2008). Attempts to evaluate APEX include comparisons of model results with personal ozone concentration measurements as part of the NAAQS assessment for ozone, in which model results were found to predict average personal exposure concentrations reasonably well, but to underestimate the variability in these estimates (US EPA, 2007c). In another study, APEX was found to underpredict personal ozone exposure measurements in indoor and in-vehicle microenvironments when windows were open, and to overpredict concentrations when windows were closed (Long et al., 2008). The results of APEX were also compared with HAPEM in a case study of benzene emissions in Houston, in which these models were found to provide similar estimated distributions for population exposures (Rosenbaum et al., 2002).
SHEDS-PM model predictions have been compared with some community and personal exposure field studies (Burke et al., 2001), including data from the NERL’s PM Panel Study in RTP (Burke et al., 2002), and a new project is under way that will compare the current version of this model with these same PM data. However, personal exposure studies with appropriate study designs and sufficient measurements for a thorough evaluation of the SHEDS-PM model are limited (Burke et al., 2001) and, to date, there has been limited access to a good air toxics dataset for evaluation of the SHEDS-Air Toxics model (although NERL’s recently completed exposure study in Detroit will be used to evaluate the SHEDS models for air toxics species, and PM mass and components).

The prior (DOS) version of MCCEM was extensively evaluated, and included comparisons of model predictions with outputs from two other well-recognised indoor air models (CONTAM and INDOOR). Model outputs from the current (Windows) version of MCCEM have also been compared with the prior version using equivalent inputs. In addition, measured indoor air concentrations of toluene from an adhesive used in installing floor tiles have been compared with MCCEM model predictions based on small-chamber and research house testing (Nagda et al., 1995). Similarly, WPEM has been extensively evaluated using data generated from small-chamber testing to develop emission rate algorithms, and by comparing model predictions with measured data collected in a US EPA research test home in North Carolina involving alkyd and latex primer and paint. In general, the comparisons for MCCEM and WPEM have shown a high degree of correspondence between modelled and measured
values. The IAQX model has undergone only limited evaluation (except for the paint module, which was based on test house and small-chamber data), because this model is based on a number of existing models. The SWIMODEL has undergone limited evaluation in which a PBPK model was used to bridge published biomonitoring data to the model predictions, providing an indirect validation of, or ‘‘reality check’’ on, the modelling outputs (US EPA, 1999). PIRAT has also undergone little evaluation, because it is based on the US EPA’s default residential SOPs. However, a comparison of measured insecticide (chlorpyrifos) exposures from contact with turf, based on urine biomonitoring, with exposure estimates calculated using the US EPA’s residential SOPs revealed that the measured residue transfers were well below the SOP estimates (Bernard et al., 2001). Similarly, former exposure assessments performed under OPP’s REDs process for OPs were found to overestimate human exposures when compared with biomonitoring data (Duggan et al., 2003). Other analyses presented by the US EPA (2004b) to the FIFRA SAP have also suggested that the use of pharmacokinetic and biomonitoring data provides more refined estimates of carbaryl exposures than estimates based on the US EPA’s residential SOPs. CalendexTM, CARESTM and LifeLineTM have also been evaluated by comparing modelling results with other dietary and residential aggregate exposure models, as well as with biomonitoring and environmental monitoring or market survey data (Duggan et al., 2003; Shurdut et al., 1998; Wright et al., 2002). In addition, model-to-model comparisons have shown that the dietary and residential aggregate exposure models can produce varying results, because of differences in methodologies (Young et al., 2008).
However, the US EPA’s FIFRA SAP has recommended that the US EPA continue to use all of these models as a means of incorporating model uncertainty into an assessment, because each model possesses unique features that will prove useful in looking at different issues and more complex questions (SAP, 2004).

Additionally, a number of initial efforts have been made to evaluate the SHEDS-Multimedia model, including comparing the results of this model with those of other dietary or aggregate exposure models (Price and Zartarian, 2001; Xue et al., 2004, 2008), comparing individual model predictions with available field measurements and biomonitoring data (Hore et al., 2005; Zartarian et al., 2000), and performing pathway-specific comparisons (Driver and Zartarian, 2008; Price and Zartarian, 2001). In general, the SHEDS-Multimedia predictions have compared reasonably well with biomonitoring data and other models, especially when evaluating the dietary module. However, the model was found to underpredict aggregate exposure results for chlorpyrifos based on a National Human Exposure Assessment Survey (NHEXAS) Minnesota biomonitoring study, most likely because it did not include a pathway for ingestion of environmental degradate (3,5,6-trichloro-2-pyridinol) residues. Efforts are currently under way within ORD’s NERL to address this pathway, and additional evaluations will be performed using data from NHANES and NERL’s Measurement Study in Jacksonville for metals and pyrethroid pesticides. More research is needed to obtain data for critical model inputs (e.g., dermal transfer coefficient, longitudinal activity data), and to conduct model evaluations for different chemical classes. The results of SHEDS-Wood have also compared well with other models (Xue et al., 2006).
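The two-stage Monte Carlo approach noted earlier for the SHEDS models can be sketched as a nested simulation: an outer loop samples uncertain distribution parameters, and an inner loop samples inter-individual variability given those parameters. The distributions and values below are purely illustrative, not those of any SHEDS model.

```python
import random

def two_stage_monte_carlo(n_uncertainty=200, m_variability=500, seed=7):
    """Nested (two-stage) Monte Carlo sketch. Outer loop: sample the
    uncertain mean of a lognormal exposure factor. Inner loop: simulate
    a variable population given that mean. Returns a 90% uncertainty
    interval for the population median."""
    rng = random.Random(seed)
    medians = []
    for _ in range(n_uncertainty):
        mu = rng.gauss(1.0, 0.2)  # stage 1: parameter uncertainty
        people = sorted(rng.lognormvariate(mu, 0.5)  # stage 2: variability
                        for _ in range(m_variability))
        medians.append(people[m_variability // 2])
    medians.sort()
    return medians[int(0.05 * n_uncertainty)], medians[int(0.95 * n_uncertainty)]

low, high = two_stage_monte_carlo()
print(low < high)
```

A one-stage approach would collapse the two loops into one, leaving variability and uncertainty indistinguishable in the output; bootstrap resampling of the underlying data is one way to generate the outer-loop parameter sets.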
3.4.3 Integrated fate/transport and exposure models
The integrated fate/transport and exposure models included here represent a combination of screening-level and higher-tiered models that are focused on either inhalation or multimedia exposures for human and/or ecological receptors (see Table 3.5). HEM (US EPA, 2007i) represents a population-based air dispersion modelling system that couples estimated ambient concentrations from an air dispersion model (AERMOD) with information on US Census block locations to predict potential population-level inhalation exposures and risks. HEM is often referred to as a screening-level model that provides surrogate exposure estimates, because human receptors are assumed to be continuously exposed at the census tract concentration over a lifetime. HEM was developed as a risk assessment tool in the early 1990s to support the US EPA’s Residual Risk Program, and to calculate industry sector risks for stationary sources.

PERFUM (Reiss and Griffin, 2008) is a similar type of inhalation exposure model that integrates computer code from an atmospheric dispersion model (ISCST3) with some modifications to calculate downwind concentrations and potential acute exposures to nearby residents and other bystanders from fields treated with soil fumigants. This model was developed by industry in consultation with OPP to perform realistic and accurate buffer zone calculations, and to support the US EPA’s registration of soil fumigants, such as iodomethane.

The US EPA’s (2002c, 2004c) Johnson and Ettinger (1991) Model for Subsurface Vapor Intrusion into Buildings estimates indoor air concentrations of pollutants and incremental cancer risks based on subsurface vapour intrusion from contaminated groundwater and soils.
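The screening relationship at the core of such vapour intrusion assessments can be reduced to a source vapour concentration scaled by an attenuation factor. In the actual Johnson and Ettinger model the attenuation factor is derived from soil, building and chemical properties; in the sketch below it is supplied directly, and all numeric values are hypothetical.

```python
def indoor_air_from_vapour_source(c_source_ug_m3, attenuation_factor):
    """Indoor air concentration as the source vapour concentration
    scaled by an attenuation factor (alpha) that lumps together
    diffusive transport through soil and advective entry into the
    building."""
    return c_source_ug_m3 * attenuation_factor

def incremental_cancer_risk(c_indoor_ug_m3, unit_risk_per_ug_m3):
    """Incremental lifetime cancer risk assuming continuous exposure
    at the estimated indoor air concentration."""
    return c_indoor_ug_m3 * unit_risk_per_ug_m3

# Hypothetical example: 500 ug/m3 soil-gas source, alpha = 1e-3.
c_indoor = indoor_air_from_vapour_source(500.0, 1.0e-3)  # 0.5 ug/m3
print(incremental_cancer_risk(c_indoor, 2.2e-6))
```

In practice, the spreadsheet implementations described below compute alpha from site-specific soil and building inputs rather than accepting it as a single assumed value.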
This model predicts the volatilisation of contaminants located in the subsurface soil or water (e.g., chemical fate/transport within soils) and the subsequent mass transport of vapours into indoor spaces (e.g., chemical fate/transport between the soil column and enclosed spaces) by relating the vapour concentration at the source of the contaminant to the vapour concentration in the indoor space. This model can be used to evaluate steady-state (infinite or non-diminishing source) as well as quasi-steady-state (finite or diminishing source) vapour transport. The US EPA’s vapour intrusion model is based on several modifications to the Johnson and Ettinger (1991) model, which relied on a number of simplifying assumptions and was developed for use as a screening-level (fate/transport) model. Specifically, the US EPA has developed a series of spreadsheets that allow for site-specific application of the Johnson and Ettinger (1991) model. Depending on the source characteristics, and on whether default or site-specific data are used, the US EPA’s modified vapour intrusion model can be used for either screening-level or more advanced exposure and risk applications. Potential applications for this model include RCRA Corrective Action sites, CERCLA Superfund sites, and voluntary clean-up sites (but this model does not account for contaminant attenuation, and should not be used for sites contaminated with petroleum products from leaking underground storage tanks).

ChemSTEER (US EPA, 2004d, 2007j), E-FAST (US EPA, 2007k) and IGEMS (US EPA, 2007l) represent multimedia models that were developed to support OPPT’s new and existing chemical programmes, such as new chemicals submitted for PMN review under TSCA Section 5 and existing chemicals evaluated under TSCA Section 6. Specifically, ChemSTEER was designed to assess potential human (worker) exposures to and environmental releases of chemicals during manufacturing, processing and use operations, while E-FAST was developed to assess potential human and environmental exposures from consumer products and industrial releases (see Figure 3.3). Both of these models, which are used to assess a few thousand new and existing chemicals each year, are considered to be conservative screening-level models that generally overpredict receptor exposures by using conservative (e.g., high-end) default assumptions and scenarios. However, the default values in these models can be modified, and should be changed if other values are deemed more suitable for the specific exposure scenario being evaluated. IGEMS, which was also developed to assess potential human and environmental exposures from fugitive and industrial releases, is characterised as a higher-tiered model because it provides more detail at the receptor level than E-FAST (e.g., users can change any parameter and select receptors to run ISC air models). This model is therefore typically reserved for those chemicals where a more accurate assessment of exposure is needed, or where a screening-level model is not applicable. Although all three of these models are routinely used by the US EPA for regulatory review of new and existing chemicals, they also serve as “all-purpose” models because of their broad potential applications, and have been widely used outside the Agency by consultants, academics, communities, local and state governments, and internationally.
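Screening-level models of this kind typically combine a predicted concentration with high-end default exposure factors in a simple deterministic dose equation. The following is a generic sketch of a lifetime average daily dose (LADD) calculation for an inhalation pathway — not E-FAST’s actual code, and with hypothetical default values:

```python
# Generic screening-level dose calculation of the kind conservative models
# such as E-FAST rely on. The equation is the standard LADD form; the
# "high-end default" inputs below are hypothetical illustrations, not the
# model's actual defaults.

def ladd_mg_per_kg_day(conc_mg_m3, inhal_rate_m3_day, exposure_freq_days_yr,
                       exposure_duration_yr, body_weight_kg, lifetime_yr=70.0):
    """Lifetime average daily dose for an inhalation pathway:
    LADD = (C * IR * EF * ED) / (BW * AT), with AT = lifetime in days."""
    averaging_time_days = lifetime_yr * 365.0
    intake_mg = (conc_mg_m3 * inhal_rate_m3_day
                 * exposure_freq_days_yr * exposure_duration_yr)
    return intake_mg / (body_weight_kg * averaging_time_days)

# High-end screening assumptions: 350 days/yr exposure for 30 years at the
# predicted ambient concentration, for a 70 kg adult.
ladd = ladd_mg_per_kg_day(conc_mg_m3=0.01, inhal_rate_m3_day=20.0,
                          exposure_freq_days_yr=350.0,
                          exposure_duration_yr=30.0, body_weight_kg=70.0)
```

Substituting site-specific values for the high-end defaults is how such a screen is refined when a chemical does not pass the initial conservative pass.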
Figure 3.3: General E-FAST model approach. [Schematic: a physical–chemical properties and fate component (chemical ID selection) feeds the screening-level exposure estimate components — general population and ecological exposure from industrial releases (surface water, down-the-drain, landfill, and ambient air via SCREEN3, with a probabilistic dilution model (PDM)) and a consumer exposure pathway — followed by a report generator component (under construction).]
Table 3.5: Selected integrated fate/transport and exposure models used by the US EPA.

HEM (a)
  Model purpose: Estimates population-level exposures to air toxics (HAPs) for the general population or subgroups at urban and national levels.
  Exposure routes: Inhalation (outdoor).
  Model inputs: Ambient air concentrations (modelled) based on point sources, census tract population and demographic data, exposure factors.
  Model outputs: Maximum and annual average concentrations (mg m⁻³); hazard index; risk (per million).
  Model characterisation: Conservative screening level (assumes continuous exposure at census tract over lifetime; does not have a stochastic process to address variability or uncertainty).
  Model peer review and evaluation: Model components have been peer-reviewed (e.g., AERMOD), and the model itself underwent an SAB consultation in December 2007 as part of the US EPA’s Risk and Technology Review Assessment Plan; an application using this model is currently undergoing a formal SAB review.
  Links to other models: Uses AERMOD as atmospheric dispersion model.

PERFUM
  Model purpose: Estimates individual-level (acute) exposures to fumigants and degradation products for nearby residents and other bystanders near fields treated with soil fumigants.
  Exposure routes: Inhalation (outdoor).
  Model inputs: ISCST3 computer code, field emissions or flux data, meteorological data, exposure factors.
  Model outputs: Distribution of average daily concentrations (mg m⁻³); MOE.
  Model characterisation: Higher-tiered (estimates based on variability in meteorological conditions; prior version accounted for uncertainty in flux rates).
  Model peer review and evaluation: External SAP peer review of model in 2004; model published in peer-reviewed literature; limited model evaluation (flux rates back-calculated based on multiple field measurements).
  Links to other models: Uses ISCST3 as atmospheric dispersion model (AERMOD is being considered for future versions).

EPA’s Vapor Intrusion Model (b)
  Model purpose: Estimates indoor air pollutant concentrations and risk levels due to subsurface vapour intrusion from contaminated groundwater and soils.
  Exposure routes: Inhalation (indoor).
  Model inputs: Chemical properties, saturated and unsaturated soil properties (soil type, porosity, soil gas flow), chemical concentrations (groundwater, soil vapour), building properties (air exchange rate, building area and mixing height, crack width).
  Model outputs: Steady-state or time-averaged concentration (mg m⁻³ per mg kg⁻¹ soil or per mg L⁻¹ water), risk-based media concentration (mg kg⁻¹ soil or mg L⁻¹ water), incremental cancer risk.
  Model characterisation: Conservative screening level (infinite source) or higher-tiered (finite source); conservative default or site-specific data can be used; does not have a stochastic process to address variability or uncertainty.
  Model peer review and evaluation: Original model published in peer-reviewed literature; limited model evaluation (few empirical data for either bench-scale or field-scale calibration or verification).
  Links to other models: Based on the Johnson and Ettinger model; modified by the US EPA in 1998, 2001, 2002 and 2004.

ChemSTEER
  Model purpose: Estimates environmental releases and individual or population-level exposures and doses to chemicals for workers during manufacturing, processing and use operations.
  Exposure routes: Inhalation (indoor, outdoor), dermal (product).
  Model inputs: Physico-chemical properties, production/use volume, case-specific parameters (e.g., operating days, batch amounts, container type), release sources, exposure factors, worker activities.
  Model outputs: Potential dose rates (mg day⁻¹); ADD, LADD and acute potential dose (mg kg⁻¹ day⁻¹); releases (kg site⁻¹ day⁻¹ or kg yr⁻¹ all sites).
  Model characterisation: Conservative screening level (uses default parameter values and calculations; assumes worst-case scenarios for each source and worker activity; does not have a stochastic process to address variability or uncertainty).
  Model peer review and evaluation: Components of ChemSTEER have been externally peer-reviewed (and any models or scenarios incorporated into ChemSTEER would have gone through an external peer review process); modelling results compared with monitoring data.
  Links to other models: Uses several dozen release and exposure models; selected model outputs (environmental releases) are used as inputs to E-FAST.

E-FAST
  Model purpose: Estimates population-level exposures and doses to chemicals for humans and ecological receptors from industrial releases and consumer products.
  Exposure routes: Inhalation (indoor, outdoor), dermal (consumer product), ingestion (drinking water, fish).
  Model inputs: Physico-chemical properties, chemical release information (e.g., amount, media, days, location), fate information, air dispersion model, type of consumer product, exposure factors, and use patterns.
  Model outputs: Lifetime average daily or acute concentrations (e.g., mg L⁻¹, mg m⁻³); LADD (mg kg⁻¹ day⁻¹); percentage exceedances.
  Model characterisation: Conservative screening level (uses default parameter values and high-end assumptions; assumes continuous exposures to predicted concentrations; does not have a stochastic process to address variability or uncertainty).
  Model peer review and evaluation: External letter peer review of the consumer exposure module in 1998 and of the general population, down-the-drain, and probabilistic dilution model modules in 2001; limited model evaluation except for the consumer paint module.
  Links to other models: Uses SCREEN3 as atmospheric dispersion model and the WPEM emission algorithm for consumer latex paint exposures; other consumer emissions based on the Chinn algorithm; in the OPPT New Chemicals Program, EPI Suite provides fate information and ChemSTEER provides release information.

IGEMS
  Model purpose: Estimates population-level exposures and doses to chemicals for humans and ecological receptors from industrial releases.
  Exposure routes: Inhalation (outdoor), dermal (water), ingestion (drinking water).
  Model inputs: Physico-chemical properties, chemical release information (e.g., amount, media, days, location), fate information, air dispersion model, surface water and groundwater models, exposure factors, and use patterns.
  Model outputs: Lifetime average daily concentrations (e.g., mg L⁻¹, mg m⁻³); LADD (mg kg⁻¹ day⁻¹).
  Model characterisation: Higher-tiered, screening level (assumes continuous exposures to predicted concentrations, but is more detailed at the receptor level than E-FAST; does not have a stochastic process to address variability or uncertainty).
  Model peer review and evaluation: Components of IGEMS have been peer-reviewed and evaluated because it is based on existing models.
  Links to other models: Uses other environmental models for air (ISC), soil and groundwater (CSOIL, AP123D), and surface water (PRout).

TRIM
  Model purpose: Estimates fate and transport, environmental media concentrations, and population-level exposures and doses for human and ecological receptors from pollutants.
  Exposure routes: Inhalation, ingestion (ecological), dermal (ecological).
  Model inputs: Physico-chemical properties, chemical release information (e.g., amount, media, days, location), fate information, air dispersion model, surface water and groundwater models, exposure factors, and use patterns.
  Model outputs: Mass concentration in media and biota (g); biota and pollutant intakes (mg kg⁻¹ day⁻¹); exposure concentrations (ppm or mg m⁻³); delivered doses (mg kg⁻¹ day⁻¹).
  Model characterisation: Higher-tiered (stochastic process addresses variability and uncertainty in input parameters using two-stage Monte Carlo; sensitivity analysis option).
  Model peer review and evaluation: External SAB peer review of the conceptual approach and TRIM-FaTE in 1998 and a second review in 1999; several peer-reviewed publications, verification of model approach and performance, benchmarking of results against other models, and comparisons with field data.
  Links to other models: Modules include TRIM-FaTE, TRIM-Expo and TRIM-Risk; uses APEX to estimate inhalation exposures.

3MRA
  Model purpose: Estimates population-level exposures and doses to chemicals for human and ecological receptors from land-based WMU releases.
  Exposure routes: Inhalation (outdoor, indoor), ingestion (drinking water, garden and farm products, fish), dermal.
  Model inputs: Physico-chemical properties and fate information for air, surface water and groundwater models, human and ecological exposure (doses).
  Model outputs: Annual average daily concentrations (e.g., mg L⁻¹, mg m⁻³) and applied dose (mg kg⁻¹ day⁻¹); cancer risk, hazard quotient, MOE.
  Model characterisation: Higher-tiered (stochastic process addresses variability and uncertainty in input parameters using two-stage Monte Carlo).
  Model peer review and evaluation: External SAB peer review of the complete model system in 2004; verification of model approach and performance, benchmarking of results against other models, and comparisons with other analytical solutions, numerical models, and field data.
  Links to other models: Uses other environmental models for air (ISCST3), subsurface transport (EPACMTP), surface water transport (EXAMS), and groundwater (MULTIMED).

(a) HEMScreen, which contained the ISCLT air dispersion model, is no longer used or supported by the US EPA.
(b) EPA’s Johnson and Ettinger (1991) model for subsurface vapour intrusion into buildings.

TRIM (US EPA, 2003c, 2006e) and 3MRA (US EPA, 2003d) potentially represent the “next generation” of highly integrated multimedia models that can support various regulatory and research efforts. The TRIM framework, which was developed by the US EPA’s OAQPS, contains three modules that assess the fate and transport of pollutants in the environment (TRIM-FaTE), potential multimedia exposures to human receptors (TRIM-Expo), and potential noncancer and cancer risks to human or ecological receptors (TRIM-Risk). This model framework is expected to support Agency activities such as the Residual Risk Program, the Integrated Urban Air Toxics Strategy, petitions to delist individual HAPs and/or source categories, the review and setting of NAAQS, and regulatory impact analyses for air toxics regulations. 3MRA is a similar type of multimedia model that operates within the broader FRAMES framework. This model, which was originally developed by ORD to support OSW’s HWIR for conducting risk assessments around hazardous waste sites, simulates
potential population-level exposures and risks for human or ecological receptors from land-based WMU releases (see Figure 3.4). This model can also be used to assess a wide range of multimedia risk assessment problems, including national and site-specific applications, such as evaluations of remedial actions at hazardous, toxic and radioactive waste sites.

Figure 3.4: General 3MRA model approach. [Flow diagram: solid/semi-solid and liquid wastes enter sources (landfill, waste pile, surface impoundment, aerated tank, land application unit) → transport media (air, watershed, surface water, vadose zone, aquifer) → terrestrial, aquatic and farm food chains → ecological and human exposure → ecological and human risk.]

All of the identified integrated fate/transport and exposure models include algorithms for assessing a pollutant’s fate and transport in the environment; yield ambient pollutant concentrations in different environmental media; and estimate potential exposures, doses or risks for human and/or ecological receptors. However, modelled exposures do not account for actual time–activity patterns, and are generally based on the assumption that an individual or population has daily or continuous contact with predicted environmental media concentrations. For example, HEM, E-FAST and 3MRA predict outdoor ambient air concentrations, and assume that receptors are stationary and remain exposed to this concentration for 24 hours per day (even when indoors or at another location). E-FAST and 3MRA also calculate pollutant concentrations in ground and surface water, and assume that receptors consume this water (untreated) as their sole drinking water source on a daily basis (even if this water is not known to be used for consumption). ChemSTEER utilises a somewhat different approach, in which generic scenarios that combine sources and worker activities for a given operation are used to yield conservative (reasonable worst-case) results based on several dozen release and exposure models (see Table 3.6). For example, common worker activities with the potential for inhalation exposures include sampling, drumming, and clean-up of equipment (Matthiessen, 1986). Because the integrated fate/transport and exposure models are based on assumed rather than actual contact with a contaminant, they are more relevant for characterising potential (rather than actual) exposures to human or ecological receptors. It is noteworthy that the primary use of some of these models (e.g., E-FAST) is to provide a quick-turnaround, first-tiered analysis for evaluating a large number of chemicals in a short time period in order to screen out those that are not likely to be of concern. Chemicals or facilities that do not pass this initial screen will generally undergo more refined analyses by substituting default assumptions with more accurate or site-specific data, or by using a higher-tiered model.

General inputs to these models include chemical or product-specific information (e.g., product type, formulation, chemical and physical properties), product use patterns (e.g., volume, rate, technique), chemical release information (e.g., amount, media), site-specific information (e.g., operations, releases, location, land use, building-related parameters), pollutant concentrations in a given waste stream or medium (e.g., soil gas), and other exposure factors (e.g., intake rates, body weight). These latter values are generally based on the US EPA’s (1997b) Exposure Factors Handbook for model default (or user-defined) populations or subpopulation groups. These models also typically incorporate other fate/transport models, or rely on simple algorithms or standard mass balance equations. The US EPA’s (2009) EPI Suite™ model is also sometimes used to provide estimates of physical/chemical and environmental fate and transport properties that can be used as inputs to screening-level exposure models in the absence of reliable measured data. Outputs of these models generally include environmental media concentrations (e.g., mg m⁻³, mg kg⁻¹, mg L⁻¹) and potential exposures or doses (mg kg⁻¹ day⁻¹) averaged over long-term (e.g., LADD), intermediate (e.g., ADD) and short-term (e.g., acute potential dose) durations. Some of these models also compare estimated exposure or dose levels with existing toxicity criteria to calculate potential human or ecological risks (e.g., MOE, cancer risk, fraction of exceedance).

With the exception of PERFUM, TRIM and 3MRA, none of the identified integrated fate/transport and exposure models attempts to address the variability or uncertainty in model input parameters or exposure estimates. This is not surprising for screening-level models such as ChemSTEER and E-FAST, which are designed to provide upper-bound estimates of exposure. PERFUM accounts for daily and seasonal variability in meteorological conditions by calculating a distribution of average daily soil fumigant concentrations at each receptor point around the field. TRIM is a probabilistic model that includes features for performing sensitivity analyses as well as two-stage Monte Carlo analyses that can address both the variability and uncertainty in selected input parameters.
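The two-stage (two-dimensional) Monte Carlo approach used by models such as TRIM and 3MRA can be sketched as an outer loop over uncertain inputs and an inner loop over inter-individual variability. The distributions, parameter values and exposure metric below are illustrative only, not taken from either model:

```python
# Sketch of a two-stage (two-dimensional) Monte Carlo analysis separating
# uncertainty (outer loop) from variability (inner loop). Distributions and
# parameters are hypothetical illustrations.
import random

random.seed(1)

def two_stage_monte_carlo(n_uncertainty=200, n_variability=1000):
    """Outer loop: sample an uncertain model input (here, a 'true' mean air
    concentration). Inner loop: sample inter-individual variability (here,
    intake rates). Returns the uncertainty distribution of the
    95th-percentile exposure across the simulated population."""
    p95_samples = []
    for _ in range(n_uncertainty):
        # Uncertain parameter: mean air concentration (arbitrary units).
        mean_conc = random.lognormvariate(mu=0.0, sigma=0.3)
        exposures = []
        for _ in range(n_variability):
            # Variable parameter: individual intake per unit body weight.
            intake = random.lognormvariate(mu=-1.0, sigma=0.5)
            exposures.append(mean_conc * intake)
        exposures.sort()
        p95_samples.append(exposures[int(0.95 * n_variability)])
    return p95_samples

p95 = two_stage_monte_carlo()
p95.sort()
# Report a 5th-95th percentile uncertainty band around the P95 exposure.
band = (p95[int(0.05 * len(p95))], p95[int(0.95 * len(p95))])
```

The output is not a single point estimate but an uncertainty band around a variability statistic (here, the 95th-percentile exposure), which is what distinguishes these higher-tiered models from the deterministic screens.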
3MRA has a similar two-stage Monte Carlo analysis function that operates through a separate parallel computing model called SuperMUSE (Babendreier and Castleton, 2005). Although TRIM and 3MRA are considered to be higher-tiered models, both can be used to perform simple deterministic screening-level analyses using conservative default parameters. Like HAPEM, 3MRA is also sometimes referred to as a screening-level model because of its limited spatial and temporal scope (i.e., it was originally designed for national-level and chronic health assessments, although it can be adapted to site-specific and regional scales).

Table 3.6: Examples of default models included in the ChemSTEER screening-level model.

AP-42 Loading Model
  Model description: Estimates releases to air from displacement of air containing chemical vapour as a container or vessel is filled with liquid.
  Default sources/activities: Loading or unloading of liquids into transport containers or vessels.

Mass Transfer Coefficient Model
  Model description: Estimates releases to air from evaporation of a chemical from an open, exposed liquid surface (outdoor sources of release).
  Default sources/activities: Cleaning liquid residues from tank trucks or rail cars used to transport raw material or products, or equipment cleaning losses of liquids.

Penetration Model
  Model description: Estimates releases to air from evaporation of a chemical from an open, exposed liquid surface (indoor sources of release).
  Default sources/activities: Cleaning liquid residues from bottles, small containers, drums, and totes used to transport raw material or products; cleaning liquid residuals from storage or transport vessels; sampling liquids.

Cooling Tower Blowdown Loss Model
  Model description: Estimates releases of a volatile cooling tower additive chemical as a result of evaporation of the recirculating fluid (e.g., water).
  Default sources/activities: Conditional default model for estimating releases to air from the Recirculating Water-Cooling Tower Additive Releases source/activity.

Mass Balance Inhalation Model
  Model description: Estimates the amount of chemical inhaled by a worker (typical and worst case) during an activity in which chemical vapour is generated.
  Default sources/activities: Default for calculating worker inhalation exposures to a volatile chemical while performing the following sources/activities: cleaning liquid residuals or loading and unloading of liquids into transport containers/vessels, sampling of liquids, vapour release from open liquid surfaces.

Automobile Finish Coating Overspray Loss Model
  Model description: Estimates releases of overspray of non-volatile chemicals in coatings during their application to refinished automobiles using spray guns within a spray booth with controls to capture overspray from the exhaust.
  Default sources/activities: Default for calculating multimedia releases of a chemical to air, as well as water, incineration, or landfill for the Automobile Refinish Spray Coating Application source/activity.

UV Roll-Coating Inhalation Model
  Model description: Estimates the amount of chemical inhaled by a worker who conducts activities near roll coater(s) using coatings or inks.
  Default sources/activities: Default for calculating worker inhalation exposures to a chemical while performing the roll coating source/activity.

Small Volume Handling Model
  Model description: Utilises worst-case and typical exposure rates to estimate the amount of chemical inhaled by a worker during handling of “small volumes” (<54 kg/worker-shift) of solid/powdered materials.
  Default sources/activities: Default for calculating worker inhalation exposures to a chemical while performing any source/activity for sampling solids (handling of these small volumes is presumed to be scooping, weighing and pouring of the solid materials).

1-Hand Dermal Contact with Liquid Model
  Model description: Estimates dermal exposure to the chemical for one-hand contact with liquid containing the chemical.
  Default sources/activities: Default for calculating worker dermal exposures to a liquid chemical while performing liquid sampling sources/activities.

2-Hand Dermal Immersion in Liquid Model
  Model description: Estimates dermal exposure to the chemical for two-hand immersion in liquid containing the chemical.
  Default sources/activities: Default for calculating worker dermal exposures to a liquid chemical while performing the following sources/activities: automobile spray coating, miscellaneous sources/activities related to liquid processing.

All of the integrated fate/transport and exposure models included here have been internally reviewed in accordance with Agency-wide policies and procedures, and these models have undergone varying degrees of external peer review. Although HEM itself has not undergone a formal peer review, because it comprises a regulatory default model (i.e., its air dispersion model, AERMOD, has been peer-reviewed), it underwent an SAB consultation in December 2007 as part of the US EPA’s Risk and Technology Review Assessment Plan. An application using HEM is also currently undergoing a formal SAB review. PERFUM underwent an external SAP peer review in 2004, and its conceptual approach and methodology have been published in the peer-reviewed literature (Reiss and Griffin, 2006). The US EPA’s modified vapour intrusion model is based on fate/transport equations that have been published in the peer-reviewed literature (Johnson and Ettinger, 1991). Components of ChemSTEER have been externally peer-reviewed, and because models in ChemSTEER have been extensively used in Agency assessments for over 10 years, any models or scenarios incorporated into ChemSTEER would have gone through an external peer review process. E-FAST had an external letter peer review of its consumer exposure module in 1998, and a review of its general population, down-the-drain and probabilistic dilution model modules in 2001. IGEMS has undergone limited external peer review, because it is based on existing models that have already been peer-reviewed, and it is still under development. TRIM and 3MRA have been extensively reviewed (internally and externally), although it has not been possible to assess these models in their entirety owing to their complexity. TRIM underwent two external US EPA SAB (1998, 2000) peer reviews, in which the development of TRIM and the TRIM-FaTE module was found to be conceptually sound and scientifically based. 3MRA also underwent an external US EPA SAB (2004) peer review, and its approach and equations have been published in the peer-reviewed literature (Marin et al., 2003).

For most of the reviewed models, attempts have been made to evaluate them when data are available for such an evaluation, but it can be difficult to evaluate screening-level models designed to yield upper-bound or worst-case estimates. HEM, PERFUM and the US EPA’s modified vapour intrusion model have undergone limited model evaluation, while ChemSTEER and E-FAST have been partially evaluated when data were available for this purpose. For example, the mass balance approach used by ChemSTEER has been evaluated by comparing model-predicted exposures for specific operations with monitoring data reported in selected studies from the available literature (Fehrenbacher and Hummel, 1996).
This evaluation illustrated that estimated exposures based on the midpoint of the range of default input values were well within one order of magnitude of the measured exposures, but that selection of more conservative model input values overestimated exposures by one or more orders of magnitude. The consumer paint module in E-FAST also underwent extensive evaluations as part of the WPEM model review, and components of IGEMS have been evaluated because it is based on existing models. A number of efforts have been made to evaluate TRIM (particularly the TRIM-FaTE module) and 3MRA by verifying the approach and performance of these models, including performing sensitivity analyses and benchmarking the results of these models against one another (US EPA, 2002d). The results of TRIM-FaTE have also been tested by the US EPA (2005c) using field data on organic and inorganic pollutants (e.g., PAHs and mercury), and the results of 3MRA have been compared with other analytical solutions, numerical models, and field data.
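The “within one order of magnitude” comparison used in such evaluations can be expressed as a simple check on predicted/measured ratios. The paired values below are hypothetical, chosen only to illustrate the test:

```python
# Sketch of the "within one order of magnitude" model-evaluation check
# described above. The (predicted, measured) pairs are hypothetical.
import math

def within_order_of_magnitude(predicted, measured):
    """True if the prediction is within a factor of 10 of the measurement,
    i.e. |log10(predicted / measured)| <= 1."""
    return abs(math.log10(predicted / measured)) <= 1.0

# Hypothetical (model prediction, field measurement) pairs.
pairs = [(0.8, 0.5), (12.0, 3.0), (0.02, 0.5)]
flags = [within_order_of_magnitude(p, m) for p, m in pairs]
```

A log-ratio criterion like this treats over- and under-prediction symmetrically, which is why it is a common yardstick when screening models are compared against sparse monitoring data.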
3.5 DISCUSSION
In this chapter, we have provided an overview of 35 exposure assessment models that are currently supported and used by the US EPA for regulatory, research, voluntary programme or other purposes. These include selected fate/transport models, exposure models, and integrated fate/transport and exposure models. We can draw a number of observations based on our review of these models.
First, this review included models that were developed for a specific purpose, route(s) of exposure, or category of pollutants, as well as those that were designed for more general applications. For example, most of the fate/transport models were developed as generic models to assess compliance with environmental standards or to estimate media-specific chemical concentrations applicable to both human and ecological receptors. The exposure models, on the other hand, were typically developed to assess human exposures and risks arising either from inhalation of criteria and toxic air pollutants or from aggregate multimedia exposures to pesticides or other chemicals in residential settings. The integrated fate/transport and exposure models were designed either for a specific purpose (e.g., human inhalation exposures from HAPs or fumigants) or to assess multimedia exposures of human or ecological receptors to many different chemicals from a variety of sources (e.g., industrial releases; manufacturing, processing and use operations; waste disposal sites; consumer products). Whatever their original purpose, many of the exposure assessment models included here have evolved over time to support broader applications.

Second, many of the models we reviewed rely on a common set of underlying inputs, databases, equations, or other models. For example, the air quality and dispersion models rely on similar sources of emissions and meteorological data, and the surface water and drinking water models rely on similar types of environmental fate and application rate data. Among the exposure models, many use data collected from the US Census Bureau to define population characteristics (e.g., demographics) and receptor locations (e.g., census tracts).
These models also typically use human exposure factors (e.g., body weight, intake rates) based on the US EPA’s (1997b) Exposure Factors Handbook, and time–activity patterns (e.g., hand-to-mouth contacts) based on the US EPA’s (1997a) CHAD. In addition, many of the default scenarios and equations included in the dietary and residential aggregate exposure models for pesticides and inert ingredients are based on the US EPA’s residential SOPs, and food consumption data in these models are often based on the USDA’s CSFII. A number of the exposure models and integrated fate/transport and exposure models also include modules for, or build on, the fate/transport models.

Third, an important distinction among the various models summarised here is their level of analysis and their spatial and temporal resolution. For example, the fate/transport models generally represent either steady-state or dynamic conditions and predict ambient concentrations applicable to local, urban, regional or national scales over short- or longer-term time periods. The exposure models and integrated fate/transport and exposure models are also applicable for assessing exposures at the local, urban or national level. Most of these models allow for the estimation of acute (short-term, single-dose), subchronic and/or chronic (long-term, lifetime) potential or actual exposures or doses. Some of these models produce time-averaged or time-integrated exposure estimates, whereas others produce a time series or time profile of exposure estimates. In addition, these models are generally designed to assess exposures either to individuals (e.g., residents, consumers, workers) or to the general population or subgroups, although some models can be used to assess both individual-level and population-level exposures.
Fourth, the exposure assessment models evaluated here differ in how their outputs and modelling results are characterised. That is, these models generally provide either conservative (e.g., upper-bound) estimates for screening-level applications, or more refined estimates for higher-tiered purposes. The US EPA relies on several conservative screening-level models to estimate potential exposures to human or ecological receptors, such as those that support OPP’s registration and re-registration of pesticides and OPPT’s new and existing chemical programmes, in order to quickly screen and prioritise several thousand chemicals each year. As noted, a few of the US EPA’s models are referred to as screening-level models because of their limited spatial or temporal resolution, rather than because they provide conservative estimates of exposure. The US EPA also relies on a number of higher-tiered models to provide best estimates or the most accurate characterisation of chemical concentrations or exposures.

Fifth, only a subset of the models that we reviewed had capabilities, such as stochastic processes, to assess the variability and/or uncertainty in modelled estimates and input parameters. These tended to be the higher-tiered exposure models and some of the integrated fate/transport and exposure models. For these models, such assessments were usually accomplished by performing one- or two-dimensional Monte Carlo analyses, sensitivity analyses, and/or contribution analyses. Time–activity patterns and chemical or pesticide residue values in different environmental media were the model input parameters most commonly varied in order to address variability or uncertainty. Because the screening-level models are generally designed to produce overestimates of exposure, they typically do not address variability or uncertainty, and instead use deterministic methods to produce point estimates of concentration or exposure.
Sixth, all of the exposure assessment models supported and used by the US EPA have been internally peer-reviewed to ensure consistency with Agency-wide policies and procedures, and many of these models have undergone external peer review by independent outside experts. External peer reviews can consist of letter peer reviews, panel reviews, reviews by scientific advisory boards, and/or publication in the peer-reviewed literature. These rigorous internal and external peer review efforts have resulted in continuous updates and improvements to the US EPA's models.

Seventh, the models included in this chapter have undergone varying degrees of model evaluation. Although complex computational models can never be truly validated (NAS, 2007), and it has been difficult to evaluate many of the US EPA's models in their entirety owing to limitations in analytical monitoring technologies and other factors (e.g., personal monitors are usually passive devices that yield time-averaged rather than time-series results), some of the key components of the US EPA's exposure assessment models have been evaluated using different approaches. For example, detailed studies have been undertaken in order to obtain real-world activity and commuting data and information on microenvironmental factors for use in some of the exposure models. Many of these models have also been evaluated by comparing modelled outputs with actual air measurements or field data, and some of the dietary and residential aggregate exposure models have compared modelled estimates with biomonitoring data. Additionally, a large number of these models have compared their results with
AN OVERVIEW OF EXPOSURE ASSESSMENT MODELS USED BY THE US EPA
those of other models. Such "model to model" comparisons are considered to be a useful way to assess or corroborate a model's performance, and to address model uncertainty (NAS, 2007).

Eighth, in many cases there is not a single "right" or "best" model, and several models may be used to estimate environmental concentrations or exposures (US EPA, 1992). This finding was apparent in our review of the various models included in this chapter, in which several models were sometimes available for the same or a similar purpose, although the methodologies and outputs of these models might differ. For example, more than one fate/transport model is available to assess outdoor ambient pollutant concentrations at different receptors, and multiple exposure models are available for assessing inhalation exposures to criteria and toxic air pollutants and to other indoor pollutants. Several exposure models are also available to assess dietary and residential aggregate exposures to pesticides. Although improved coordination among and within the US EPA's programme offices may be warranted in order to avoid model duplication, or to develop more uniform models, it may be advantageous to rely on multiple models for regulatory and research purposes. For example, the US EPA's FIFRA SAP has recommended that several complementary models continue to be used by OPP as a way to evaluate and address model uncertainties (SAP, 2004), and the NAS (2007) has stated that such "model to model" comparisons are a useful way to assess or corroborate a model's performance.

Ninth, although most of the US EPA's exposure models have been designed to be "stand-alone" models, recent and ongoing efforts in the US EPA have focused on developing integrated modelling approaches.
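As a sketch of what a "model to model" corroboration can look like in practice, the fragment below compares paired predictions from two hypothetical models using fractional bias and a factor-of-two agreement check. The numbers and the choice of metrics are illustrative assumptions, not outputs or procedures of any model discussed in this chapter:

```python
# Hypothetical predictions from two exposure models at the same five receptors.
model_a = [1.2, 0.8, 2.5, 1.9, 0.4]   # e.g. ug/m3
model_b = [1.0, 0.9, 2.1, 2.3, 0.5]

n = len(model_a)

# Fractional bias: 0 indicates perfect agreement on average.
fractional_bias = sum(2 * (a - b) / (a + b) for a, b in zip(model_a, model_b)) / n

# Fraction of paired predictions agreeing within a factor of two,
# a common rule-of-thumb corroboration check.
ratios = [a / b for a, b in zip(model_a, model_b)]
within_factor_of_two = sum(1 for r in ratios if 0.5 <= r <= 2.0) / n

print(f"fractional bias: {fractional_bias:+.3f}")
print(f"fraction within a factor of two: {within_factor_of_two:.0%}")
```

Large fractional bias, or a low factor-of-two agreement fraction, would flag receptors or scenarios where the two models' methodologies diverge and warrant closer examination.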
For example, attempts have been made to conduct integrated air quality and exposure modelling in order to identify those sources and microenvironments that contribute the greatest portion of personal or population exposures, and to determine optimum risk management strategies (Isakov et al., 2006). Advanced approaches that can combine regional and local models have also been touted as a future direction for air quality modelling of HAPs, in order to address the spatial variability of air concentrations and to allow for better treatment of chemically reactive air toxics (Touma et al., 2006). The US EPA's (2008d) draft White Paper on Integrated Modeling for Integrated Environmental Decision Making further recommends that the Agency adopt a "systems thinking approach" and consistently and systematically implement integrated modelling approaches and practices that inform Agency decision-making.

Tenth, each of the individual models and model categories has particular strengths and weaknesses. For example, many of the exposure assessment models have a wide range of applications and are used by a number of internal and external users. Most of these models are also self-contained, well documented, and relatively easy to use. However, a few of the exposure and integrated fate/transport and exposure models (particularly those that assess multiple pathways, scenarios or receptors) are complex, and require many data inputs or more advanced users. In addition, only a subset of the models reviewed consider multiple sources and pathways, with the remaining models addressing only a single route of exposure (e.g., inhalation). Some of these models are also limited in their spatial or temporal scope, their ability to address model variability and uncertainty, and the extent to which they have been externally peer-reviewed and
evaluated. The exposure assessment models also differ in regard to the accuracy and characterisation of their estimates, with the fate/transport and screening-level models providing "potential" or conservative estimates of exposure, and the higher-tiered exposure models providing the most accurate estimates of exposure.

In summary, this chapter provides an overview of 35 exposure assessment models that are currently supported and used by the US EPA. These models represent the first half of the source-to-outcome continuum, and include selected fate/transport, exposure, and integrated fate/transport and exposure models. Although our review does not include all of the US EPA's models, the information presented here should provide a useful up-to-date resource for exposure and risk modellers and practitioners. This work also supports recent and ongoing efforts at the US EPA, such as proposed strategies by the CREM, to further inventory, characterise and evaluate its models.
ACKNOWLEDGEMENTS

We thank various model developers and managers within the US EPA's programme offices for their assistance in better understanding US EPA's OAQPS models (John Langstaff, Ted Palmer), NERL/CEAM models (Gerry Laniak, Robert Ambrose, Chris Knightes, Luis Suarez, John Johnston), NERL/SHEDS models (Janet Burke, Brad Schultz, Jianping Xue, Haluk Ozkaynak), NRMRL models (Zhishi Guo, Lewis Rossman), OPPT models (Christina Cinalli, Conrad Flessner, Sharon Austin, Fred Arnold, Nhan Nguyen), OPP models (Thomas Brennan, Kerry Leifer, James Hetrick, Nelson Thurman, Cassi Walls), and OSWER models (Henry Schuver, Subair Saleem). We also thank Noha Gaber, with the US EPA's CREM, for her informal internal review of this chapter. Any views or opinions expressed herein are those of the authors only, and do not necessarily reflect those of the Agency. This chapter has been subjected to Agency review and approval for publication.
ENDNOTES

1. Gary Bangs, US EPA, Office of the Science Advisor, Risk Assessment Forum, Washington, DC.
2. Noha Gaber, US EPA, Washington, DC.
3. www.epa.gov/oppt/exposure/
4. www.epa.gov/scram001/aqmindex.htm
5. www.epa.gov/ceampubl/
6. www.epa.gov/nerl/topics/models.html
7. Zhishi Guo, US EPA, Research Triangle Park, NC.
REFERENCES

ACC (2003). ACETONE (CAS No. 67-64-1). VCCEP Submission. American Chemistry Council, Arlington, VA.
ACC (2006a). Voluntary Children's Chemical Evaluation Program (VCCEP) Tier 1 Pilot Submission for BENZENE (CAS No. 71-43-2). American Chemistry Council, Arlington, VA. ACC (2006b). Voluntary Children's Chemical Evaluation Program (VCCEP) Tier 1 Pilot Submission for Toluene (CAS No. 108-88-3). American Chemistry Council, Arlington, VA. Aiyyer, A., Cohan, D., Russell, A., Steyn, D., Stockwell, W., Tanrikulu, S., Vizuete, W. and Wilczak, J. (2007). Final Report: Third Peer Review of the CMAQ Model. University of North Carolina, Chapel Hill, NC. Available at: www.cmascenter.org/PDF/CMAQ_Third_Review_Final_Report.pdf Amar, P., Bornstein, R., Feldman, H., Jeffries, H., Steyn, D., Yamartino, R. and Zhang, Y. (2004). Final Report Summary: December 2003 Peer Review of the CMAQ Model. University of North Carolina, Chapel Hill, NC. Available at: www.cmascenter.org/r_and_d/first_review/pdf/final_report.pdf Amar, P., Chock, D., Hansen, A., Moran, M., Russell, A., Steyn, D. and Stockwell, W. (2005). Final Report: Second Peer Review of the CMAQ Model. University of North Carolina, Chapel Hill, NC. Available at: www.cmascenter.org/PDF/CMAQ_Scd_Peer_Rev_July_5.pdf Ambrose, R.B., Martin, J.L. and Wool, T.A. (2006). Wasp7 Benthic Algae: Model Theory and User's Guide. EPA/600/R-06/106. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/athens/publications/reports/Ambrose600R06106WASP7.pdf Aschengrau, A., Weinberg, J., Rogers, S., Gallagher, L., Winter, M., Vieira, V., Webster, T. and Ozonoff, D. (2008). Prenatal exposure to tetrachloroethylene-contaminated drinking water and the risk of adverse birth outcomes. Environmental Health Perspectives. 116: 814–820. Babendreier, J.E. and Castleton, K.J. (2005). Investigating uncertainty and sensitivity in integrated, multimedia environmental models: tools for FRAMES-3MRA. Environmental Modelling & Software. 20: 1043–1055. Bernard, C.E., Nuygen, H., Truong, D. and Krieger, R.I. (2001).
Environmental residues and biomonitoring estimates of human insecticide exposure from treated residential turf. Archives of Environmental Contamination and Toxicology. 41: 237–240. Burke, J. (2005). SHEDS-PM Stochastic Human Exposure and Dose Simulation for Particulate Matter: User Guide. US Environmental Protection Agency, Research Triangle Park, NC. Burke, J., Zufall, M. and Özkaynak, H. (2001). A population exposure model for particulate matter: case study results for PM2.5 in Philadelphia, PA. Journal of Exposure Analysis and Environmental Epidemiology. 11: 470–489. Burke, J., Rea, A., Suggs, J., Williams, R., Xue, J. and Özkaynak, H. (2002). Ambient particulate matter exposures: a comparison of SHEDS-PM exposure model predictions and estimates derived from measurements collected during NERL's RTP PM Panel Study. Presentation at International Society of Exposure Analysis Conference, Vancouver, Canada. Available on request. See also http://cfpub.epa.gov/ordpubs/nerlpubs/nerlpubs_heasd_2002.cfm?ActType=Publications&detype=document&excCol=General Burns, L.A. (2000). Exposure Analysis Modeling System (EXAMS): User Manual and System Documentation. EPA/600/R-00/081. US Environmental Protection Agency, Athens, GA. September 2000. Available at: www.epa.gov/ATHENS/publications/reports/EPA_600_R00_081.pdf Burns, L.A. (2006). The EXAMS-PRZM Exposure Simulation Shell, User Manual for EXPRESS. EPA/600/R-06/095. US Environmental Protection Agency, Athens, GA. Available at: www.epa.gov/athens/publications/reports/Burns600R06095Express%20Manual.pdf Byun, D. and Ching, J. (1999). Science Algorithms of the EPA Models-3 Community Multiscale Air Quality (CMAQ) Modeling System. EPA/600/R-99/030. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/asmdnerl/CMAQ/CMAQscienceDoc.html CASAC (2006). Clean Air Scientific Advisory Committee's (CASAC) Peer Review of the Agency's 2nd Draft Ozone Staff Paper. EPA-CASAC-07-001.
US Environmental Protection Agency, Washington, DC. Daniels, W.J., Lee, S. and Miller, A. (2003). EPA’s exposure assessment tools and models. Applied Occupational and Environmental Hygiene. 18: 82–86.
124
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
Driver, J. and Zartarian, V. (2008). Residential exposure model algorithms: comparisons by exposure pathway across four models. Presentation at ACS Conference, Philadelphia, PA. Available on request. See also http://oasys2.confex.com/acs/236nm/techprogram/ (under title "Residential pesticide exposure assessment"). Dubus, I.G., Brown, C.D. and Beulke, S. (2003). Sensitivity analyses for four pesticide leaching models. Pesticide Management Science. 59: 962–982. Duggan, A., Charnley, G., Chen, W., Chukwudebe, A., Hawk, R., Krieger, R.I., Ross, J. and Yarborough, C. (2003). Di-alkyl phosphate biomonitoring data: assessing cumulative exposure to organophosphate pesticides. Regulatory Toxicology and Pharmacology. 37: 382–395. Emery, C.E., Jia, Y., Kemball-Cook, S., Mansell, G., Lau, S. and Yarwood, G. (2004). Modeling an August 13–22, 1999 Ozone Episode in the Dallas/Fort Worth Area. ENVIRON, Novato, CA. Available at: www.camx.com/publ/pdfs/Rev_DFW_1999_Report_9-30-04.pdf ENVIRON (2006). User's Guide: Comprehensive Air Quality Model With Extensions. ENVIRON International Corporation, Novato, CA. Available at: http://mail.ess.co.at/MANUALS/AIRWARE/CAMxUsersGuide_v4.30.pdf Farrier, D.S. and Pandian, M.D. (2002). CARES (Cumulative and Aggregate Risk Evaluation System) Users Guide. CropLife America, Washington, DC. Available at: www.epa.gov/scipoly/SAP/meetings/2002/april/cares/cares_documentation/umanual.pdf Fehrenbacher, M.C. and Hummel, A.A. (1996). Evaluation of the mass balance model used by the Environmental Protection Agency for estimating inhalation exposure to new chemical substances. American Industrial Hygiene Association Journal. 57: 526–536. Furtaw, E.J. (2001). An overview of human exposure modeling activities at the US EPA's National Exposure Research Laboratory. Toxicology and Industrial Health. 17: 302–314. Guo, Z. (2000a). Simulation Tool Kit for Indoor Air Quality and Inhalation Exposure (IAQX). User's Guide. EPA-600/R-00-094.
US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/appcdwww/iemb/Docs/IAQX_doc.pdf Guo, Z. (2000b). Development of a Windows-based indoor air quality simulation software package. Environmental Modeling & Software. 15: 403–410. Guo, Z. (2002a). Review of indoor emission source models: Part 1. Overview. Environmental Pollution. 120: 533–549. Guo, Z. (2002b). Review of indoor emission source models: Part 2. Parameter estimation. Environmental Pollution. 120: 551–564. Hore, P., Zartarian, V., Xue, J., Özkaynak, H., Wang, S.W., Yang, Y.C., Chu, P.L., Sheldon, L., Robson, M., Needham, L., Barr, D., Freeman, N., Georgopoulos, P. and Lioy, P.J. (2005). Children's residential exposure to chlorpyrifos: Application of CPPAES field measurements of chlorpyrifos and TCPy within MENTOR/SHEDS-Pesticides model. Science of the Total Environment. 366: 525–537. Isakov, V., Graham, S., Burke, J. and Özkaynak, H. (2006). Linking air quality and exposure models. Journal of the Air & Waste Management Association. September, 26–29. Jayjock, M.A., Chaisson, C.F., Arnold, S. and Dederick, E.J. (2007). Modeling framework for human exposure assessment. Journal of Exposure Science and Environmental Epidemiology. Suppl. 1: S81–S89. Johnson, P.C. and Ettinger, R.A. (1991). Heuristic model for predicting the intrusion rate of contaminant vapors in buildings. Environmental Science and Technology. 25: 1445–1452. Kidwell, J., Tomerlin, J. and Peterson, B. (2000). DEEM™ (Dietary Exposure Evaluation Model) Users Manual. Novigen Sciences, Inc., Washington, DC. Available at: www.epa.gov/scipoly/SAP/meetings/2004/april/deemmanual.pdf Koontz, M.D. and Nagda, N.L. (1991). A multichamber model for assessing consumer inhalation exposure. Indoor Air. 1: 593–605. Kumar, N. and Lurmann, F.W. (1997). Peer Review of Environ's Ozone Source Apportionment Technology and the CAMx Air Quality Models. Revised Final Report. STI-996203-1732-RFR. Santa Rosa, CA.
Available at: www.camx.com/publ/pdfs/stipeer.pdf LifeLine Group, Inc. (2000). LifeLine™ Users Manual, version 1.0. Software for Modeling Aggregate and Cumulative Exposures to Pesticides. Available at: www.epa.gov/scipoly/SAP/meetings/2001/march28/lifeline.pdf Long, T., Johnson, T. and Capel, J. (2008). Comparison of continuous personal ozone measurements to ambient concentrations and exposure estimates from the APEX-ozone exposure model. Presentation at Joint International Society for Environmental Epidemiology & International Society of Exposure Analysis Conference. Pasadena, CA. Available on request. See also http://secure.awma.org/events/isee-isea/images/Conference_Abstract_Book.pdf Lupo, P. and Symanski, E. (2008). Model to monitor comparison of hazardous air pollutants (HAPs) in Texas. Presentation at Joint International Society for Environmental Epidemiology & International Society of Exposure Analysis Conference. Pasadena, CA. Available on request. See also http://secure.awma.org/events/isee-isea/images/Conference_Abstract_Book.pdf (under title "A comparative analysis of monitored ambient hazardous air pollutant levels with modeled estimates from the assessment system for population exposure nationwide") Marin, C.M., Guvanasen, V. and Saleem, Z.A. (2003). The 3MRA risk assessment framework: a flexible approach for performing multimedia, multipathway, and multireceptor risk assessments under uncertainty. Human and Ecological Risk Assessment. 9: 1655–1677. Maslia, M.L., Sautner, J.B., Aral, M.M., Reyes, J.J., Abraham, J.E. and Williams, R.C. (2000). Using water-distribution system modeling to assist epidemiologic investigations. Journal of Water Resources Planning and Management. 126: 180–198. Matthiessen, R.C. (1986). Estimating chemical exposure levels in the workplace. Chemical Engineering Progress. April: 30–34. McCurdy, T., Glen, G., Smith, L. and Lakkadi, Y. (2000). The National Exposure Research Laboratory's consolidated human activity database. Journal of Exposure Analysis and Environmental Epidemiology. 10: 566–578. Morley, K., Janke, R., Murray, R. and Fox, K. (2007).
Drinking water contamination warning systems: water utilities driving water security research. Journal of the American Water Works Association. June: 40–46. Morris, R.E., Yarwood, G., Emery, C. and Koo, B. (2004). Development and application of the CAMx regional one-atmosphere model to treat ozone, particulate matter, visibility, air toxics and mercury. Presentation at 97th Annual Conference and Exhibition of the A&WMA, Indianapolis, IN. Available at: www.camx.com/publ/pdfs/CAMx.549_AWMA_2004_Final032604.pdf Nagda, N.L., Koontz, M.D. and Kennedy, P.W. (1995). Small-chamber and research-house testing of tile adhesive emissions. Indoor Air. 5: 189–195. NAS (2007). Models in Environmental Regulatory Decision Making. Report of the Committee on Models in the Regulatory Decision Process. National Research Council, National Academy of Sciences, Washington, DC. Özkaynak, H., Palma, T., Touma, J.S. and Thurman, J. (2008). Modeling population exposures to outdoor sources of hazardous air pollutants. Journal of Exposure Science and Environmental Epidemiology. 18: 45–58. Paine, R.J., Lee, R.F., Brode, R., Wilson, R.B., Cimorelli, A.J., Perry, S.G., Weil, J.C., Venkatram, A. and Peters, W.D. (1998). Model Evaluation Results for AERMOD. Available at: www.epa.gov/scram001/7thconf/aermod/evalrep.pdf Pascual, P., Stiber, N. and Sunderland, E. (2003). Draft Guidance on the Development, Evaluation, and Application of Regulatory Environmental Models. US Environmental Protection Agency, Washington, DC. Available at: www.modeling.uga.edu/tauc/other_papers/CREM%20Guidance%20Draft%2012_03.pdf Peterson, B., Youngren, S., Walls, C., Barraj, L. and Peterson, S. (2000). Calendex™: Calendar-Based Dietary & Non-Dietary Aggregate and Cumulative Exposure Software System. Novigen Sciences, Inc., Washington, DC, and Durango Software, Bethesda, MD. Available at: www.epa.gov/scipoly/SAP/meetings/2000/september/calendex_sap_document_draft8_aug_2900.pdf Price, P.S. and Zartarian, V.G. (2001).
Panel discussion of results from residential aggregate exposure assessment models. Presentation at International Society for Exposure Analysis Conference, Charleston, SC. Available on request.
Price, P.S., Young, J.S. and Chaisson, C.F. (2001). Assessing aggregate and cumulative pesticide risks using a probabilistic model. Annals of Occupational Hygiene. 45: S131–S142. Reiss, R. and Griffin, J. (2006). A probabilistic model for acute bystander exposure and risk assessment for soil fumigants. Atmospheric Environment. 40: 3548–3560. Reiss, R. and Griffin, J. (2008). User's Guide for the Probabilistic Exposure and Risk Model for FUMigants (PERFUM). Exponent, Inc., Alexandria, VA. Reiss, R., Schoenig, G.P.I. and Wright, G.A. (2006). Development of factors for estimating swimmers' exposures to chemicals in swimming pools. Human and Ecological Risk Assessment. 12: 139–156. Rosenbaum, A. (2005). The HAPEM5 User's Guide. Hazardous Air Pollutant Exposure Model. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/ttn/fera/hapem5/hapem5_guide.pdf Rosenbaum, A. and Huang, M. (2007). The HAPEM6 User's Guide. Hazardous Air Pollutant Exposure Model. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/ttn/fera/hapem6/HAPEM6_Guide.pdf Rosenbaum, A., Huang, M. and Cohn, J. (2002). Comparison of HAPEM5 and APEX3. Technical Memorandum. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/ttn/fera/data/apex322/apex3_hapem5_comparison.pdf Rosenbaum, A., Hartley, S., Holder, C., Turley, A. and Graham, S. (2008). Nitrogen dioxide (NO2) exposure assessment in support of US EPA's NAAQS review: application of AERMOD and APEX to Philadelphia County. Presentation at Joint International Society for Environmental Epidemiology & International Society of Exposure Analysis Conference, Pasadena, CA. Available on request. See also http://secure.awma.org/events/isee-isea/images/Conference_Abstract_Book.pdf Rossman, L.A. and Boulos, P.F. (1996). Numerical methods for modeling water quality in distribution systems: a comparison. Journal of Water Resources Planning and Management.
122: 137–146. Rossman, L.A., Clark, R.M. and Grayman, W.M. (1994). Modeling chlorine residuals in drinking water distribution systems. Journal of Environmental Engineering. 120: 803–820. SAB (1996). An SAB Report: The Cumulative Exposure Project. EPA-SAB-IHEC-ADV-96-004. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/sab/pdf/ihea9604.pdf SAB (1998). Advisory on the Total Risk Integrated Methodology (TRIM). EPA-SAB-EC-ADV-99-003. US Environmental Protection Agency, Washington, DC. SAB (2000). Advisory on the Agency's Total Risk Integrated Methodology (TRIM). EPA-SAB-EC-ADV-00-004. US Environmental Protection Agency, Washington, DC. SAB (2001). NATA – Evaluating The National-Scale Air Toxics Assessment 1996 Data – An SAB Advisory. SAB-EC-ADV-02-001. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/ttn/atw/sab/sabrept1201.pdf SAB (2004). EPA's Multimedia, Multipathway, and Multireceptor Risk Assessment (3MRA) Modeling System. EPA-SAB-05-003. US Environmental Protection Agency, Washington, DC. SAB (2006a). SAB Consultation on Efforts to Update EPA's Exposure Assessment Guidelines. EPA-SAB-07-003. US Environmental Protection Agency, Washington, DC. SAB (2006b). Review of Agency Draft Guidance on the Development, Evaluation, and Application of Regulatory Environmental Models and Models Knowledge Database. EPA-SAB-06-009. US Environmental Protection Agency, Washington, DC. SAP (2000a). Sets of Scientific Issues Being Considered by the Environmental Protection Agency Regarding: Session II – Dietary Exposure Evaluation Model (DEEM) and MaxLIP (Maximum Likelihood Imputation Procedure) Pesticide Residue Decompositing Procedures and Software; Session III – Dietary Exposure Evaluation Model (DEEM); Session IV – Consultation on Development and Use of Distributions of Pesticide Concentrations in Drinking Water for FQPA Assessments. SAP Report No. 2000-01. FIFRA Scientific Advisory Panel Meeting, Arlington, VA.
SAP (2000b). A Set of Scientific Issues Being Considered by the Environmental Protection Agency
AN OVERVIEW OF EXPOSURE ASSESSMENT MODELS USED BY THE US EPA
127
Regarding: Session III – Residential Exposure Models – REx; Session IV – Calendex™ Model; Session V – Aggregate and Cumulative Assessments Using LifeLine™ – A Case Study Using Three Hypothetical Pesticides. SAP Report No. 2000-03. FIFRA Scientific Advisory Panel Meeting, Arlington, VA. SAP (2001). A Set of Scientific Issues Being Considered by the Environmental Protection Agency Regarding: HRI LifeLine™ – System Operation Review. SAP Report No. 2001-07. FIFRA Scientific Advisory Panel Meeting, Arlington, VA. SAP (2002a). A Set of Scientific Issues Being Considered by the Environmental Protection Agency Regarding: Cumulative and Aggregate Risk Evaluation System (CARES) Model Review. SAP Meeting Minutes No. 2002-02. FIFRA Scientific Advisory Panel Meeting, Arlington, VA. SAP (2002b). Stochastic Human Exposure and Dose Simulation Model (SHEDS): System Operation Review of a Scenario Specific Model (SHEDS-Wood) to Estimate Children's Exposure and Dose to Wood Preservatives from Treated Playsets and Residential Decks Using EPA/ORD's SHEDS Probabilistic Model. SAP Meeting Minutes No. 2002-06. FIFRA Scientific Advisory Panel Meeting, Arlington, VA. SAP (2004). A Model Comparison of Dietary and Aggregate Exposure in Calendex, CARES, and Lifeline. FIFRA Scientific Advisory Panel Meeting, Arlington, VA. Available at: www.epa.gov/scipoly/SAP/meetings/2004/april/sapminutesapril2930.pdf SAP (2007). Review of EPA/ORD/NERL's SHEDS-Multimedia Model, Aggregate Version 3. FIFRA Scientific Advisory Panel Meeting, Arlington, VA. Available at: www.epa.gov/scipoly/SAP/meetings/2007/081407_mtg.htm Shurdut, B., Barraj, L. and Francis, M. (1998). Aggregate exposures under the Food Quality Protection Act: an approach using chlorpyrifos. Regulatory Toxicology and Pharmacology. 28: 165–177. Silverman, K.C., Tell, J.T., Sargent, E.V. and Qiu, Z. (2007). Comparison of the Industrial Source Complex and AERMOD dispersion models: case study for human health risk assessment.
Journal of the Air & Waste Management Association. 57: 1439–1446. Stallings, C., Tippett, J.A., Glen, G. and Smith, L. (2002). CHAD User's Guide: Extracting Human Activity Information from CHAD on the PC. US Environmental Protection Agency, Research Triangle Park, NC. Stallings, C., Zartarian, V. and Glen, G. (2007). SHEDS: Multimedia User Manual, Version 3. Stochastic Human Exposure and Dose Simulation Model for Multimedia, Multipathway Chemicals. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/scipoly/SAP/meetings/2007/august/sheds_multimedia3_usersguide_06_13.pdf Stallings, C., Graham, S., Glen, G., Smith, L. and Isaacs, K. (2008). SHEDS AirToxics Users' and Technical Guide. US Environmental Protection Agency, Research Triangle Park, NC. Suarez, L.A. (2006). PRZM-3, A Model for Predicting Pesticide and Nitrogen Fate in the Crop Root and Unsaturated Soil Zones: Users Manual. EPA/600/R-05/111. US Environmental Protection Agency, Athens, GA. Sunderland, E.M., Cohen, M.D., Selin, N.E. and Chmura, G.L. (2008). Reconciling models and measurements to assess trends in atmospheric mercury deposition. Environmental Pollution. 156: 526–535. Touma, J.S., Isakov, V. and Ching, J. (2006). Air quality modeling of hazardous pollutants: current status and future directions. Journal of the Air & Waste Management Association. 56: 547–558. Tsiros, I.X. and Dimopoulos, I.F. (2007). An evaluation of the performance of the soil temperature simulation algorithms used in the PRZM model. Journal of Environmental Science and Health, Part A. 42: 661–670. Tufford, D.L., McKellar, H.N., Flora, J.R.V. and Meadows, M.E. (1999). A reservoir model for use in regional water resources management. Lake and Reservoir Management. 15: 220–230. US EPA (1989). Risk Assessment Guidance for Superfund: Volume I. Human Health Evaluation Manual (Part A). EPA/540/I-89/002. US Environmental Protection Agency, Washington, DC. US EPA (1992). Guidelines for Exposure Assessment.
EPA/600/Z-92/001. US Environmental Protection Agency, Washington, DC. US EPA (1995). User’s Guide for the Industrial Source Complex (ISC3) Dispersion Models. Volume I
– User Instructions. EPA-454/B-95-003a. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/scram001/userg/regmod/isc3v1.pdf US EPA (1997a). Draft Standard Operating Procedures (SOPs) for Residential Exposure Assessments. Contract no. 68-W6-0030. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppfead1/trac/science/trac6a05.pdf US EPA (1997b). Exposure Factors Handbook. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/NCEA/pdfs/efh/front.pdf US EPA (1999). SWIMODEL and the Validation Report (Interim Guidance). US Environmental Protection Agency, Washington, DC. US EPA (2000a). Policy and Program Requirements for the Mandatory Agency-Wide Quality System. EPA Order 5360.1A2. US Environmental Protection Agency, Washington, DC. US EPA (2000b). User's Guide for the Assessment System for Population Exposure Nationwide (ASPEN, Version 1.1) Model. EPA-454/R-00-017. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/scram001/userg/other/aspenug.pdf US EPA (2000c). EPANET 2 Users Manual. EPA/600/R-00/057. US Environmental Protection Agency, Cincinnati, OH. Available at: www.epa.gov/nrmrl/wswrd/dw/epanet/EN2manual.PDF US EPA (2001a). FIRST – (F)QPA (I)ndex (R)eservoir (S)creening (T)ool Users Manual. Version 1.0 – Tier One Screening Model for Drinking Water Pesticide Exposure. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppefed1/models/water/first_users_manual.htm US EPA (2001b). GENEEC – (GEN)eric (E)stimated (E)nvironmental (C)oncentration Model. Users Manual. Version 2.0 – Tier One Screening Model for Pesticide Aquatic Ecological Exposure Assessment. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppefed1/models/water/geneec2_users_manual.htm US EPA (2001c). SCI-GROW – (S)creening (C)oncentration (I)n (GRO)und (W)ater. Users Manual.
Tier One Screening Model for Ground Water Pesticide Exposure. US Environmental Protection Agency, Washington, DC. US EPA (2001d). Better Assessment Science Integrating Point and Nonpoint Sources: Basins Users Manual. EPA-823-B-01-001. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/waterscience/basins/bsnsdocs.html#buser US EPA (2001e). Wall Paint Exposure Assessment Model (WPEM): User's Guide. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppt/exposure/pubs/wpemman.pdf US EPA (2002a). Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by the Environmental Protection Agency. EPA/260R-02-008. US Environmental Protection Agency, Washington, DC. US EPA (2002b). Compendium of Reports from the Peer Review Process for AERMOD. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/scram001/7thconf/aermod/dockrpt.pdf US EPA (2002c). OSWER Draft Guidance for Evaluating the Vapor Intrusion to Indoor Air Pathway from Groundwater and Soils (Subsurface Vapor Intrusion Guidance). EPA530-D-02-004. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/osw/hazard/correctiveaction/eis/vapor/complete.pdf US EPA (2002d). Evaluation of TRIM.FaTE. Volume I: Approach and Initial Findings. EPA-453/R-02-012. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/ttn/fera/data/trim/eval_rept_vol1_2002.pdf US EPA (2003a). Total Risk Integrated Methodology TRIM.Expo Inhalation User's Document. Volume I: Air Pollutants Exposure Model (APEX, version 3) User's Guide. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/ttn/fera/data/apex322/apexusersguidevoli4-24-03.pdf US EPA (2003b). User's Manual – Swimmer Exposure Assessment Model (SWIMODEL). US Environmental Protection Agency, Washington, DC.
Available at: www.epa.gov/oppad001/swimodelusersguide.pdf US EPA (2003c). Total Risk Integrated Methodology TRIM.FaTE User’s Document. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/ttn/fera/data/trim/ug_intro.pdf
AN OVERVIEW OF EXPOSURE ASSESSMENT MODELS USED BY THE US EPA
US EPA (2003d). Multimedia, Multipathway, and Multireceptor Risk Assessment (3MRA) Modeling System. Volume I: Modeling System and Science. EPA 530-D-03-001a. US Environmental Protection Agency, Athens, GA, Research Triangle Park, NC, and Washington, DC. Available at: http://epa.gov/osw/hazard/wastetypes/wasteid/hwirwste/sab03/vol1/1_toc.pdf US EPA (2004a). AERMOD: Description of Model Formulation. EPA-454/R-03-004. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/scram001/7thconf/aermod/aermod_mfd.pdf US EPA (2004b). Use of Pharmacokinetic Data to Refine Carbaryl Risk Estimates from Oral and Dermal Exposure. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/scipoly/sap/meetings/2004/december2/carbarylpksap.pdf US EPA (2004c). User’s Guide for Evaluating Subsurface Vapor Intrusion Into Buildings. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oswer/riskassessment/airmodel/pdf/2004_0222_3phase_users_guide.pdf US EPA (2004d). ChemSTEER. Chemical Screening Tool for Exposures and Environmental Releases. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppt/exposure/pubs/chemsteerdl.htm US EPA (2005a). Revision to the Guideline on Air Quality Models: Adoption of a Preferred General Purpose (Flat and Complex Terrain) Dispersion Model and Other Revisions; Final Rule. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/scram001/guidance/guide/appw_05.pdf US EPA (2005b). Regulatory Impact Analysis of the Clean Air Mercury Rule, Final Report. EPA-452/R-05-003. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/ttn/ecas/regdata/RIAs/mercury_ria_final.pdf US EPA (2005c). Evaluation of TRIM.FaTE. Volume II: Model Performance Focusing on Mercury Test Case. EPA-453/R-05-002.
US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/ttn/fera/data/trim/eval_rept_vol2_2005.pdf US EPA (2006a). System Life Cycle Management Policy. US Environmental Protection Agency, Washington, DC. US EPA (2006b). Peer Review Handbook. EPA/100/B-06/002. US Environmental Protection Agency, Washington, DC. US EPA (2006c). 1999 National-Scale Air Toxics Assessment. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/ttn/atw/nata1999/nsata99.html US EPA (2006d). Technical Support Document for the Proposed Mobile Source Air Toxics Rule: Ozone Modeling. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/otaq/regs/toxics/tsd-oz-md.pdf US EPA (2006e). Total Risk Integrated Methodology (TRIM) Air Pollutants Exposure Model Documentation (TRIM.Expo/APEX, version 4). Volume I: User’s Guide. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/ttn/fera/data/apex/APEX4UsersGuideJuly2006.pdf US EPA (2006f). Organophosphorus Cumulative Risk Assessment – 2006 Update. US Environmental Protection Agency, Washington, DC. Available at: www.panna.org/documents/epaFqpaSummary20060802.pdf US EPA (2007a). Community Multiscale Air Quality (CMAQ) Model. Atmospheric Sciences Modeling Division NOAA-EPA Sponsorship. Available at: www.epa.gov/AMD/CMAQ/cmaq_model.html US EPA (2007b). Control of Hazardous Air Pollutants from Mobile Sources. Final Rule. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/fedrgstr/EPA-AIR/2007/February/Day-26/a2667a.pdf US EPA (2007c). Ozone Population Exposure Analysis for Selected Urban Areas. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/ttn/naaqs/standards/ozone/data/2007_07_o3_exposure_tsd.pdf US EPA (2007d). Multi-Chamber Concentration and Exposure Model (MCCEM). US Environmental
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
Protection Agency, Washington, DC. Available at: www.epa.gov/oppt/exposure/pubs/mccem.htm US EPA (2007e). Wall Paint Exposure Assessment Model (WPEM). US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppt/exposure/pubs/wpem.htm US EPA (2007f). Pesticide Inert Risk Assessment Tool. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppt/exposure/pubs/pirat.htm US EPA (2007g). Revised N-Methyl Carbamate Cumulative Risk Assessment. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppsrrd1/REDs/nmc_revised_cra.pdf US EPA (2007h). Reregistration Eligibility Decision for Aldicarb. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/opp00001/reregistration/REDs/aldicarb_red.pdf US EPA (2007i). The HEM-3 User’s Guide. HEM-3 Human Exposure Model, Version 1.1.0 (AERMOD version). US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/ttn/fera/data/hem/hem3_users_guide.pdf US EPA (2007j). ChemSTEER Overview. Chemical Screening Tool for Exposures and Environmental Releases. Sustainable Futures Initiative. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppt/exposure/pubs/chemsteerslideshow.htm US EPA (2007k). Exposure and Fate Assessment Screening Tool (E-FAST). Documentation Manual. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppt/exposure/pubs/efast2man.pdf US EPA (2007l). Internet Geographical Exposure Modeling System (IGEMS). US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppt/exposure/pubs/gems.htm US EPA (2008a). Guidance on the Development, Evaluation and Application of Environmental Models. Draft. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/crem/library/CREM-Guidance-Public-Review-Draft.pdf US EPA (2008b). AERMOD Implementation Guide.
US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/scram001/7thconf/aermod/aermod_implmtn_guide_09jan2008.pdf US EPA (2008c). Chromated Copper Arsenate (CCA): Final Probabilistic Risk Assessment for Children Who Contact CCA-Treated Playsets and Decks. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppad001/reregistration/cca/final_cca_factsheet.htm US EPA (2008d). Integrated Modeling for Integrated Environmental Decision Making. Draft White Paper. US Environmental Protection Agency, Washington, DC. US EPA (2009). Estimation Program Interface (EPI) Suite. US Environmental Protection Agency, Washington, DC. Available at: www.epa.gov/oppt/exposure/pubs/episuite.htm Wagstrom, K.M., Pandis, S.N., Yarwood, G., Wilson, G.M. and Morris, R.E. (2008). Development and application of a computationally efficient particulate matter apportionment algorithm in a three-dimensional chemical transport model. Atmospheric Environment. 42: 5650–5659. WHO (2004). IPCS Risk Assessment Terminology. Part 2: IPCS Glossary of Key Exposure Assessment Terminology. Harmonization Project Document No. 1. International Programme on Chemical Safety, World Health Organization, Geneva. Wool, T.A., Ambrose, R.B. Jr and Martin, J.L. (2001). Water Quality Analysis Simulation Program (WASP). US Environmental Protection Agency, Atlanta, GA. Wool, T.A., Davie, S.R. and Rodriguez, H.N. (2003). Development of three-dimensional hydrodynamic and water quality model to support total maximum daily load decision process for the Neuse River Estuary, North Carolina. Journal of Water Resources Planning and Management. 129: 295–306. Wright, J., Shaw, M. and Keeler, L. (2002). Refinements in acute dietary exposure assessments for chlorpyrifos. Journal of Agricultural and Food Chemistry. 50: 235–241. Xue, J., Zartarian, V.G., Özkaynak, H., Liu, S., Glen, G. and Smith, L. (2004). Application and evaluation of an aggregate physically-based Monte Carlo probabilistic model for quantifying children’s residential exposure and dose to chlorpyrifos. Presentation at International Society
of Exposure Analysis Conference, Philadelphia, PA. Available on request. See also http://cfpub.epa.gov/ordpubs/nerlpubs/nerlpubs_heasd_2004.cfm?ActType=Publications&detype=document&excCol=General Xue, J., Zartarian, V., Özkaynak, H., Dang, W., Glen, G., Smith, L. and Stallings, C. (2006). A probabilistic arsenic exposure assessment for children who contact chromated copper arsenate (CCA)-treated playsets and decks, Part 2: sensitivity and uncertainty analysis. Risk Analysis. 26: 533–541. Xue, J., Zartarian, V., Weng, S. and Georgopoulos, P. (2008). Model estimates of arsenic exposure and dose and evaluation with 2003 NHANES data. Presentation at Joint International Society for Environmental Epidemiology & International Society of Exposure Analysis Conference, Pasadena, CA. Available on request. See also http://secure.awma.org/events/isee-isea/images/Conference_Abstract_Book.pdf Young, B., Mihlan, G., Lantz, J., Pandian, M. and Reed, H. (2006). CARES Dietary Minute Module (DMM) User Guide, version 2. CARES (Cumulative and Aggregate Risk Evaluation System). Bayer CropScience, Research Triangle Park, NC and Infoscientific, Henderson, NV. Available at: http://cares.ilsi.org/NR/rdonlyres/736F70D6-402B-477B-B168-75615CF92044/0/CARESDMMUserGuide12Jul06.pdf Young, B., Driver, J., Zartarian, V., Xue, J., Smith, L., Glen, G., Johnson, J., Delmaar, C., Tulve, N. and Evans, J. (2008). Dermal, inhalation, and incidental exposure results from the models: how did they handle the data? Presentation at International Society for Environmental Epidemiology & International Society of Exposure Analysis, Pasadena, CA. Available on request. See also http://secure.awma.org/events/isee-isea/images/Conference_Abstract_Book.pdf Zartarian, V., Ott, W.R. and Duan, N. (1997). A quantitative definition of exposure and related concepts. Journal of Exposure Analysis and Environmental Epidemiology. 7: 411–437. Zartarian, V.G., Özkaynak, H., Burke, J.M., Zufall, M.J., Rigas, M.L. and Furtaw, E.J. (2000). A modeling framework for estimating children’s residential exposure and dose to chlorpyrifos via dermal residue contact and non-dietary ingestion. Environmental Health Perspectives. 108: 505–514. Zartarian, V., Bahadori, T. and McKone, T. (2005a). Adoption of an official ISEA glossary. Journal of Exposure Analysis and Environmental Epidemiology. 15: 1–5. Zartarian, V., Xue, J., Özkaynak, H., Dang, W., Glen, G., Smith, L. and Stallings, C. (2005b). A Probabilistic Exposure Assessment For Children Who Contact CCA-Treated Playsets and Decks Using the Stochastic Human Exposure and Dose Simulation for the Wood Preservatives Exposure Scenario (SHEDS-Wood). Final Report. US Environmental Protection Agency, Research Triangle Park, NC. Available at: www.epa.gov/heasd/sheds/CCA_all.pdf Zartarian, V., Xue, J., Özkaynak, H., Dang, W., Glen, G., Smith, L. and Stallings, C. (2006). A probabilistic arsenic exposure assessment for children who contact CCA-treated playsets and decks, Part 1: model methodology, variability results, and model evaluation. Risk Analysis. 26: 515–531.
PART II Aquatic Modelling and Uncertainty
CHAPTER 4
Modelling Chemical Speciation: Thermodynamics, Kinetics and Uncertainty

Jeanne M. VanBriesen, Mitchell Small, Christopher Weber and Jessica Wilson
4.1 INTRODUCTION
Chemical speciation refers to the distribution of an element amongst chemical species in a system. It is critical for understanding chemical toxicity, bioavailability, and environmental fate and transport. Despite the central importance of knowing the full speciation of a chemical in order to predict its behaviour in a system, it is generally not possible to determine full speciation using analytical chemistry methods alone. Most techniques are focused on the detection of free metal ion concentrations (e.g., Galceran et al., 2004; Salaun et al., 2003; Zeng et al., 2003) or total metal concentrations (e.g., Wang et al., 1998). Direct speciation measurement using traditional analytical methods requires complex and generally hyphenated techniques (Moldovan et al., 2004; Sanzmedel, 1995; Yan and Ni, 2001). A relatively new method, capillary electrophoresis (CE), allows the direct measurement of the chemical speciation of some metals (Beale, 1998; Dabek-Zlotorzynska et al., 1998; Haumann and Bachmann, 1995). However, because environmental concentrations of most metals of interest are low, and because many relevant forms of metals cannot be measured directly, analytical techniques often are not effective for determining overall speciation (Fytianos, 2001). Thus chemical speciation determination generally relies on utilising analytical methods in conjunction with chemical speciation models.

Modelling of Pollutants in Complex Environmental Systems, Volume II, edited by Grady Hanrahan. © 2010 ILM Publications, a trading division of International Labmate Limited.
4.2 CHEMICAL SPECIATION MODELLING
Multicomponent thermodynamic equilibrium speciation modelling has been described in detail (e.g., Morel and Morgan, 1972), and has been incorporated into publicly available and commercial modelling software (e.g., MINEQL; Westall et al., 1976). Components are selected such that all species can be formed by the components, and no component can be formed by a combination of other components. Mass balances can thus be written for each component (a proton balance is used for acidic hydrogen), and mass action equations define the interrelationships among the different components. For example, consider the system of ethylenediaminetetraacetate (EDTA) with a limited set of environmentally relevant metals (Cu²⁺, Zn²⁺, Ni²⁺, Ca²⁺, Mn²⁺, Mg²⁺, Pb²⁺, Co²⁺, Al³⁺ and Fe³⁺). Eleven mass balances (one for each component metal plus EDTA) and 28 mass action equations (one for each species formed by the components, including a protonated form, H-EDTA³⁻) define the system, and their solution predicts the equilibrium concentration of each species and component. Mass balances are set equal to the total concentration of each component in the system (generally measured analytically), and mass action equations are defined by equilibrium constants (K or β values, also determined experimentally). Thus the mass balances are defined as

    M_j = \sum_{i=1}^{N_s} A_{ij} \frac{x_i}{\gamma_i}          (4.1)

where M_j is the total mass of component j; A_ij is the stoichiometric coefficient giving the number of moles of component j in species i; x_i is the activity of the aqueous species i; γ_i is the activity coefficient of species i (so that x_i/γ_i is its concentration); and N_s is the number of species. The mass action equations are defined as

    x_i = \beta_i \prod_{j=1}^{N_c} c_j^{A_{ij}}                (4.2)

where β_i is the overall equilibrium formation constant for species i, c_j is the free concentration of component j, and N_c is the number of components. Table 4.1 provides the species and their reported formation constants from the Joint Expert Speciation System (JESS) thermodynamic database (May, 2000; May and Murray, 2001).

Although speciation modelling is highly informative, it involves significant uncertainties (Nitzsche et al., 2000). Following the framework of Finkel (1990), uncertainty can, in general, be divided into four distinct types: decision rule uncertainty, model uncertainty, parameter uncertainty, and parameter variability (Finkel, 1990; Hertwich et al., 1999). In the context of chemical speciation modelling, the last three of Finkel's types are important. When we consider decision-making based on speciation, the first type is also important. Model uncertainty includes the decision of whether to include certain species or certain processes that may or may not be important in the given application. For speciation modelling of metals, these might include humic complexation, changes in redox states, or adsorption to solids. Model
Table 4.1: Species and reported log β° values for the 10 metal–EDTA system.

Species          log β°        Species          log β°
H-EDTA³⁻          10.61        Mn-EDTA²⁻         15.38
H₂-EDTA²⁻         17.68        MnH-EDTA⁻         19.1
H₃-EDTA⁻          19.92        Mg-EDTA²⁻         10.27
H₄-EDTA           23.19        MgH-EDTA⁻         15.1
Cu-EDTA²⁻         20.36        Pb-EDTA²⁻         19.67
CuH-EDTA⁻         23.9         PbH-EDTA⁻         21.86
CuOH-EDTA³⁻       22.39        Co-EDTA²⁻         17.94
Zn-EDTA²⁻         18.15        CoH-EDTA⁻         21.34
ZnH-EDTA⁻         21.7         Al-EDTA⁻          19.12
ZnOH-EDTA³⁻       19.9         AlH-EDTA          21.6
Ni-EDTA²⁻         20.13        AlOH-EDTA²⁻       27.19
NiH-EDTA⁻         24           Fe-EDTA⁻          27.52
NiOH-EDTA³⁻       21.8         FeH-EDTA          29.2
Ca-EDTA²⁻         12.38        FeOH-EDTA²⁻       33.8
CaH-EDTA⁻         16
uncertainty would also include the chosen models used for humic complexation or solid-phase adsorption (if applicable), in addition to the assumption of equilibrium conditions, which is of vital importance to the appropriate utilisation of speciation models (Fytianos, 2001). Finally, model uncertainty includes the models used to correct thermodynamic data for ionic strength, temperature and medium interactions, although this uncertainty could also be considered to be part of parameter uncertainty and variability, as cited thermodynamic data have usually already been corrected using these models (May, 2000). Chemical speciation modelling is thus strongly dependent on the presumption of equilibrium, and on the values of the equilibrium constants determined experimentally.
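The component/species formulation of Equations 4.1 and 4.2 can be made concrete with a minimal numerical sketch. The system below is deliberately reduced: a single metal (Ca²⁺) competing with protons for EDTA at a fixed pH of 7, ideal behaviour assumed (all γᵢ = 1), and only four complexes. The log β values are from Table 4.1, but the damped successive-substitution scheme is just one simple way to solve the coupled mass balances, not the algorithm used by MINEQL or Visual MINTEQ.

```python
import math

# Minimal sketch of Equations 4.1 and 4.2 for a reduced system: one metal
# (Ca2+) competing with protons for EDTA at fixed pH 7. Assumptions: ideal
# solution (all gamma_i = 1), and only the four complexes listed below.
# log beta values are taken from Table 4.1.
H = 1.0e-7  # fixed free H+ concentration (pH 7)

LOG_BETA = {
    "Ca-EDTA2-": 12.38,   # Ca2+ + EDTA4-       -> Ca-EDTA2-
    "CaH-EDTA-": 16.0,    # Ca2+ + H+ + EDTA4-  -> CaH-EDTA-
    "H-EDTA3-": 10.61,    # H+ + EDTA4-         -> H-EDTA3-
    "H2-EDTA2-": 17.68,   # 2 H+ + EDTA4-       -> H2-EDTA2-
}
BETA = {name: 10.0 ** lb for name, lb in LOG_BETA.items()}

def speciate(ca_tot, edta_tot, tol=1e-10, max_iter=2000):
    """Solve the two component mass balances (Eq. 4.1) for free Ca2+ and
    free EDTA4- by damped successive substitution; each complex then
    follows the mass action law (Eq. 4.2)."""
    ca, lig = ca_tot, edta_tot  # initial guess: all component mass free
    for _ in range(max_iter):
        # Total-to-free ratios implied by Eq. 4.2 at the current estimates:
        alpha_ca = 1.0 + BETA["Ca-EDTA2-"] * lig + BETA["CaH-EDTA-"] * H * lig
        alpha_lig = (1.0 + BETA["H-EDTA3-"] * H + BETA["H2-EDTA2-"] * H ** 2
                     + BETA["Ca-EDTA2-"] * ca + BETA["CaH-EDTA-"] * ca * H)
        # Geometric-mean damping keeps the fixed-point iteration from
        # oscillating when the complexes are strong.
        ca_new = math.sqrt(ca * ca_tot / alpha_ca)
        lig_new = math.sqrt(lig * edta_tot / alpha_lig)
        converged = (abs(ca_new - ca) < tol * ca_new
                     and abs(lig_new - lig) < tol * lig_new)
        ca, lig = ca_new, lig_new
        if converged:
            break
    return {
        "Ca2+": ca,
        "EDTA4-": lig,
        "Ca-EDTA2-": BETA["Ca-EDTA2-"] * ca * lig,
        "CaH-EDTA-": BETA["CaH-EDTA-"] * ca * H * lig,
        "H-EDTA3-": BETA["H-EDTA3-"] * H * lig,
        "H2-EDTA2-": BETA["H2-EDTA2-"] * H ** 2 * lig,
    }

if __name__ == "__main__":
    for name, conc in speciate(ca_tot=1.0e-3, edta_tot=1.0e-5).items():
        print(f"{name:>10s}  {conc:.3e} M")
```

With 10⁻³ M total Ca and 10⁻⁵ M total EDTA, essentially all of the ligand is returned as Ca-EDTA²⁻, as expected when the metal is in large excess. A production model such as Visual MINTEQ solves the same balances, typically with Newton–Raphson iteration, activity corrections and a far larger species list.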
4.2.1 Local equilibrium assumption
As described above, chemical speciation models rely on mass balance and thermodynamics to determine the concentration of each species that contains a given component. Chemical speciation is usually predicted with thermodynamic expressions, because the reactions involved take place in the bulk aqueous phase and are generally rapid (e.g., acid/base reactions). The local equilibrium assumption (LEA) states that, because aqueous-phase reactions are reversible and generally rapid compared with other system processes, they may be assumed to reach equilibrium; the thermodynamics of the system components will then be predictive of the final state of the system (Rubin, 1983). This assumption is generally accurate, although notable exceptions have been reported when complexation exchange reactions are slow (Nowack et al., 1997; Pohlmeier and Knoche, 1996; Xue et al., 1995). The LEA is an excellent foundation for modelling speciation when rapid exchange reactions are observed. However, speciation models using the LEA generally
neglect reaction kinetics, and for some systems kinetics play a pivotal role (Brooks, 2005; Nowack et al., 1997; Pohlmeier and Knoche, 1996; Xue et al., 1995). For example, Xue et al. performed a comprehensive study on EDTA exchange kinetics, focusing on reactions of Fe-EDTA exchanging with Ca, Cu and Zn (Xue et al., 1995). They observed that Fe-EDTA exchanges very slowly with higher-concentration, low-stability-constant metals (half-lives of 10–20 days), owing to its high stability and sometimes also to the relative unavailability of Ca, Cu and Zn, which complex with natural ligands such as HCO₃⁻ and organic matter. Hering and Morel (1988) found similar kinetic hindrances in the reverse reaction: the concentration of Ca in natural waters was found to be a master variable controlling Ca-EDTA re-speciation to thermodynamically favoured Fe-EDTA and Zn-EDTA. This is due to a rate limitation in the disjunctive step of the exchange, the step in which the ligand and metal first split apart before the ligand forms a complex with a different metal. This limitation is not a problem if the re-speciation takes place via the adjunctive pathway, in which an intermediate dual complex is formed before the original metal is replaced with the new metal. These two mechanisms are shown below.

    Disjunctive pathway:   AL → A + L          (rate constant k1)

    Adjunctive pathway:    AL + Me → ALMe      (rate constant ka)
                           ALMe → A + MeL      (rate constant kb)

When Ca-EDTA splits into Ca(II) and EDTA by the disjunctive mechanism, another Ca(II) will tend to take its place because of its abundance, even though thermodynamics will favour the replacement of Ca(II) with a trace metal such as Cu(II). Such slow complexation reactions are important, because they can affect biodegradation rates of pollutants such as EDTA. Satroutdinov et al. (2000) report that slow re-speciation results in the effective non-biodegradability of stable EDTA chelates. Willett and Rittmann (2003) report that the cessation of biodegradation of EDTA corresponds to the concentration of dissolved Fe(III), and that the non-biodegradability of this form, along with the slow re-speciation to biologically available Ca-EDTA, reduces EDTA biodegradation in the environment. Similar results by Kagaya et al. (1997) indicate that slow re-speciation affects the photodegradation of metal–EDTA complexes. Brooks (2005) suggests that equilibrium speciation models are insufficient to understand chelate systems because of slow kinetics, which are clearly observed through direct speciation measurement using CE. When the LEA is not valid, kinetically controlled speciation reactions must be integrated into thermodynamically based speciation models. This has been done in the work of Willett and Rittmann (2003), who modify the CCBATCH model developed by VanBriesen and Rittmann (1999). Briefly, their kinetically controlled speciation model removes certain species from the list of complexes available for equilibrium speciation and instead solves for their concentrations over time, along with other kinetic reactions. This requires the consideration of rate constants for the forward and reverse
reactions controlling the exchange of metals among the different chelates (e.g., ka and kb, or k1, above).
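A back-of-the-envelope sketch shows why the disjunctive route can be so slow in practice. All numbers below are assumed for illustration only (they are not taken from the studies cited above): if the freed ligand is recaptured by whichever metal reaches it first, and the capture rate constants are taken as equal, the fraction of dissociation events that are productive is simply the trace-to-total metal concentration ratio.

```python
import math

# Hypothetical illustration of kinetically hindered disjunctive exchange.
# Ca-EDTA dissociates with rate constant k1; the freed EDTA is recaptured
# by Ca2+ or complexed by a trace metal (Cu2+). With equal assumed capture
# rate constants, the productive fraction is just a concentration ratio.
CA_FREE = 1.0e-3   # abundant major cation, M (assumed)
CU_FREE = 1.0e-8   # trace metal, M (assumed)
K1 = 1.0e-6        # disjunctive dissociation rate constant, 1/s (assumed)

def effective_exchange_rate(k1, trace, major):
    """Pseudo-first-order rate constant for net trace-metal chelate formation."""
    productive_fraction = trace / (trace + major)
    return k1 * productive_fraction

k_eff = effective_exchange_rate(K1, CU_FREE, CA_FREE)
t_half_days = math.log(2.0) / k_eff / 86400.0

print(f"productive fraction of dissociation events: {CU_FREE / (CU_FREE + CA_FREE):.1e}")
print(f"effective exchange half-life: {t_half_days:.2e} days")
```

Although k1 alone corresponds to a chelate half-life of roughly a week, the effective half-life for net exchange to the trace metal runs to hundreds of thousands of days under these assumptions, which is why an equilibrium model can badly overpredict how quickly the thermodynamically favoured complex appears.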
4.2.2 Equilibrium constant values
Equilibrium constants are determined by measurement of the relevant concentrations of the species under differing experimental conditions. Concentrations of species can be measured in multiple ways, including potentiometrically, spectrophotometrically, and through NMR chemical shift measurement. Several papers report the use of these analytical techniques for the determination of EDTA equilibrium constants under various conditions (Hering and Morel, 1988; Jawaid, 1978; Xue and Sigg, 1994). Equilibrium constants are determined at specific conditions of ionic strength and temperature, and the use of these values in modelling requires adjustment to the conditions in the system being modelled. These adjustments, as well as the differences in conditions and different methods for determination, can lead to uncertainty in chemical speciation constants. Several databases exist that report experimental results for speciation constants, including the International Union of Pure and Applied Chemistry Critical Database (IUPAC, 2005), the National Institute of Standards and Technology (NIST) standard reference database (Martell et al., 2003) and the JESS thermodynamic database (May, 2000; May and Murray, 2001). Chelate-specific values were recently reviewed by Popov and Wanner (2005). These databases include experimental results, conditions of the experiments, and methods to adjust values for differing conditions, which is critical for real-time updating of constants within speciation models. Most mainstream aqueous speciation programs, such as MINTEQA2, MINEQL+ and PHREEQC, come supplied with a standard thermodynamic database, which includes reaction constants, enthalpy values, and occasionally Debye–Hückel or specific interaction theory parameters for ionic strength correction (MINEQL). Unfortunately, many of these databases have been shown to contain significant errors (Serkiz et al., 1996).
Generally these speciation program databases offer only a single value for the overall formation reaction of each aqueous component, which has generally been calculated from accepted or so-called "critical" values in broader databases of thermodynamic constants such as the IUPAC Critical Database, the NIST standard reference database, or any of a number of other critical compilations of thermodynamic data (IUPAC, 2005; Martell et al., 2003; May, 2000; May and Murray, 2001). These broader thermodynamic sources have been reviewed recently for chelate-associated thermodynamic data (Popov and Wanner, 2005). Use of these values for speciation calculations can lead to significant uncertainty. Several authors have investigated aspects of these uncertainties (Criscenti et al., 1996), including examinations of the Fe(II)–Fe(III)–SO₄ aqueous system (Stipp, 1990), actinide speciation in groundwater (Ekberg and Lunden-Buro, 1997), and U(VI) aqueous speciation (Denison and Garnier-Laplace, 2005). These studies have often relied on several assumptions, such as a fixed equation for ionic strength extrapolation (Denison and Garnier-Laplace, 2005; Haworth et al., 1998; Odegaard-Jensen et al., 2004) and the assumption of statistical independence of model parameters (Schecher
and Driscoll, 1987). We agree with Denison and Garnier-Laplace (2005) that the latter assumption of independence rarely holds, and recently we developed a new stochastic method specifically aimed at resolving these difficulties (Weber et al., 2006).
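As an illustration of the ionic-strength extrapolation discussed above, the sketch below applies the Davies equation, one common choice among the correction models (extended Debye–Hückel and specific interaction theory are alternatives, and the choice itself contributes model uncertainty). The constant A = 0.509 applies to water at 25 °C; the example reaction and its log β° value are taken from Table 4.1.

```python
import math

# Ionic-strength correction using the Davies equation (one common choice;
# Debye-Huckel or SIT extrapolations would be alternatives). A = 0.509
# applies to water at 25 degrees C.
A_DEBYE = 0.509

def davies_log_gamma(z, ionic_strength):
    """log10 activity coefficient of an ion of charge z (Davies equation)."""
    sqrt_i = math.sqrt(ionic_strength)
    return -A_DEBYE * z ** 2 * (sqrt_i / (1.0 + sqrt_i) - 0.3 * ionic_strength)

def conditional_log_k(log_k0, z_reactants, z_product, ionic_strength):
    """Concentration-based log K at a given ionic strength, from the
    infinite-dilution log K0 and the charges of the reaction participants."""
    correction = (sum(davies_log_gamma(z, ionic_strength) for z in z_reactants)
                  - davies_log_gamma(z_product, ionic_strength))
    return log_k0 + correction

if __name__ == "__main__":
    # Ca2+ + EDTA4- -> Ca-EDTA2-, log K0 = 12.38 (Table 4.1)
    for i in (0.0, 0.01, 0.1, 0.5):
        log_k = conditional_log_k(12.38, (2, -4), -2, i)
        print(f"I = {i:4.2f} M  ->  log K = {log_k:.2f}")
```

At I = 0.1 M the conditional constant for Ca-EDTA²⁻ formation drops by about 1.7 log units relative to infinite dilution, a shift comparable to or larger than many reported discrepancies in log β° itself, which is why the extrapolation model chosen matters.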
4.3 ANALYTICAL METHODS
As mentioned above, there are two ways in which speciation is determined: laboratory analysis provides a direct measure of different forms of a target chemical, whereas chemical modelling applies known thermodynamic relationships among chemical forms to predict the overall equilibrium distribution. Often, a combination is used when chemical analysis can only determine certain forms, and modelling is used to predict the other forms relative to the measured form. For example, metal ion concentrations can be measured in the laboratory by a variety of techniques, including graphite furnace atomic absorption spectroscopy (GFAAS), inductively coupled plasma mass spectroscopy (ICP-MS), atomic emission spectrometry (AES), X-ray spectroscopy, and neutron activation analysis. Most existing and novel techniques alike are focused on the detection of free metal ion concentrations. Liquid membranes have been used for the separation of free metal ions, including Cd(II), Pb(II) and Cu(II) (de Gyves and San Miguel, 1999; Parthasarathy and Buffle, 1994; Parthasarathy et al., 1997). There are also fluorescence-based methods (Thompson et al., 1996, 1999; Zeng et al., 2003) and, more recently, electroanalytical techniques that use microelectrodes to measure the free metal ion concentration (Galceran et al., 2004, 2007; Huidobro et al., 2007). Total metal analytical methods include the use of electrochemical flow sensors or probes in situ to measure total metal concentrations (Taillefert et al., 2000; Wang et al., 1995, 1998). Direct speciation measurement (beyond free ion or total metal) often requires combinations of complex analytical techniques. Certain speciation analyses include liquid chromatography (LC) with ICP-MS (Rottmann and Heumann, 1994) and AES with both high-performance liquid chromatography (HPLC) (Robards and Starr, 1991; Sanzmedel, 1995) and ICP (Sanzmedel, 1991; Xue et al., 1995).
Additionally, ICP-MS can be coupled with gas chromatography (GC) for the determination of metal concentrations (Sutton et al., 1997). CE is a relatively new analytical method that allows the direct measurement of the chemical speciation of metals. CE is an advanced analytical technique that separates species based on their mobility under an applied electric field. The CE system consists of an electrolyte solution, power supply, capillary, electrodes, and detector (typically UV–visible for environmental analyses). A sample is introduced by hydrodynamic or hydrostatic pressure into the capillary, which is filled with electrolyte solution. After the sample has been introduced, the capillary ends and electrodes are submerged in the electrolyte solution. An electric field is applied across the electrodes, and the fluid migrates from one end of the capillary to the other. The flow of electrolyte solution through the capillary allows the complexes to separate based on their charge and size. The composition of the electrolyte solution (ionic strength, pH) affects the migration time and peak shape of the complexes (Beckers and Bocek, 2003; Reijenga et al., 1996). The most common
method of separating metal ions by CE involves the conversion of these ions into stable, negatively charged complexes prior to introduction into the column (Dabek-Zlotorzynska et al., 1998; Timerbaev, 2004). Various derivatising reagents have been used for the speciation of metals, including both organic and inorganic ligands. Organic ligands have been used extensively for the separation of metal complexes, and a number of recently published papers use aminopolycarboxylates because they form stable, water-soluble complexes with many metal ions (Burgisser and Stone, 1997; Padarauskas and Schwedt, 1997; Pozdniakova and Padarauskas, 1998; Owens et al., 2000; Carbonaro and Stone, 2005). CE offers significant potential for direct chemical speciation measurement; however, it is still a developing technique, and is not widely used in assessing environmental concentrations of metals.
4.4 EXPERIMENTAL AND MODELLING METHODS
In order to consider how chemical speciation determination is affected by uncertainty in analytical methods and chemical speciation models, we undertook a series of modelling and experimental studies.
4.4.1 Modelling methods
A statistical model was constructed using Markov chain Monte Carlo (MCMC) with Gibbs sampling (Weber et al., 2006). The method is summarised here, and the reader is referred to Weber et al. (2006) for additional details. Using the speciation system described above, including 10 metals and their EDTA chelates (Cu²⁺, Zn²⁺, Ni²⁺, Ca²⁺, Mn²⁺, Mg²⁺, Pb²⁺, Co²⁺, Al³⁺ and Fe³⁺) in addition to the protonated EDTA form, H-EDTA³⁻, we simultaneously fit the 27 thermodynamic parameters that control this system. For each aqueous species, a nonlinear regression of the parameters (log β°j, bj) was performed simultaneously on the available data. It was necessary to regress each of the species simultaneously to account for the correlation between the regression parameters for each species (correlation between log β°j and bj), as well as the correlation between the regressed parameters for one species and each of the other species' regressed values (i.e., correlation between log β°j and log β°k, where j and k are two different aqueous species). The model treated the first kind of correlation directly through bivariate distributions, whereas the second kind of correlation was treated indirectly by regressing each primary constant before regressing the secondary constants. Modelling this correlation, which has not been done in previous studies, is imperative in order to avoid creating highly unlikely (very low values of the joint likelihood function) sets of either the two regression parameters for a specific species, or of two different log β° values for two different species. WinBUGS 1.4 was utilised for the MCMC coding. The standard errors of each of the sets of regression parameters were assumed to follow a bivariate normal distribution with a standard Wishart prior distribution on the precision matrix. Pseudo-informationless priors were utilised for all regression parameters (Gelman, 2004; Goyal and Small, 2005). Visual MINTEQ, a Windows version of the US EPA code MINTEQA2, was used for the chemical speciation calculations. An example aqueous speciation was examined for a simulated natural river water, using data from the US EPA's STORET database for Allegheny County, Pennsylvania, USA (US EPA, 2003).
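The consequence of formation-constant uncertainty for a speciation prediction can be illustrated with a much simpler calculation than the full correlated MCMC scheme: plain Monte Carlo sampling of two constants in a reduced two-metal competition problem. The standard deviations and the independence of the sampled errors are illustrative assumptions only (the parameter correlations stressed above are deliberately ignored here), and the free metal concentrations are assumed.

```python
import random

# Monte Carlo sketch of how uncertainty in formation constants propagates
# into a speciation prediction. Reduced problem: Ca2+ and Zn2+, both in
# excess of EDTA, compete for the ligand, so (neglecting free and
# protonated EDTA, a good approximation under these conditions) the
# fraction of EDTA present as Zn-EDTA2- is
#     f = bZn*[Zn] / (bZn*[Zn] + bCa*[Ca]).
# The log beta means are Table 4.1 values; the standard deviations and the
# independent normal errors are illustrative assumptions only.
random.seed(1)

CA_FREE, ZN_FREE = 1.0e-3, 1.0e-8   # free metal concentrations, M (assumed)
N = 20_000

fractions = []
for _ in range(N):
    b_ca = 10.0 ** random.gauss(12.38, 0.2)   # Ca-EDTA2-
    b_zn = 10.0 ** random.gauss(18.15, 0.3)   # Zn-EDTA2-
    fractions.append(b_zn * ZN_FREE / (b_zn * ZN_FREE + b_ca * CA_FREE))
fractions.sort()

print(f"median fraction of EDTA as Zn-EDTA2-: {fractions[N // 2]:.2f}")
print(f"90% interval: {fractions[int(0.05 * N)]:.2f} to {fractions[int(0.95 * N)]:.2f}")
```

Even a few tenths of a log unit of uncertainty in the two constants turns a single predicted fraction into a wide interval; the MCMC analysis quantifies this same sensitivity while also honouring the correlation structure of the parameters.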
4.4.2 Experimental methods
All solutions were prepared using distilled, deionised water (DDW) with a resistivity of 18 MΩ cm. All stock solutions were stored in polypropylene bottles prior to use. All glassware and plasticware was soaked in 5 M HNO₃ (Fisher Scientific) and rinsed several times with DDW prior to use. Ethylenediaminetetraacetic acid (H₄-EDTA), FeCl₃·6H₂O, NiCl₂, Na₂HPO₄, NaH₂PO₄, 3-(N-morpholino)propanesulfonic acid (MOPS) and tetradecyltrimethylammonium bromide (TTAB) were obtained from Fisher Scientific; 5.0 mM stock solutions of FeCl₃·6H₂O and NiCl₂ were prepared by weighing the crystals on a balance and dissolving them in DDW. The experimental solutions were prepared by adding the metals to an EDTA solution buffered in MOPS (pH = 7.1). All separations were conducted using an Agilent CE instrument equipped with a diode-array UV–visible detector. Bare fused-silica capillaries of 75 μm i.d. were used for all separations (Agilent). Capillaries were treated between separations using a rinse of 0.1 M NaOH for 1 min, followed by a water rinse for 1 min and then a rinse of background electrolyte (BGE) for 1 min. Samples were injected under positive pressure for 10 s at 34.5 mbar. Capillaries were 58.0 cm in total length. All separations were conducted in constant-voltage mode at 25 kV. TTAB (0.5 mM) was used as an electro-osmotic flow (EOF) modifier. Detection at 192, 214 and 254 nm was employed, with the wavelength selected during method optimisation. The BGE used was phosphate (25 mM, pH = 7.1).
4.5 RESULTS AND DISCUSSION
In general, the modelling results show that uncertainty in thermodynamic constants affects the predicted speciation in chelate systems, and the experimental results show that re-speciation of chelates is slow. The results suggest that chemical equilibrium models can be improved through a consideration of uncertainty in model parameters, and that kinetic models can be improved through the experimental determination of the rates of kinetic exchange reactions.
4.5.1 Thermodynamic uncertainty in chemical speciation modelling
Results from our initial studies indicated that the uncertainty for thermodynamic values is much greater around the very dilute range and the more concentrated range, where data for the thermodynamic constants are comparatively sparse.
MODELLING CHEMICAL SPECIATION: THERMODYNAMICS, KINETICS AND UNCERTAINTY
143
This fact underscores the advantage of using a stochastic correction method, as calculations of stability constants at conditions far from laboratory conditions (such as the relatively small ionic strengths of natural waters) will tend to be much more uncertain than calculations performed at conditions close to laboratory conditions. Further, the results of the stochastic correction function give the user a much better idea of the thermodynamic uncertainty of the model at all levels of ionic strength. The use of all available measured values for different ionic strengths, different media, different temperatures and different reaction formulations provides much more information than a one-value model. Stochastic chemical speciation diagrams are shown in Figure 4.1 for the simulated natural river water without precipitation (a) and with precipitation (b). Also shown are the 90% credible intervals as a function of pH for both cases ((c) and (d), respectively). Each of the simulations was run through a typical natural pH range of 5–9. The major conclusion from these diagrams is that the overall chemical system uncertainty is directly related to the system's competitiveness. Minimal uncertainty is predicted in conditions where one species is dominant over all others, such as Fe-EDTA at low pH in the no-solids case. Mathematically, this result is caused by the simultaneous sampling of several marginal distributions of chemical importance. When one species is dominant, only uncertainty in its own stability constant is relevant, but when many species are close in stability, uncertainty increases substantially. The implication for policy and risk analysis is that it is probably defensible to use deterministic speciation modelling when chemical systems will be dominated by one species. In most cases, however, this is unlikely to be true, and thermodynamic uncertainty should be included in the analysis.
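The competitiveness effect can be reproduced with a toy Monte Carlo: two hypothetical metals compete for EDTA, and the width of the 90% credible interval of the predicted speciation is compared for a dominant and a competitive case. All stability constants, concentrations and uncertainties below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def m1_fraction(logK1, logK2, m1=1e-6, m2=1e-6):
    """Fraction of EDTA bound to metal 1 when two metals, at fixed free
    concentrations m1 and m2, compete for the ligand:
    K1*m1 / (K1*m1 + K2*m2)."""
    k1, k2 = 10.0 ** logK1 * m1, 10.0 ** logK2 * m2
    return k1 / (k1 + k2)

sd = 0.3      # assumed uncertainty (log units) in each stability constant
n = 50_000

# One dominant species (log K1 >> log K2): the prediction barely moves.
dominant = m1_fraction(rng.normal(18, sd, n), rng.normal(14, sd, n))
# Competitive species (log K1 ~ log K2): the prediction is highly uncertain.
competitive = m1_fraction(rng.normal(16, sd, n), rng.normal(16, sd, n))

def ci90_width(x):
    lo, hi = np.percentile(x, [5, 95])
    return hi - lo

print(ci90_width(dominant) < ci90_width(competitive))  # True
```

The same parameter uncertainty (0.3 log units on each constant) produces a negligible credible interval when one species dominates, and a wide one when the species are close in stability, mirroring the behaviour seen in Figure 4.1.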
4.5.2 Kinetic uncertainty in chemical speciation modelling
The dominant form of uncertainty in chemical speciation modelling involving kinetic exchange reactions is the uncertainty in the determination of the relevant kinetic constants. Here our work focused on the kinetics of iron and nickel exchange reactions. Figure 4.2 shows: (a) the slow re-speciation from the initial Fe-EDTA complex to the more stable Ni-EDTA complex over the 280 h of the experiment; and (b) the slow re-speciation from the initial Ni-EDTA complex to Fe-EDTA over the 325 h of the experiment. For the initial Fe-EDTA system (Figure 4.2a), the Fe-EDTA is exchanging with the Ni(II) in the solution, and the Fe(III) is precipitating out of the system, as evidenced by a yellow/orange tint in the solution after 200 h. For the initial Ni-EDTA system (Figure 4.2b), the exchange with Fe(III) is also slow, and a yellow/orange tint also developed in the solution, again indicating the precipitation of Fe(III). If the decrease in Fe-EDTA is treated as a pseudo-first-order reaction, then a first-order reaction rate can be calculated from this dataset by following the equations of Xue et al. (1995). At pH = 6.7, the first-order rate constant for Fe-EDTA loss is k1,obs = 2.7 × 10⁻³ h⁻¹. Similarly, the first-order rate constant for Ni-EDTA loss can be calculated as k2,obs = 1.9 × 10⁻³ h⁻¹. The first-order rate constant for the loss of Fe-EDTA is greater than that for the loss of Ni-EDTA. Ni-EDTA is a stronger complex than Fe-EDTA,
Figure 4.1: Left: speciation diagram (a) and 90% credible intervals (c) for simulated natural water with no precipitation. Right: speciation diagram (b) and 90% credible intervals (d) for simulated natural water allowing precipitation. [Panels (a) and (b) plot per cent of TotEDTA against pH (5–9); panels (c) and (d) plot the 90% credible interval against pH. The species shown include Fe-EDTA⁻, FeOH-EDTA²⁻, Ni-EDTA²⁻, Cu-EDTA²⁻, Zn-EDTA²⁻, Al-EDTA⁻ and Pb-EDTA²⁻.]
Source: Reprinted with permission from Weber, C.L., VanBriesen, J.M. and Small, M.S. (2006). A stochastic regression approach to analyzing thermodynamic uncertainty in chemical speciation modeling. Environmental Science & Technology. 40: 3872–3878. Copyright 2006 American Chemical Society.
Figure 4.2: Capillary electrophoresis experiment to determine the kinetic exchange constant for the Fe-EDTA–Ni-EDTA reaction: (a) 500 μM Ni(II) added to 500 μM Fe(III)-EDTA buffered in MOPS (pH = 6.7); (b) 500 μM Fe(III) added to 500 μM Ni(II)-EDTA buffered in MOPS (pH = 6.7). [Both panels plot concentration (μM) against time (h) for Fe-EDTA, Ni-EDTA and total M-EDTA.]
and thus is less likely to dissociate – hence the slower reaction rate. Additionally, the Fe(III) precipitates out of the system, and it is likely that the free EDTA will either sorb onto the Fe(III) precipitate, or recomplex with the Ni(II), given additional time. These results indicate that the slow exchange kinetics in certain metal–ligand systems can contribute to the uncertainty in chemical speciation. Xue et al. (1995) calculated a kinetic exchange constant for Fe-EDTA with Zn(II). Following the work of Xue et al. (1995), a second-order rate constant can be calculated from the overall equation for the two reactions in the experimental systems described above:
Fe-EDTA + Ni(II) → Fe(III) + Ni-EDTA   (a)

Ni-EDTA + Fe(III) → Ni(II) + Fe-EDTA   (b)
The rate law then expresses the rate in terms of the concentrations of the reactants, Ni(II) and Fe-EDTA:

−d[Ni(II)]/dt = d[Ni-EDTA]/dt = kobs[Fe-EDTA][Ni(II)]   (a)

−d[Fe(III)]/dt = d[Fe-EDTA]/dt = kobs[Ni-EDTA][Fe(III)]   (b)
Figure 4.3: Second-order rate function of: (a) Ni–Fe-EDTA exchange, k1,obs = 0.0117 M⁻¹ h⁻¹ = 3.25 × 10⁻⁶ M⁻¹ s⁻¹; (b) Fe–Ni-EDTA exchange, k2,obs = 0.0072 M⁻¹ h⁻¹ = 2.0 × 10⁻⁶ M⁻¹ s⁻¹. [Panel (a) plots ln([Ni(II)]₀[Fe-EDTA]ₜ/[Ni(II)]ₜ[Fe-EDTA]₀) and panel (b) plots ln([Fe(III)]₀[Ni-EDTA]ₜ/[Fe(III)]ₜ[Ni-EDTA]₀) against time (h).]
The kobs for system (a) was then calculated by taking the slope of ln([Ni(II)]₀[Fe-EDTA]ₜ/[Ni(II)]ₜ[Fe-EDTA]₀) against time, and the kobs for system (b) was calculated by taking the slope of ln([Fe(III)]₀[Ni-EDTA]ₜ/[Fe(III)]ₜ[Ni-EDTA]₀) against time (Figure 4.3). The observed second-order rate constant for Ni(II) exchange with Fe-EDTA is calculated as 3.25 × 10⁻⁶ M⁻¹ s⁻¹, and the observed second-order rate constant for Fe(III) exchange with Ni-EDTA is calculated as 2.0 × 10⁻⁶ M⁻¹ s⁻¹.
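Both the pseudo-first-order and the second-order constants come from straight-line fits of a logarithmic concentration ratio against time. The sketch below reproduces the two fits on synthetic concentration series generated from the quoted rate constants (not the measured CE data); the second-order series uses unequal, hypothetical initial concentrations so that the integrated rate expression is non-degenerate:

```python
import numpy as np

# --- Pseudo-first-order fit: the slope of ln(C/C0) against t is -k_obs ---
t1 = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])   # time, h
fe_edta = 500e-6 * np.exp(-2.7e-3 * t1)                   # M (synthetic decay)
k_first = -np.polyfit(t1, np.log(fe_edta / fe_edta[0]), 1)[0]

# --- Second-order fit: for rate = k[Fe-EDTA][Ni(II)], the slope of
# ln([Ni(II)]0 [Fe-EDTA]_t / ([Ni(II)]_t [Fe-EDTA]0)) against t equals
# ([Fe-EDTA]0 - [Ni(II)]0) * k, so k is recovered by dividing the slope.
k_true = 0.0117                       # M^-1 h^-1 (quoted value)
a0, b0 = 600e-6, 400e-6               # hypothetical [Ni(II)]0, [Fe-EDTA]0, M
t2 = np.linspace(0.0, 250.0, 6)       # h
E = np.exp((a0 - b0) * k_true * t2)
x = a0 * b0 * (E - 1.0) / (a0 * E - b0)   # closed-form extent of reaction
ni, fe = a0 - x, b0 - x
slope = np.polyfit(t2, np.log(a0 * fe / (ni * b0)), 1)[0]
k_second = slope / (b0 - a0)          # recovers k_true
```

The closed-form extent of reaction is the standard integrated solution for A + B → products with unequal initial concentrations; regressing the logarithmic ratio on time then returns the generating rate constants.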
4.6 CONCLUSIONS
Chemical speciation is critical for understanding the form of the chemicals of interest in natural systems. It is crucial in assessing bioavailability and, ultimately, fate, transport and risk to humans and ecosystems. Chemical speciation is generally determined through analytical methods that measure free metal ion or total metal concentration, used in conjunction with thermodynamically based models. As discussed above, these models rely on the local equilibrium assumption, and on experimentally determined reaction constants. However, in chemical systems where the LEA is not valid (e.g., where speciation reactions are slow), or where the equilibrium constants are not certain, predictions from chemical speciation models show significant uncertainty. Our results demonstrate that uncertainty in equilibrium constant values can be propagated into model predictions for speciation, and, further, that analytical methods based on CE can be used to evaluate the kinetic constants for slow reactions. Both sources of uncertainty (model uncertainty due to the LEA and parameter uncertainty in thermodynamic and kinetic constants) are clearly important for determining speciation. As far as possible, uncertainty analysis should be included in future risk assessments where speciation is important.
ENDNOTES
1. MINEQL: A computer program for the calculation of the chemical equilibrium composition of aqueous systems. Department of Civil Engineering, MIT, Cambridge, MA.
2. WinBUGS ver. 1.4. MRC Biostatistics Unit, Cambridge, UK.
3. Visual MINTEQ ver. 2.32. Royal Institute of Technology (KTH), Stockholm, Sweden.
REFERENCES
Beale, S.C. (1998). Capillary electrophoresis. Analytical Chemistry. 70: 279R–300R.
Beckers, J.L. and Bocek, P. (2003). The preparation of background electrolytes in capillary zone electrophoresis: golden rules and pitfalls. Electrophoresis. 24: 518–535.
Brooks, S.C. (2005). Analysis of metal-chelating agent complexes by capillary electrophoresis. In Nowack, B. and VanBriesen, J.M. (Eds), Biogeochemistry of Chelating Agents. American Chemical Society, New York, pp. 121–138.
Burgisser, C.S. and Stone, A.T. (1997). Determination of EDTA, NTA, and other amino carboxylic
acids and their Co(II) and Co(III) complexes by capillary electrophoresis. Environmental Science & Technology. 31: 2656–2664.
Carbonaro, R.F. and Stone, A.T. (2005). Speciation of chromium(III) and cobalt(III) (amino)carboxylate complexes using capillary electrophoresis. Analytical Chemistry. 77: 155–164.
Criscenti, L.J., Laniak, G.F. and Erikson, R.L. (1996). Propagation of uncertainty through geochemical code calculations. Geochimica et Cosmochimica Acta. 60: 3551–3568.
Dabek-Zlotorzynska, E., Lai, E.P.C. and Timerbaev, A.R. (1998). Capillary electrophoresis: the state-of-the-art in metal speciation studies. Analytica Chimica Acta. 359: 1–26.
de Gyves, J. and San Miguel, E.R. (1999). Metal ion separations by supported liquid membranes. Industrial & Engineering Chemistry Research. 38: 2182–2202.
Denison, F.H. and Garnier-LaPlace, J. (2005). The effects of database parameter uncertainty on uranium(VI) equilibrium calculations. Geochimica et Cosmochimica Acta. 69: 2183–2191.
Ekberg, C. and LundenBuro, I. (1997). Uncertainty analysis for some actinides under groundwater conditions. Journal of Statistical Computation and Simulation. 57: 271–284.
Finkel, A.E. (1990). Confronting Uncertainty in Risk Management: A Guide for Decision-Makers. RFF Press, Washington, DC.
Fytianos, K. (2001). Speciation analysis of heavy metals in natural waters: a review. Journal of AOAC International. 84: 1763–1769.
Galceran, J., Companys, E., Puy, J., Cecilia, J. and Garces, J.L. (2004). AGNES: a new electroanalytical technique for measuring free metal ion concentration. Journal of Electroanalytical Chemistry. 566: 95–109.
Galceran, J., Huidobro, C., Companys, E. and Alberti, G. (2007). AGNES: a technique for determining the concentration of free metal ions. The case of Zn(II) in coastal Mediterranean seawater. Talanta. 71: 1795–1803.
Gelman, A. (2004). Bayesian Data Analysis. Chapman & Hall/CRC, Boca Raton, FL.
Goyal, A. and Small, M.J. (2005). Estimation of fugitive lead emission rates from secondary lead facilities using hierarchical Bayesian models. Environmental Science & Technology. 39: 4929–4937.
Haumann, I. and Bachmann, K. (1995). On-column chelation of metal-ions in capillary zone electrophoresis. Journal of Chromatography A. 717: 385–391.
Haworth, A., Thompson, A.M. and Tweed, C.J. (1998). Use of the HARPROB program to evaluate the effect of parameter uncertainty on chemical modelling predictions. Radiochimica Acta. 82: 429–433.
Hering, J.G. and Morel, F.M.M. (1988). Kinetics of trace-metal complexation: role of alkaline-earth metals. Environmental Science & Technology. 22: 1469–1478.
Hertwich, E.G., McKone, T.E. and Pease, W.S. (1999). Parameter uncertainty and variability in evaluative fate and exposure models. Risk Analysis. 19: 1193–1204.
Huidobro, C., Puy, J., Galceran, J. and Pinheiro, J.P. (2007). The use of microelectrodes with AGNES. Journal of Electroanalytical Chemistry. 606: 134–140.
IUPAC (2005). Stability constants database and mini-SCDatabase. IUPAC, London.
Jawaid, M. (1978). Potentiometric studies of complex-formation between methylmercury(II) and EDTA. Talanta. 25: 215–220.
Kagaya, S., Bitoh, Y. and Hasegawa, K. (1997). Photocatalyzed degradation of metal-EDTA complexes in TiO₂ aqueous suspensions and simultaneous metal removal. Chemistry Letters. 2: 155–156.
Martell, A.E., Smith, R.M. and Motekaitis, R.J. (2003). NIST Critically Selected Stability Constants of Metal Complexes Database. National Institute of Standards and Technology, Gaithersburg, MD.
May, P.M. (2000). A simple, general and robust function for equilibria in aqueous electrolyte solutions to high ionic strength and temperature. Chemical Communications. 14: 1265–1266.
May, P.M. and Murray, K. (2001). Database of chemical reactions designed to achieve thermodynamic consistency automatically. Journal of Chemical and Engineering Data. 46: 1035–1040.
Moldovan, M., Krupp, E.V., Holliday, A.E. and Donard, O. (2004). High resolution sector field ICP-
MS and multicollector ICP-MS as tools for trace metal speciation in environmental studies: a review. Journal of Analytical Atomic Spectrometry. 19: 815–822.
Morel, F. and Morgan, J. (1972). Numerical method for computing equilibria in aqueous chemical systems. Environmental Science & Technology. 6: 58–63.
Nitzsche, O., Meinrath, G. and Merkel, B. (2000). Database uncertainty as a limiting factor in reactive transport prognosis. Journal of Contaminant Hydrology. 44: 223–237.
Nowack, B., Xue, H.B. and Sigg, L. (1997). Influence of natural and anthropogenic ligands on metal transport during infiltration of river water to groundwater. Environmental Science & Technology. 31: 866–872.
Odegaard-Jensen, A., Ekberg, C. and Meinrath, G. (2004). LJUNGSKILE: a program for assessing uncertainties in speciation calculations. Talanta. 63: 907–916.
Owens, G., Ferguson, V.K., McLaughlin, M.J., Singleton, I., Reid, R.J. and Smith, F.A. (2000). Determination of NTA and EDTA and speciation of their metal complexes in aqueous solution by capillary electrophoresis. Environmental Science & Technology. 34: 885–891.
Padarauskas, A. and Schwedt, G. (1997). Capillary electrophoresis in metal analysis: investigations of multi-elemental separation of metal chelates with aminopolycarboxylic acids. Journal of Chromatography A. 773: 351–360.
Parthasarathy, N. and Buffle, J. (1994). Capabilities of supported liquid membranes for metal speciation in natural waters: application to copper speciation. Analytica Chimica Acta. 284: 649–659.
Parthasarathy, N., Pelletier, M. and Buffle, J. (1997). Hollow fiber based supported liquid membrane: a novel analytical system for trace metal analysis. Analytica Chimica Acta. 350: 183–195.
Pohlmeier, A. and Knoche, W. (1996). Kinetics of the complexation of Al³⁺ with aminoacids, IDA and NTA. International Journal of Chemical Kinetics. 28: 125–136.
Popov, K.I. and Wanner, H. (2005). Stability constants data sources: critical evaluation and application for environmental speciation. In: VanBriesen, J. and Nowack, B. (Eds), Biogeochemistry of Chelating Agents. American Chemical Society, New York, NY, pp. 50–75.
Pozdniakova, S. and Padarauskas, A. (1998). Speciation of metals in different oxidation states by capillary electrophoresis using pre-capillary complexation with complexones. Analyst. 123: 1497–1500.
Reijenga, J.C., Verheggen, T., Martens, J. and Everaerts, F.M. (1996). Buffer capacity, ionic strength and heat dissipation in capillary electrophoresis. Journal of Chromatography A. 744: 147–153.
Robards, K. and Starr, P. (1991). Metal determination and metal speciation by liquid-chromatography: a review. Analyst. 116: 1247–1273.
Rottmann, L. and Heumann, K.G. (1994). Determination of heavy-metal interactions with dissolved organic materials in natural aquatic systems by coupling a high-performance liquid-chromatography system with an inductively-coupled plasma-mass spectrometer. Analytical Chemistry. 66: 3709–3715.
Rubin, J. (1983). Transport of reacting solutes in porous-media: relation between mathematical nature of problem formulation and chemical nature of reactions. Water Resources Research. 19: 1231–1252.
Salaun, P., Guenat, O., Berdondini, L., Buffle, J. and Koudelka-Hep, M. (2003). Voltammetric microsystem for trace elements monitoring. Analytical Letters. 36: 1835–1849.
Sanzmedel, A. (1991). Inductively coupled plasma-atomic emission-spectrometry: analytical assessment of the technique at the beginning of the 90s. Mikrochimica Acta. 2: 265–275.
Sanzmedel, A. (1995). Beyond total element analysis of biological-systems with atomic spectrometric techniques. Analyst. 120: 583–763.
Satroutdinov, A.D., Dedyukhina, E.G., Chistyakova, T.I., Witschel, M., Minkevich, I.G., Eroshin, V.K. and Egli, T. (2000). Degradation of metal-EDTA complexes by resting cells of the bacterial strain DSM 9103. Environmental Science & Technology. 34: 1715–1720.
Schecher, W.D. and Driscoll, C.T. (1987). An evaluation of uncertainty associated with aluminum equilibrium calculations. Water Resources Research. 23: 525–534.
Serkiz, S.M., Allison, J.D., Perdue, E.M., Allen, H.E. and Brown, D.S. (1996). Correcting errors in
the thermodynamic database for the equilibrium speciation model MINTEQA2. Water Research. 30: 1930–1933.
Stipp, S.L. (1990). Speciation in the Fe(II)–Fe(III)–SO₄–H₂O system at 25°C and low pH: sensitivity of an equilibrium model to uncertainties. Environmental Science & Technology. 24: 699–706.
Sutton, K., Sutton, R.M.C. and Caruso, J.A. (1997). Inductively coupled plasma mass spectrometric detection for chromatography and capillary electrophoresis. Journal of Chromatography A. 789: 85–126.
Taillefert, M., Luther, G.W. and Nuzzio, D.B. (2000). The application of electrochemical tools for in situ measurements in aquatic systems. Electroanalysis. 12: 401–412.
Thompson, R.B., Ge, Z., Patchan, M., Huang, C.C. and Fierke, C.A. (1996). Fiber optic biosensor for Co(II) and Cu(II) based on fluorescence energy transfer with an enzyme transducer. Biosensors and Bioelectronics. 11: 557–564.
Thompson, R.B., Maliwal, B.P. and Fierke, C.A. (1999). Selectivity and sensitivity of fluorescence lifetime-based metal ion biosensing using a carbonic anhydrase transducer. Analytical Biochemistry. 267: 185–195.
Timerbaev, A.R. (2004). Capillary electrophoresis of inorganic ions: an update. Electrophoresis. 25: 4008–4031.
US EPA (2003). EPA STORET Database Access. 03/09/2006 [cited 17 March 2009]. Available from: http://www.epa.gov/storet/dbtop.html
VanBriesen, J.M. and Rittmann, B.E. (1999). Modeling speciation effects on biodegradation in mixed metal/chelate systems. Biodegradation. 10: 315–330.
Wang, J., Larson, D., Foster, N., Armalis, S., Lu, J., Xu, R., Olsen, K. and Zirino, A. (1995). Remote electrochemical sensor for trace-metal contaminants. Analytical Chemistry. 67: 1481–1485.
Wang, J., Tian, B.M. and Wang, J.Y. (1998). Electrochemical flow sensor for in-situ monitoring of total metal concentrations. Analytical Communications. 35: 241–243.
Weber, C.L., VanBriesen, J.M. and Small, M.S. (2006). A stochastic regression approach to analyzing thermodynamic uncertainty in chemical speciation modeling. Environmental Science & Technology. 40: 3872–3878.
Westall, J.C., Zachary, J.L. and Morel, F.M.M. (1976). MINEQL: general algorithm for computation of chemical-equilibrium in aqueous systems. Abstracts of Papers of the American Chemical Society. 172: 8–8.
Willett, A.I. and Rittmann, B.E. (2003). Slow complexation kinetics for ferric iron and EDTA complexes make EDTA non-biodegradable. Biodegradation. 14: 105–121.
Xue, H.B. and Sigg, L. (1994). Zinc speciation in lake waters and its determination by ligand-exchange with EDTA and differential-pulse anodic-stripping voltammetry. Analytica Chimica Acta. 284: 505–515.
Xue, H.B., Sigg, L. and Kari, F.G. (1995). Speciation of EDTA in natural waters: exchange kinetics of Fe-EDTA in river water. Environmental Science & Technology. 29: 59–68.
Yan, X.P. and Ni, Z.M. (2001). Speciation of metals and metalloids in biomolecules by hyphenated techniques. Spectroscopy and Spectral Analysis. 21: 129–138.
Zeng, H.H., Thompson, R.B., Maliwal, B.P., Fones, G.R., Moffett, J.W. and Fierke, C.A. (2003). Real-time determination of picomolar free Cu(II) in seawater using a fluorescence based fiber optic biosensor. Analytical Chemistry. 75: 6807–6812.
CHAPTER 5

Multivariate Statistical Modelling of Water- and Soil-Related Environmental Compartments

Aleksander Astel

5.1 INTRODUCTION
The main goal of this chapter is to present the role of multivariate statistical approaches in the qualitative assessment of water- and soil-related environmental compartments as tools for classification, modelling and interpretation of large datasets originating from monitoring procedures. In the first part of the chapter, major multivariate statistical methods used for environmental data mining, and hence commonly referred to as environmetric techniques, will be briefly described: cluster analysis, principal component analysis, principal component regression (as a part of the source-apportioning approach), N-way principal component analysis, and self-organising maps (SOM) of Kohonen. The application of environmetric techniques as the most important and commonly applied modelling tools for water- and soil-related environmental compartments is illustrated by the following examples:
• an assessment of sustainable development rule implementation in a river ecosystem located at an international boundary area;
• multivariate classification and modelling in surface water pollution estimation; and
• exploration of the environmental impact of tsunami disasters.

5.2 CLUSTER ANALYSIS
The term cluster analysis, first used by Tryon (1939), encompasses a number of different algorithms and methods for grouping objects of similar kind into respective categories. The aim of cluster analysis (CA) is to divide data into groups that are meaningful. However, in the case of messy environmental data, CA is often applied as a starting point, called exploratory data analysis, for solving classification problems, based on unsupervised learning (Massart and Kaufman, 1983). Meaningful groups (clusters) of objects that share common features play an important role in characterising many environmental compartments. CA is based on the concept of similarity, which expresses how alike two objects are. While the term "similarity" has no unique definition, it is common to refer to all similarity measures as "distance in multifeatures space" measures, as the same function is served. A similarity measure between two objects i and i′ is a distance if

D_i′i = D_ii′ ≥ 0, where D_ii′ = 0 if x_i = x_i′   (5.1)

where x_i and x_i′ are the row vectors of the data table X with feature measurements describing objects i and i′. The goal of CA is for the objects within a group to be similar (or related) to one another and different from (or unrelated to) the objects in other groups. The greater the similarity within a group, and the greater the difference between groups, the better or more distinct the clustering. When two or more features are used to define object similarity, the one with the largest magnitude dominates. This is why prior standardisation of features becomes desirable. Summarising, CA enables the stepwise aggregation of objects based only on information found in the data that describe the objects and their mutual relationships. An important step in any clustering is to select a distance measure, which will determine how the similarity of two elements is calculated. There are a variety of different measures of inter-case distances and inter-cluster similarities and distances to use as criteria when merging nearest clusters into broader groups, or when considering the relation of an object to a cluster. This will influence the shape of the clusters, as some elements may be close to one another according to one distance and further away according to another. A few popular ways of determining how similar interval-measured objects are to each other are as follows:
• Euclidean distance: the distance between two objects x_i and x_i′, defined by Equation 5.2, where j = 1, …, J indexes the repeated feature measurements.

d(x_i, x_i′) = [ Σ_{j=1}^{J} (x_{ij} − x_{i′j})² ]^{1/2}   (5.2)

• Squared Euclidean distance removes the square root sign, and places greater emphasis on objects further apart, thus increasing the effect of outliers (Equation 5.3).

d(x_i, x_i′) = Σ_{j=1}^{J} (x_{ij} − x_{i′j})²   (5.3)

• Manhattan distance (city-block distance, block distance) is the average absolute difference across two or more dimensions that are used to define distance. The Manhattan distance is defined slightly differently from the Euclidean distance. Except for some specific cases when the Manhattan distance is equal to the Euclidean distance, it is always higher than the Euclidean distance (Equation 5.4).

d(x_i, x_i′) = Σ_{j=1}^{J} |x_{ij} − x_{i′j}|   (5.4)

• Chebychev distance is the maximum absolute difference between a pair of cases on any of two or more dimensions (features) that are being used to define distance. Pairs will be defined as different according to their difference on a single dimension, ignoring their similarity on the remaining dimensions (Equation 5.5).

d(x_i, x_i′) = max_j |x_{ij} − x_{i′j}|   (5.5)

• Mahalanobis distance takes into account that some features may be correlated, and so define roughly the same properties of the object (Equation 5.6; C is the variance–covariance matrix of the features).

d(x_i, x_i′) = [ (x_i − x_i′) C⁻¹ (x_i − x_i′)′ ]^{1/2}   (5.6)

• Minkowski distance should be applied if the object weight is increasing, relative to the dimensions in each compared object, and indicates the lowest similarity (Equation 5.7).

d(x_i, x_i′) = [ Σ_{j=1}^{J} |x_{ij} − x_{i′j}|^p ]^{1/r}   (5.7)

• Pearson correlation is based on the correlation coefficient. Since, for Pearson correlation, both high negative and high positive values indicate similarity, researchers usually select absolute values.
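In NumPy these distance measures are one-liners. The sketch below also standardises the feature columns first since, as noted above, the feature with the largest magnitude otherwise dominates; the data matrix is invented for illustration:

```python
import numpy as np

# Three objects described by J = 3 features; the third feature is on a much
# larger scale and would dominate any raw (unstandardised) distance.
X = np.array([[1.0, 4.0, 1000.0],
              [2.0, 1.0, 1100.0],
              [3.0, 2.0,  900.0]])

# Column-wise standardisation (zero mean, unit variance per feature).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
x, y = Z[0], Z[1]

euclidean = np.sqrt(np.sum((x - y) ** 2))               # Equation 5.2
squared = np.sum((x - y) ** 2)                          # Equation 5.3
manhattan = np.sum(np.abs(x - y))                       # Equation 5.4
chebychev = np.max(np.abs(x - y))                       # Equation 5.5
minkowski = np.sum(np.abs(x - y) ** 3) ** (1.0 / 3.0)   # Eq. 5.7 with p = r = 3
```

For any pair of objects the measures are ordered as Chebychev ≤ Euclidean ≤ Manhattan, which is the relationship the text describes between the Manhattan and Euclidean distances.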
There are several other related distance measures (e.g., weighted Euclidean distance, standardised Euclidean distance, cosine, customised), but there are usually specific reasons why a very sophisticated distance measure needs to be applied. In CA, one task is related to the determination of similarity between measured objects, but an equally important task is to define how objects or clusters are combined at each step of the similarity assessment procedure. One possibility for clustering objects is their hierarchical aggregation. In this case, the objects are combined according to their distances from or similarities to each other. Agglomerative and divisive methods can be distinguished within hierarchical aggregation. Divisive clustering is based on splitting the whole set of objects into individual clusters. The more frequently used agglomerative clustering method starts with single objects and gradually merges them into broader groups. With regard to distance measures, various algorithms (linkage techniques) are available to decide on the number of clusters, which result in slightly different clustering patterns. The most popular linkage algorithms include:
•
•
Nearest neighbour (single linkage): the distance between two clusters is the distance between their closest neighbouring objects. In other words, the similarity of the new group to all other groups is given by the highest similarity of either of the original objects to each other object (Equation 5.8). d ij þ d i9 j d ij d i9 j (5:8) d mj ¼ ¼ minð d ij , d i9 j Þ 2 2 where m is the new object or cluster, and i9, i, j are clustered ‘‘before’’ objects. This algorithm works well when the plotted clusters are elongated or chain-like. Moreover, the sizes of the clusters and their weight are assumed to be equal. Furthest neighbour (complete linkage): the distance between two clusters is the distance between their furthest member objects. The furthest neighbour algorithm of linkage refers only to the calculation of similarity measures after new clusters are formed, and the two clusters (or objects) with highest similarity are always joined first (Equation 5.9). d ij þ d i9 j d ij d i9 j (5:9) d mj ¼ ¼ maxð d ij , d i9 j Þ 2 2 This algorithm works well when the plotted clusters form distinct clumps (not elongated chains). Application of the procedure presented above leads to wellseparated, compact, spherical clusters. Average linkage: the distance between two clusters is the average distance between all inter-cluster pairs. There are two possible ways of calculating the average linkage algorithm, non-weighted (Equation 5.10) and weighted (Equation 5.11), according to the size of each group being compared (n). When the cluster sizes are equal to one another, the two algorithms give identical results. d ij þ d i9 j 2
(5:10)
ni n i9 d ij þ d i9 j with n ¼ n i þ n i9 n n
(5:11)
d mj ¼ d mj ¼
•
In applying the weighted average linkage algorithm, no deformation of the clusters is observed, although small cluster outliers might arise to some extent. Ward’s method is a minimum distance hierarchical method that calculates the sum of squared Euclidean distances from each case in a cluster to the mean of all variables (Equation 5.12). The cluster to be merged is the one that will increase the sum the least. Thus, this method minimises the sum of squares of any pair of clusters to be formed at a given step. d mj ¼
ni þ n j n i9 þ n j nj d ij þ d i9 j d ii9 n þ nj n þ nj n þ nj
(5:12)
MULTIVARIATE STATISTICAL MODELLING
• Centroid linkage measures the distance between clusters as the distance between their centroids. Cluster centroids are usually the means, and they change with each cluster merge (Equation 5.13):

  d_mj = (n_i/n) d_ij + (n_i′/n) d_i′j − (n_i n_i′/n²) d_ii′   (5.13)

• Median linkage is identical to centroid linkage, except that weighting is introduced into the computations to take into consideration differences in cluster sizes. It thus computes the distance between clusters using weighted centroids (Equation 5.14):

  d_mj = d_ij/2 + d_i′j/2 − d_ii′/4   (5.14)
An advantage of the median linkage algorithm is that the importance of a small cluster is preserved after aggregation with a large one.

There are a variety of additional linkage algorithms (correlation of items, binary matching, etc.), but a researcher will rarely need to apply many combinations of distance and linkage measures; comparing several approaches can, however, serve to validate a clustering pattern. In hierarchical agglomerative clustering, the graphical output of the analysis is usually a dendrogram – a tree-like graphic that indicates the linkage between the clustered objects with respect to their similarity (distance measure). Decisions about the number of statistically significant clusters can be made for different reasons. Often a fixed number of clusters is assumed; sometimes a distance measure or an allowed difference between clusters (classes) is used to evaluate the number of significant clusters. For practical reasons, Sneath's index of cluster significance is widely used. It assesses significance at two levels of the relative distance D/Dmax: (1/3)Dmax and (2/3)Dmax. Only clusters remaining compact after breaking the linkage at these two distances are considered significant and are interpreted. The algorithms for non-hierarchical clustering, by contrast, divide the studied objects into an a priori given number of clusters (determined by practical or theoretical reasons).

In principle, the dataset can be considered as a matrix consisting of rows (objects) and columns (features describing the objects). CA permits classification of both the objects and the variables. This is of great practical importance as, in environmetric studies, it is often desired to obtain information on relationships both between the sampling locations and between the monitoring parameters. The whole concept of risk assessment is actually based on the estimation of the relationship between monitored features.
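The agglomeration and the Sneath-style cuts at (1/3)Dmax and (2/3)Dmax can be sketched in Python, assuming SciPy is available; the two-blob data and the average-linkage choice are purely illustrative, not from the chapter:

```python
# Sketch: agglomerative clustering with SciPy and dendrogram cuts at
# (1/3)Dmax and (2/3)Dmax in the spirit of Sneath's significance index.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Equations 5.8/5.9 in closed form: half-sum -/+ half-difference of the
# two old distances reduces to their min (single) or max (complete).
d1, d2 = 3.0, 5.0
assert (d1 + d2) / 2 - abs(d1 - d2) / 2 == min(d1, d2)
assert (d1 + d2) / 2 + abs(d1 - d2) / 2 == max(d1, d2)

rng = np.random.default_rng(0)
# two well-separated blobs of "objects" described by two features
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(5, 0.3, (10, 2))])

Z = linkage(X, method='average')   # average-linkage aggregation
d_max = Z[:, 2].max()              # largest merge distance in the tree

n_clusters = {}
for frac in (1/3, 2/3):
    labels = fcluster(Z, t=frac * d_max, criterion='distance')
    n_clusters[frac] = int(labels.max())
print(n_clusters)
```

Clusters that remain intact at both cut heights would, on Sneath's criterion, be treated as significant.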
5.3 PRINCIPAL COMPONENT ANALYSIS (PCA)
Principal component analysis, introduced in 1901 by Karl Pearson (Pearson, 1901) – also termed eigenvector analysis, eigenvector decomposition or Karhunen–Loève expansion – is an important and well-studied subject in statistics (Jolliffe, 2002), and hence one of the most widespread environmetric techniques. It has found application in an abundance of studies dealing with various environmental compartments. Several of the latest applications, grouped according to example water- and soil-related ecosystems, are:

• water (Czarnik-Matusewicz and Pilorz, 2006; Felipe-Sotelo et al., 2007; Kouras et al., 2007; Platikanov et al., 2007; Stanimirova et al., 2007);
• sediments (Idris, 2008; Marques et al., 2008; Terrado et al., 2006);
• soil (Pardo et al., 2008; Zhang et al., 2007).
PCA is an orthogonal linear transformation technique: it transforms the original data matrix into a product of two matrices, scores and loadings, which express the different pieces of information needed to explore the data. The matrix of scores has as many rows as the original data matrix, and presents a projection of objects (i.e., samples) on principal components (PCs), whereas the matrix of loadings has as many columns as the original data matrix, and presents a projection of features (i.e., variables measured) on PCs. According to Brereton (2003), the mathematical transformation of the original data in PCA takes the form

X = T·P + E   (5.15)
where T is a matrix of scores, P is a matrix of loadings, and E is an error matrix. PCA should be calculated on a covariance matrix when the data are centred, or on a correlation matrix when the data are standardised (Einax et al., 1998; Vandeginste et al., 1998). The number of columns in the matrix T equals the number of rows in the matrix P. It is possible to calculate scores and loadings matrices as large as desired, provided that the "common" dimension is no larger than the smaller dimension of the original data matrix; this common dimension corresponds to the number of PCs that are calculated. Each scores matrix consists of a series of column vectors, and each loadings matrix consists of a series of row vectors. Many authors use the notation t_a and p_a to represent these vectors, where a denotes the PC number. The matrices T and P are composed of several such vectors, one for each PC. The first scores vector and first loadings vector are often called the eigenvectors of the first PC, and each successive component is characterised by a pair of eigenvectors. Using f eigenvectors, where f is smaller than or equal to the rank of the data, f PCs can be obtained. In exploratory data analysis, PCA is often used for dimensionality reduction: the features of the dataset that contribute most to its variance are retained by keeping and evaluating only the lower-order PCs and ignoring the higher-order ones. PCA assumes that a few low-order PCs enable an efficient description of the data, while the high-order PCs can be neglected without losing significant information.
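The decomposition in Equation 5.15 can be reproduced with a plain singular value decomposition; a minimal NumPy sketch on synthetic data (not the chapter's software) makes the shapes of T and P and the role of E explicit:

```python
# Sketch: PCA as X = T*P + E (Equation 5.15) via SVD on centred data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))    # 20 samples (objects), 5 measured variables
Xc = X - X.mean(axis=0)         # centring -> PCA of the covariance matrix

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
a = 2                           # number of retained PCs
T = U[:, :a] * s[:a]            # scores: one column vector per PC
P = Vt[:a, :]                   # loadings: one row vector per PC
E = Xc - T @ P                  # error matrix of the a-component model

# fraction of total variance carried by the retained PCs
explained = (s[:a] ** 2).sum() / (s ** 2).sum()
print(T.shape, P.shape, round(float(explained), 3))
```

Here T has as many rows as the data matrix and P as many columns, exactly as described above.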
Elimination of minor PCs leads both to simplification of the analysis by dimensionality reduction and to the removal of extraneous variability from the system, as these components contain most of the random error. To estimate the number of informative PCs, various criteria can be applied, such as Kaiser's (Kaiser, 1960), Cattell's (Cattell, 1966), cross-validation (Mosier, 1951), or the percentage of explained variance. The Cattell (1966) scree test and the Kaiser (1960) rule are the most widely used; both are based on inspection of the eigenvalues of the correlation matrix. Cattell's recommendation is to retain only components above the point of inflection on a plot of eigenvalues ordered by diminishing size. Kaiser (1960) recommends retaining only eigenvalues of at least one, which is the average size of the eigenvalues in a full decomposition. The percentage of explained variance is applied as a heuristic criterion; it can be used once enough experience has been gained by analysing similar datasets. If all possible PCs are used in the model, 100% of the variance is explained. Typically, a fixed percentage of explained variance is specified, e.g., 80%; in environmental studies, even 75% of explained variance is a satisfactory measure of the adequacy of the chosen PCA model (Astel and Simeonov, 2008). Cross-validation (CV) methods, critically discussed by Diana and Tommasi (2002), Bro et al. (2008) and Browne (2000), are based not on the eigenvalues of the sample covariance matrix but on the predictive ability of different PCA solutions. According to Bro et al. (2008), the purpose of CV is to find a suitable number of components for a PCA model. "Suitable" implies that the model describes the systematic variation in the data and preferably not the noise, where noise can be loosely defined as any specific variation in a measurement that is not correlated with any other variation in the measured data.
Thus the aim of CV is to find the number of components beyond which adding further components does not provide a better description ("better" in an overall least-squares sense) of data not previously included. In the simplest case, every object of the input matrix X is removed from the dataset once (the leave-one-out method), and a model is computed with the remaining data. The removed data are then predicted by the PCA model, and the sum of squared residuals over all removed objects is calculated. The correct number of components can be chosen based on the global minimum of the PRESS (predictive residual sum of squares) index (Allen, 1971), or on the first local PRESS minimum (which is most often used in practice). For large datasets, the leave-one-out method can be replaced by leaving out a whole group of objects. Both Cattell's and Kaiser's criteria were first introduced in psychology, and applied studies in various branches of science have since refined them. Smith and Miao (1994) suggested 1.4 as a threshold eigenvalue for randomness, while Humphreys and Montanelli (1975) argued that the Kaiser rule holds only for very large correlation matrices. They proposed that criterion eigenvalue thresholds be estimated by simulation studies based on random data formed into matrices of relevant sizes. The number of non-random components is determined by comparing the eigenvalue vector of the empirical data matrix with the vector of mean eigenvalues from the simulations: only those leading empirical components with eigenvalues greater than their simulated equivalents are retained.
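Kaiser's rule and the explained-variance criterion can be applied directly to the eigenvalues of the correlation matrix. A hedged NumPy sketch, where the synthetic data (three variables driven by one common factor, two pure-noise variables) are our own illustration:

```python
# Sketch: retain PCs by Kaiser's rule (eigenvalue >= 1 on the correlation
# matrix) and by a fixed percentage of explained variance (here 80%).
import numpy as np

rng = np.random.default_rng(2)
t = rng.normal(size=(100, 1))               # one hidden common factor
X = np.hstack([t + 0.3 * rng.normal(size=(100, 3)),  # 3 correlated variables
               rng.normal(size=(100, 2))])           # 2 noise variables

R = np.corrcoef(X, rowvar=False)            # correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]

n_kaiser = int((eigvals >= 1.0).sum())      # Kaiser (1960) rule
cum = np.cumsum(eigvals) / eigvals.sum()
n_80 = int(np.searchsorted(cum, 0.80) + 1)  # 80% explained variance

print(n_kaiser, n_80)
```

The eigenvalues of a correlation matrix sum to the number of variables, so "eigenvalue of at least one" is indeed a cut at the average eigenvalue.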
Interpretation of the results of PCA is usually carried out by visualisation of the component scores and loadings. The score plot shows the linear projection of the objects that represents the main part of the total variance of the data (in a plot of PC1 against PC2). Other projection plots are also available (e.g., PC1 against PC3, or PC2 against PC3), but they represent a lower percentage of the explained total variance of the system under consideration. The correlation and importance of the feature variables are assessed from the factor loadings plots, presented as analogous combinations of components.
5.4 PRINCIPAL COMPONENT REGRESSION (PCR)
The appearance of a variety of pollution sources and the deficiency of emission data very often make the unequivocal identification of pollution profiles, and the assessment of their impact, problematic. It thus becomes necessary to extract information on the sources from the ambient monitoring data. There are several factor analysis methods that can be used to identify and apportion pollutant sources from environmental compartments. The basic factor analysis approach is PCA, described above; however, PCA does not provide direct balancing and apportionment. It is therefore commonly coupled with source apportionment analysis, an important environmetric approach that aims to estimate the contribution of the PCA-identified factors and particular components to the total mass investigated. One approach capable of accomplishing this is absolute principal component scores (APCS) analysis, introduced in 1985 by Thurston and Spengler (Thurston and Spengler, 1985), and applied successfully by many authors, predominantly for environmental compartments related to air (Bruno et al., 2001; Guo et al., 2004a, 2004b; Miller et al., 2002; Park and Kim, 2005; Song et al., 2006; Vallius et al., 2003, 2005), water (Pekey et al., 2004; Simeonov et al., 2003; Singh et al., 2005; Zhou et al., 2007a), sediment (Astel et al., 2006; Simeonov et al., 2007; Simeonova, 2006, 2007; Zhou et al., 2007b), and soil (Zuo et al., 2007). According to Henry et al. (1984) and Hopke (1985), the ratio between the number of cases (n) and variables (v) is essential when applying PCA-APCS in order to obtain statistically significant results and to quantify the contribution of each source to the total mass. As stated by Henry et al. (1984), significant results may be obtained when n > 30 + (v + 3)/2.
Next, the absolute principal component scores are determined and combined with a multiple regression of the total mass (dependent variable) on the APCS (independent variables). The main advantage of this modelling technique is its receptor orientation, and the opportunity to estimate source emissions without direct measurement. The first step in PCA-APCS is scaling of all component (variable) concentrations as

Z_ik = (C_ik − C̄_i)/s_i   (5.16)

where C_ik is the concentration of variable i in sample k, C̄_i is the arithmetic mean
concentration of variable i, and s_i is the standard deviation of variable i over all samples included in the analysis. As the factor scores delivered by PCA are scaled to mean zero and unit standard deviation, the true zero for each factor score should be calculated by introducing an artificial sample with concentration equal to zero for all variables, according to

(Z_0)_i = (0 − C̄_i)/s_i = −C̄_i/s_i   (5.17)
Finally, the source composition profiles and source contributions are estimated by multiple linear regression of the total mass concentration against the APCS values. Regressing the analyte concentration data on the APCS gives an estimate of the coefficients that convert the APCS into pollutant mass contributions from each source for each sample. The source contributions to C_i can be calculated by the linear regression

C_i = (b_0)_i + Σ_p APCS_p · b_pi,  p = 1, 2, …, n   (5.18)

where (b_0)_i is the constant term of the multiple regression for variable i, b_pi is the multiple regression coefficient of source p for variable i, and APCS_p is the scaled value of the rotated factor p for the sample being considered. The product APCS_p · b_pi represents the contribution of source p to C_i, and its mean over all samples represents the average contribution of that source. This balancing approach assumes that all sources have been identified by PCA, and that all of them participate in the source contribution procedure. Although the method estimates source profiles and contributions, a serious disadvantage is error propagation in the centring and uncentring of the data.
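The chain of Equations 5.16 to 5.18 can be sketched end-to-end on synthetic data. Note that this is only an illustration: the data, the choice of two components, and the omission of the varimax rotation used in practice are all our own simplifications:

```python
# Sketch of the PCA-APCS workflow (Equations 5.16-5.18) on synthetic data:
# standardise, take PCA factor scores, shift them by the scores of an
# artificial all-zero sample, then regress total mass on the result.
import numpy as np

rng = np.random.default_rng(3)
n, v = 120, 6
S = np.abs(rng.normal(size=(n, 2)))            # two hidden source strengths
profiles = np.abs(rng.normal(size=(2, v)))     # source composition profiles
C = S @ profiles + 0.05 * rng.normal(size=(n, v))  # measured concentrations
mass = C.sum(axis=1)                           # "total mass" per sample

C_mean, C_std = C.mean(axis=0), C.std(axis=0)
Z = (C - C_mean) / C_std                       # Equation 5.16
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U[:, :2] * s[:2]                      # factor scores, 2 components

z0 = (0 - C_mean) / C_std                      # artificial zero sample (5.17)
score0 = z0 @ Vt[:2].T                         # its factor scores
APCS = scores - score0                         # absolute PC scores

A = np.column_stack([np.ones(n), APCS])
b, *_ = np.linalg.lstsq(A, mass, rcond=None)   # Equation 5.18 coefficients
pred = A @ b
r2 = 1 - ((mass - pred) ** 2).sum() / ((mass - mass.mean()) ** 2).sum()
print(round(float(r2), 3))
```

Because the synthetic concentrations are generated from two sources, the regression of mass on the two APCS columns recovers nearly all of the variance.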
5.5 N-WAY PCA
The most commonly applied chemometric techniques are based on two-way datasets, usually presented in the form of matrices with variables (features) set up in columns and cases (samples) set up in rows. However, because of the increasing interest in handling multidimensional datasets, N-way methods, such as Tucker3 and PARAFAC (PARAllel FACtor analysis), have become popular in recent years in the field of environmental data analysis (Astel and Małek, 2008; Astel et al., 2008a; Barbieri et al., 1999a, 2002; Felipe-Sotelo et al., 2007; Giussani et al., 2008; Simeonov, 2004; Simeonov et al., 2001; Simeonova and Simeonov, 2007; Stanimirova and Simeonov, 2005; Stanimirova et al., 2006). Extending two-way approaches (such as PCA) to higher orders is desirable when, for instance, assessing the pollution in a given environmental compartment requires the monitoring of selected chemical features at various locations over a period of time. In the most typical data structure, such results can be arranged as features × sampling locations × time; however, examples in which two separate modes of the data array are formed by the time
dimension are also widely known (e.g., months, years) (Henrion, 1994; Astel and Małek, 2008). In a typical analysis using N-way methods, trends in parameters, locations and time can be visualised, while dealing with two time-dimensional modes allows long-term (i.e., annual) and short-term (i.e., seasonal) variability to be assessed simultaneously. The Tucker3 model derives its name from Ledyard R. Tucker, who proposed the model in 1966 (Tucker, 1966). Today, it has become one of the most basic N-way models used in environmetrics. It decomposes the three-way data array X (I × J × K) into three orthonormal loading matrices A (I × L), B (J × M) and C (K × N) (one for each mode) and a three-way core array Z (L × M × N). Very often, x_ijk is the value of the chemical, physical or biological parameter i (going from 1 to I), in month of sampling j (from 1 to J), at site k (from 1 to K), according to

x_ijk = Σ_{l=1}^{L} Σ_{m=1}^{M} Σ_{n=1}^{N} a_il b_jm c_kn z_lmn + e_ijk   (5.19)
where e_ijk constitutes the residual, i.e., the part of the data not accounted for by the model. In this equation the indices l, m and n run over the numbers of components chosen for describing the first, second and third way of the data array, respectively, and a_il, b_jm and c_kn are the elements of the three component matrices A, B and C. The A (I × L) matrix describes the measured parameters, B (J × M) the sampling months, and C (K × N) the sampling sites. Graphical examples of the decomposition realised by the Tucker3 and PARAFAC models have been presented by Brereton (2003), Pravdova et al. (2002) and Astel and Simeonov (2008). Each of these matrices can be interpreted in the same manner as a loading matrix of classical two-way PCA, as they are all column-wise orthogonal. The term z_lmn represents an element of Z, an array of dimensions (L × M × N), called the "core" of the model. The core element z_lmn weighs the product of the l component of the first way, the m component of the second way and the n component of the third way. The core array Z reflects the mutual interactions among A, B and C, and its largest squared elements indicate the most informative factors describing X. Moreover, the core array is needed for interpretation because of the rotational freedom of the model (Henrion, 1994). As mentioned, the component matrices A, B and C are constrained to be orthogonal, and the matrix columns are scaled to unit length. In this way, the squared value of the core element, z²_lmn, shows the strength of the interaction among the l, m and n components of the data array X = {x_ijk}. The Tucker3 model is computed by an iterative procedure based on the alternating least squares (ALS) algorithm (Andersson and Bro, 1998), with the solution permitting the partition of the sum of squares of the elements of X:

SS(X) = SS(model) + SS(residual)   (5.20)
The ratio SS(model)/SS(X) can be used to evaluate the strength of the model in representing its objects. In the following, this ratio will be called the explained variation of the model. The data array is usually pre-treated, by centring across mode
A and scaling within mode B in order to remove differences between environmental compartment quality parameters due to their magnitudes and different units of measure. The A, B, C matrices and the Z core array can be rotated, as demonstrated in classical factor analysis. A rotation method called variance of squares (Henrion and Anderson, 1999) optimises the variance of the squared core elements by distributing the total variance among a small number of elements, thus yielding models that are easier to interpret. The Tucker3 algorithm delivers a set of possible solutions, resulting in a large number of combinations of models with different complexities. In PARAFAC, on the other hand, the number of factors in each dimension is necessarily equal, and hence PARAFAC is considered a special case of Tucker3. To select a model with an optimal complexity, the variance of each combination of model complexities should be evaluated, from the model with the lowest number of factors in each mode to the model with the highest complexity. Most often, the optimal model is that which has the smallest possible number of factors in each of the modes but explains a large part of the data variance; in practice, a trade-off is needed between the two requirements. At the same time, the set of candidate Tucker3 and PARAFAC models should be validated, e.g., using the CV procedure for multi-way component models (Louwerse et al., 1999), half-split analysis (Harshman and de Sarbo, 1984), or core consistency analysis (only for PARAFAC) (Bro et al., 1999). Unlike PCA, where the unrotated factors have a purely abstract meaning, both Tucker3 and PARAFAC lead to results that are directly interpretable physico-chemically (Brereton, 2003).
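Equation 5.19 is simply a multilinear contraction. Assuming the factor matrices and core are already known (we do not implement the ALS fitting itself; sizes and data are synthetic), the reconstruction and an explained-variation figure in the spirit of Equation 5.20 can be sketched with `einsum`:

```python
# Sketch: Tucker3 reconstruction (Equation 5.19) and explained variation
# (Equation 5.20) for given factor matrices A, B, C and core array Z.
import numpy as np

rng = np.random.default_rng(4)
I, J, K = 6, 5, 4        # parameters x months x sites
L, M, N = 2, 2, 2        # model complexity in each mode

A = rng.normal(size=(I, L))              # first-mode loadings
B = rng.normal(size=(J, M))              # second-mode loadings
C = rng.normal(size=(K, N))              # third-mode loadings
Z = rng.normal(size=(L, M, N))           # core array

# x_ijk = sum_{l,m,n} a_il * b_jm * c_kn * z_lmn
X_model = np.einsum('il,jm,kn,lmn->ijk', A, B, C, Z)
X = X_model + 0.01 * rng.normal(size=(I, J, K))   # data = model + residual

E = X - X_model
explained = 1 - (E ** 2).sum() / (X ** 2).sum()   # analogue of SS(model)/SS(X)
print(X_model.shape, round(float(explained), 4))
```

Because the simulated residual is small, nearly all of the sum of squares is attributed to the model part.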
5.6 SELF-ORGANISING MAPS (SOM, KOHONEN MAPS)
The self-organising map (SOM) algorithm was originally proposed in 1982 by T. Kohonen (Kohonen, 1982, 1984, 1989), and has recently become a popular neural network architecture for solving environment-related problems, as it is noise tolerant. Interesting SOM applications have been reported, mainly in the field of exploratory data analysis and data processing, which can be grouped according to water- and soil-related environmental compartments as follows:

• water (Astel et al., 2007, 2008a, 2008b; Céréghino et al., 2001; Kalteh et al., 2008; Lee and Scholz, 2006; Park et al., 2003; Richardson et al., 2003; Tutu et al., 2005; Walley et al., 2000);
• sediments (Alp and Cigizoglu, 2007; Alvarez-Guerra et al., 2008; Astel and Małek, 2008; Lacassie et al., 2004; Tsakovski et al., 2009);
• soil (Boszke et al., 2008; Ehsani and Quiel, 2008).
Moreover, SOM has been successfully applied to various problems in the analysis of ecological processes, such as plant disease detection (Moshou et al., 2005) and ecological modelling (Lek and Guégan, 1999; Recknagel, 2003). The term "self-organising" refers to the ability to learn and organise information without being given the associated dependent output values for the input pattern (Mukherjee, 1997). One of the most valuable properties of SOM is that it
can be used simultaneously both to reduce the amount of data by clustering and to project the data non-linearly. Since the SOM output is a community of virtual neurons arranged in a lower-dimensional lattice (usually two-dimensional) obtained through an unsupervised learning algorithm, no researcher intervention is required during the learning process, and little needs to be known about the characteristics of the input data. Higher-dimensional output is also possible, but it is inconvenient for visualisation purposes and hence not as common (Vesanto, 1999). The number of neurons in the output layer may vary over a considerable range (from a few dozen up to several thousand) according to the dataset's dimensionality. Each neuron is represented by a d-dimensional weight vector (also termed the prototype vector or codebook vector) m = [m_1, …, m_d], where d is equal to the dimension of the input vectors. The neurons are connected to adjacent neurons by a neighbourhood relation that dictates the topology, or structure, of the SOM; hence similar objects should be mapped close together on the grid. For the SOM algorithm, there are no precise rules for choosing the various initialisation parameters, such as: the global topology type, whether sheet (two-dimensional output), cylinder or toroid (both three-dimensional outputs); the local topology type (hexagonal or rectangular); the map dimension; or the neighbourhood function (e.g., Gaussian, bubble). The quality of the mapping can be quantitatively measured with the quantisation error (QE) and the topographic error (TE). A small number of grid nodes will result in a high QE and well-defined clusters, whereas a large number of nodes results in a low QE and, in the most extreme case, a cluster for each data sample. To assess SOM quality, topology preservation is quantified by the TE (Kohonen, 2001): the lower the TE (the closer to zero), the better the SOM preserves the topology (Arsuaga and Díaz, 2006).
However, there are no reference values of QE and TE; they depend strongly on the map resolution. According to Peeters et al. (2006), QE decreases rapidly with an increasing number of grid nodes, while TE converges to a stable value. As suggested by Vesanto et al. (2000), the most satisfactory compromise between clustering ability, QE and TE can be achieved when the number of nodes (n) is determined using the equation

n = 5·√(number of cases in the dataset)   (5.21)

The discrete neuron lattice can be either hexagonal or rectangular; hexagonal is preferred, however, because it does not favour the horizontal or vertical direction (Kohonen, 2001), and it is effective for visualisation purposes (Vesanto, 1999). The successive stages in applying the SOM technique can be characterised as follows.
5.6.1 Data gathering and preprocessing
Despite SOM’s high tolerance for missing values, the gaps should be filled if possible. The most common approaches are replacement by mean of neighbouring non-missing values, estimation using the iterative approach or maximum likelihood concept
(Stanimirova et al., 2007; Walczak and Massart, 2001a, 2001b), as well as assignment of one-half or one-third of the limit of detection (LOD) when the concentration of an analyte is below the LOD (Aruga, 2004; El-Shaarawi and Esterby, 1992; Helsel, 2005). However, the most suitable technique should be chosen according to the data structure and the details of the experimental analysis. To avoid any effects of the scale of units, and to prevent variables with a larger spread from having a disproportionately high impact, autoscaling is recommended; by transforming all features to a range of, e.g., 0–1, all features are given equal importance in the formation of the SOM.
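One hedged way the gap-filling and 0–1 range scaling might look in practice (mean replacement and min–max scaling are only two of the options listed above):

```python
# Sketch: fill missing values with the column mean, then scale each
# feature to the 0-1 range before SOM training.
import numpy as np

X = np.array([[1.0, 10.0],
              [2.0, np.nan],     # a missing measurement
              [3.0, 30.0],
              [4.0, 20.0]])

col_mean = np.nanmean(X, axis=0)             # per-feature mean, NaNs ignored
filled = np.where(np.isnan(X), col_mean, X)  # simple mean replacement

lo, hi = filled.min(axis=0), filled.max(axis=0)
scaled = (filled - lo) / (hi - lo)           # every feature now spans 0-1
print(scaled.min(axis=0), scaled.max(axis=0))
```

After this step, no single feature dominates the distance calculations merely because of its units or spread.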
5.6.2 Training
After initialisation, the weight vectors take random values, and the SOM is then trained iteratively with one of two possible algorithms: sequential or batch. The sequential training algorithm (STA) constructs the nodes of the SOM so as to represent the whole dataset, optimising their weights at each iteration step. In each step, one sample vector x is chosen randomly from the input dataset, and the distances between it and all the weight vectors of the SOM are calculated using some distance measure, such as the Euclidean distance, squared Euclidean distance or Mahalanobis distance. The node c whose weight vector m_c is closest to the input vector x is called the best matching unit (BMU):

‖x − m_c‖ = min_i {‖x − m_i‖}   (5.22)
where ‖·‖ is a distance measure (typically the Euclidean distance). Missing data are handled by excluding them from the distance calculation (i.e., their contribution to the distance ‖x − m_i‖ is assumed to be zero). Because the same missing values are ignored in each distance calculation over which the minimum is taken, this is a valid solution. After the BMU has been found, the weight vectors are updated according to the update rule in Equation 5.23, so that the BMU is moved closer to the input vector. The topological neighbours of the BMU are also moved closer, because of their mutual connection:

m_i(t + 1) = m_i(t) + α(t)·h_ci(t)·[x(t) − m_i(t)]   (5.23)

where t is time; m_i(t) is the weight vector, indicating the output unit's location in the data space at time t; α(t) is the learning rate at time t; h_ci(t) is a non-increasing neighbourhood function centred on the winning unit c at time t; and x(t) is the input vector, drawn randomly from the input dataset at time t. The sequential training algorithm is usually performed in two phases: in the first phase, a relatively large initial learning rate and neighbourhood radius are used; in the second phase, both the learning rate and the neighbourhood radius become smaller. The second possible training algorithm is called the batch training algorithm (BTA) because, instead of using a single data vector at a time, the whole dataset is
presented to the map before any adjustment is made. In each training step, the dataset is partitioned into the Voronoi regions of the map weight vectors (i.e., each data vector belongs to the map unit to which it is closest). The new weight vectors are then calculated as

m_i(t + 1) = Σ_{j=1}^{n} h_ic(t)·x_j / Σ_{j=1}^{n} h_ic(t)   (5.24)

where c = argmin_k {‖x_j − m_k‖} is the index of the BMU of data sample x_j. The new weight vector is thus a weighted average of the data samples, where the weight of each data sample is the value of the neighbourhood function h_ic(t) at its BMU c. As in the STA, missing values are ignored in calculating the weighted average.
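The sequential loop of Equations 5.22 and 5.23 can be sketched in a few lines of NumPy. This is a deliberately minimal illustration, not a substitute for a SOM toolbox: a rectangular grid, a Gaussian neighbourhood and linearly decaying rates are our own simplifying choices:

```python
# Minimal sequential SOM sketch (Equations 5.22 and 5.23).
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.2, (30, 2)),
               rng.normal(3, 0.2, (30, 2))])   # two clusters of input vectors

rows, cols, d = 3, 3, 2
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
m = rng.normal(size=(rows * cols, d))          # weight (codebook) vectors

n_steps, alpha0, sigma0 = 2000, 0.5, 1.5
for t in range(n_steps):
    x = X[rng.integers(len(X))]                # random input vector
    c = np.argmin(((x - m) ** 2).sum(axis=1))  # BMU, Equation 5.22
    alpha = alpha0 * (1 - t / n_steps)         # decaying learning rate
    sigma = max(sigma0 * (1 - t / n_steps), 0.3)
    # Gaussian neighbourhood h_ci on the output grid
    h = np.exp(-((grid - grid[c]) ** 2).sum(axis=1) / (2 * sigma ** 2))
    m += alpha * h[:, None] * (x - m)          # update rule, Equation 5.23

# after training, inputs from the two clusters should map to different BMUs
bmu = lambda v: int(np.argmin(((v - m) ** 2).sum(axis=1)))
print(bmu(np.array([0.0, 0.0])), bmu(np.array([3.0, 3.0])))
```

The batch rule of Equation 5.24 would replace the inner update with a neighbourhood-weighted average over the whole dataset per epoch.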
5.6.3 Interpretation
Once the competitive training process of the net has been accomplished, the resulting map can be used for visualisation, clustering or local modelling purposes. One of the most convenient ways of using SOM results to support interpretation is visualisation in the form of so-called component planes. Component planes can be considered as a sliced version of the SOM, and they provide a powerful tool for analysing the community structure. There are as many component planes as there are data features: a single plane is a representation of the map showing the values taken by the same component of the weight vectors in each of the map units. A simple inspection of a component plane gives an idea of the abundance of a feature and, by comparing component planes, correlations between features can be seen (parallel gradients indicate positive correlations, whereas anti-parallel gradients indicate negative correlations). When a large variety of features is considered, it is difficult to compare entire sets of component planes, and hence a generalisation approach is desirable. Component planes can be visualised in the form of a summary SOM (a unified distance matrix, or U-matrix) to show the contribution of each feature to the self-organisation of the map. The U-matrix visualises the distances between neighbouring map units and helps to identify the cluster structure of the map: high values of the U-matrix indicate a cluster border, while uniform areas of low values indicate the clusters themselves (Ultsch and Siemon, 1990). In other words, the U-matrix expresses semi-quantitative information about the distribution of the complete set of features over the complete set of cases, whereas a single component plane visualises the distribution of a given feature over the complete set of cases. Because of this, the U-matrix combined with the component planes can be effectively applied to the assessment of inter-feature and inter-case relations.
For the clustering task, one of the most commonly applied algorithms is the non-hierarchical K-means clustering algorithm. In this case, different values of k (a predefined number of clusters) are tried, and the sum of squares for each run is calculated. Finally, the classification with the lowest Davies–Bouldin index should be chosen; this index is a function of the ratio of within-cluster scatter to between-cluster separation (Davies and Bouldin, 1979).
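Selecting k by the Davies–Bouldin index can be sketched in plain NumPy. The toy blobs, the restart logic and the simple Lloyd-style K-means below are our own illustrative choices (in SOM practice this clustering is typically run on the trained codebook vectors):

```python
# Sketch: choose k for K-means by the Davies-Bouldin index.
import numpy as np

def kmeans(X, k, rng, n_iter=50, restarts=5):
    # Lloyd's algorithm with random restarts; keep the lowest-inertia run.
    best = None
    for _ in range(restarts):
        centres = X[rng.choice(len(X), k, replace=False)]
        for _ in range(n_iter):
            labels = ((X[:, None] - centres[None]) ** 2).sum(-1).argmin(1)
            centres = np.array([X[labels == i].mean(0) if (labels == i).any()
                                else centres[i] for i in range(k)])
        inertia = ((X - centres[labels]) ** 2).sum()
        if best is None or inertia < best[0]:
            best = (inertia, labels, centres)
    return best[1], best[2]

def davies_bouldin(X, labels, centres):
    # mean within-cluster scatter s_i vs centroid separations d_ij
    k = len(centres)
    s = np.array([np.linalg.norm(X[labels == i] - centres[i], axis=1).mean()
                  for i in range(k)])
    worst = [max((s[i] + s[j]) / np.linalg.norm(centres[i] - centres[j])
                 for j in range(k) if j != i) for i in range(k)]
    return float(np.mean(worst))

rng = np.random.default_rng(6)
blobs = [rng.normal(c, 0.3, (30, 2)) for c in ([0, 0], [5, 0], [0, 5])]
X = np.vstack(blobs)

db = {}
for k in (2, 3, 4):
    labels, centres = kmeans(X, k, rng)
    db[k] = davies_bouldin(X, labels, centres)
print(min(db, key=db.get))
```

Compact, well-separated clusters give small scatter-to-separation ratios, so the index is minimised at the natural number of groups.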
The issues of data gathering, preprocessing, training and post-processing have been discussed in detail by Vesanto (1999, 2002), Vesanto and Alhoniemi (2000), Vesanto et al. (2000) and Kohonen (2001), which is why only a brief introduction has been presented here.
5.7 CASE STUDIES
The multivariate statistical methods described above are considered to be the most important modelling tools for water- and soil-related environmental compartments, and can be illustrated by several specific case studies.
5.7.1 Environmetrics in assessing sustainable development rule implementation in a river ecosystem located at an international boundary area
The study was carried out in 2002, based on samples collected along a 75 km section of the middle Odra River (on the German/Polish border), between the estuary of the Nysa Łużycka River (544 km) and that of the Warta River (619 km). Seven cross-sections were demarcated, as shown in Figure 5.1, and the samples from the German bank (A), the Polish bank (C) and the middle part of the riverbed (B) were collected during 8-hour surveys. Based on long-term observation, the mean annual flow of the Odra River was estimated at 270 m³ s⁻¹ (Choiński, 1981), and the summary annual flow of the smaller tributaries (without the Nysa Łużycka River) was estimated at 5 m³ s⁻¹. These values permitted the assumption that the smaller tributaries did not contribute significantly to the Odra River pollution between the third and seventh cross-sections. The samples originating from the middle part of the riverbed and the Polish bank at cross-section 2 were not collected, for economic reasons, because of the lack of meaningful tributaries at the Polish site, and from an understanding of the possible pollution sources. The mean annual flow of the Nysa Łużycka River was estimated at 25 m³ s⁻¹ (almost 10% of the Odra River flow). Cross-section 2 lies in the close vicinity of Eissenhuttenstadt City and the Odra–Sprewa channel, where intensive agricultural activity (fruit culture and market gardening) is present in the German territory, and an additional sample was therefore collected from the German bank (2A); agricultural runoff is thus considered as an impact on the Odra River bottom sediment pollution characteristics. The remaining locations were comparably affected by various anthropogenic sources of pollution (factories, traffic emissions, agricultural and urban activities). The summarised characteristics of each cross-section are presented in Table 5.1. Bottom sediments were collected from a boat using a tubular scoop-type hand corer made of stainless steel.
The sediment samples were homogenised in an agate mortar, and dry-sieved to separate the required fractions from <250 g of bulk sediment. In order to avoid sample contamination, a nylon sieve was used. Samples of bottom sediments were subjected to sequential extraction according to the modified Tessier
Figure 5.1: Location of cross-sections along the shore of the Odra River (Germany/Poland border).
Source: Astel, A., Głosińska, G., Sobczyński, T., Boszke, L., Simeonov, V. and Siepak, J. (2006). Chemometrics in assessment of sustainable development rule implementation. Central European Journal of Chemistry. 4(3): 543–564. With permission.
Table 5.1: Characteristics of each sampling cross-section in the Odra River downstream direction.

Cross-section | Location | Possible pollution sources
1 | About 2 km below the estuary of the Nysa Łużycka River | Agriculture
2 | Above the estuary of the Odra–Sprewa channel; at this cross-section only one sample was collected, near the German bank, labelled 2A | Agriculture
3 | Below the estuary of the Odra–Sprewa channel, directly below the city of Eissenhuttenstadt | Steelworks, engineering industry, shipbuilding industry, processing industry
4 | Below the estuary of the backwater channel passing through Cybinka | Steelworks, metal processing industry, agriculture, feed plant, breeding farms
5 | Below the cities Słubice and Frankfurt n/O | Electrical industry, printing industry
6 | Above the estuary of the Warta River | Agriculture
7 | Below the estuary of the Warta River, below the city of Kostrzyn n/O | Paper and wood factory

Source: Astel, A., Głosińska, G., Sobczyński, T., Boszke, L., Simeonov, V. and Siepak, J. (2006). Chemometrics in assessment of sustainable development rule implementation. Central European Journal of Chemistry. 4(3): 543–564. With permission.
procedure (Tessier et al., 1979). Accordingly, the fractionation delivered the following fractions:

I: exchangeable fraction;
II: metals bound with carbonates;
III: metals bound with hydrated iron and manganese oxides;
IV: metals bound with organic matter; and
V: mineral fraction.
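One routine use of such fractionation results is to express each fraction as a percentage of the total content, and to sum fractions I and II as the readily mobile (bioavailable) share. A minimal sketch, with invented values (not the study's data):

```python
# Illustrative only: per-fraction percentage contributions from a sequential
# extraction, and the "mobile" share (fractions I + II). Values are invented.
fractions = {"I": 0.0, "II": 1.4, "III": 1.1, "IV": 0.9, "V": 0.7}  # mg/kg

total = sum(fractions.values())
percent = {f: round(100 * v / total, 1) for f, v in fractions.items()}
mobile_share = percent["I"] + percent["II"]  # considered readily bioavailable

print(percent)
print(mobile_share)
```

The same per-fraction percentages are what the text below reports when it states, for example, that a metal was "bound with carbonates in amounts of <35% of the total content".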
The total content of heavy metals and the metals in the solutions obtained after each step of the sequential extraction were determined by atomic absorption spectroscopy (AAS) (Perkin Elmer AAnalyst 300, USA). Cadmium was determined by means of the graphite furnace (GF) AAS technique. In order to assess the repeatability of the results provided by the method of sequential extraction, the experiment was conducted for five samples in parallel. A detailed description of the experimental methods, their calibration and validation can be found elsewhere (Boszke et al., 2004; Głosińska et al., 2005). An analysis of the standard reference materials LKSD-2 (Canadian lake sediment) and SRM 2711 (Montana soil – agricultural soil) was carried out. The reference materials were subjected to exactly the same analytical procedure as the bottom sediment samples. Heavy metal
Table 5.2: Metal content compared with PIG, LAWA and US EPA guidelines for sediments, and major statistics of the bulk samples (the specific guideline boundary values for each classification system are given alongside the number of sediment samples).

Element | N | Mean/mg kg⁻¹ | Minimum/mg kg⁻¹ | Maximum/mg kg⁻¹ | Standard deviation | Skewness factor
Cd | 19 | 4.09 | 2.93 | 7.87 | 1.05 | 2.7
Pb | 19 | 46.38 | 21.23 | 163.35 | 33.99 | 2.7
Cr | 19 | 9.54 | 1.57 | 47.47 | 13.76 | 2.2
Zn | 19 | 118.03 | 28.05 | 471.31 | 121.04 | 2.0
Cu | 19 | 24.09 | 11.45 | 88.31 | 21.07 | 2.2
Ni | 19 | 13.80 | 5.11 | 31.03 | 5.65 | 1.5
Mn | 19 | 273.09 | 47.56 | 1241.52 | 282.15 | 2.6
Fe | 19 | 6095.43 | 1493.71 | 37972.80 | 8959.63 | 2.9

n.c.l.: no content limit for Cd.
Source: Astel, A., Głosińska, G., Sobczyński, T., Boszke, L., Simeonov, V. and Siepak, J. (2006). Chemometrics in assessment of sustainable development rule implementation. Central European Journal of Chemistry. 4(3): 543–564. With permission.
recoveries were as follows: Cu, 88%; Pb, 86%; Ni, 82%; Cr, 79%; Mn, 85%; Zn, 88%; and Fe, 79%. The distribution of metals in the bottom sediments was tested by comparing the total content sums throughout the extraction procedure. The results of the determination of Cd, Cu, Cr, Fe, Mn, Zn, Pb and Ni in the bottom sediments are given as the total content (mg kg⁻¹) of each metal, expressed as the sum of the contents extracted at the individual stages of the sequential analysis, together with the maximum and minimum contents, the standard deviation and the skewness factor, which characterises the departure of the element content distribution from normality; the contents are compared with the guidelines for sediments from the Polish Geological Institute (PIG), the Working Group of the Federal States on water problems, Germany (LAWA) and the United States Environmental Protection Agency (US EPA), as listed in Table 5.2. The mean contents of heavy metals found in the samples collected from the German bank (in mg kg⁻¹) were: Cd, 4.50 ± 1.56; Pb, 61.7 ± 49.7; Cu, 34.7 ± 26.9; Zn, 199 ± 165; Cr, 18.5 ± 19.1; Ni, 17.3 ± 7.0; Fe, 11 934 ± 13 045; and Mn, 413 ± 429. These values were higher than those of the same metals in the sediments collected from the Polish bank (mg kg⁻¹): Cd, 3.86 ± 0.39; Pb, 38.6 ± 22.6; Cu, 22.4
Table 5.2 (continued): Number of sediment samples in each pollution class (guideline boundary values, mg kg⁻¹, in parentheses).

PIG guidelines for sediments (I, not polluted; II, negligibly polluted; III, moderately polluted; IV, heavily polluted):
Cd: I, 0 (<0.7); II, 6 (0.7–3.5); III, 12 (3.5–6); IV, 1 (>6)
Pb: I, 6 (<30); II, 12 (30–100); III, 1 (100–200); IV, 0 (>200)
Cr: I, 19 (<50); II, 0 (50–100); III, 0 (100–400); IV, 0 (>400)
Zn: I, 15 (<125); II, 1 (125–300); III, 3 (300–1000); IV, 0 (>1000)
Cu: I, 14 (<20); II, 5 (20–100); III, 0 (100–300); IV, 0 (>300)
Ni: I, 14 (<16); II, 5 (16–40); III, 0 (40–50); IV, 0 (>50)
Mn, Fe: not regulated

LAWA guidelines for sediments (I, not polluted; II, negligibly polluted; III, moderately polluted; IV, heavily polluted):
Cd: I, 0 (<0.3); II, 0 (0.3–2.4); III, 19 (2.4–9.6); IV, 0 (>9.6)
Pb: I, 2 (<25); II, 17 (25–200); III, 0 (200–800); IV, 0 (>800)
Cr: I, 19 (<80); II, 0 (80–400); III, 0 (400–800); IV, 0 (>800)
Zn: I, 15 (<100); II, 3 (100–400); III, 1 (400–800); IV, 0 (>800)
Cu: I, 14 (<20); II, 5 (20–240); III, 0 (240–400); IV, 0 (>400)
Ni: I, 18 (<100); II, 1 (100–400); III, 0 (400–800); IV, 0 (>800)
Mn, Fe: not regulated

US EPA guidelines for sediments (I, not polluted; II, moderately polluted; III, heavily polluted):
Cd: n.c.l.
Pb: I, 13 (<40); II, 2 (40–60); III, 4 (>60)
Cr: I, 17 (<25); II, 2 (25–75); III, 0 (>75)
Cu: I, 14 (<25); II, 2 (25–50); III, 3 (>50)
Ni: I, 18 (<20); II, 1 (20–50); III, 0 (>50)
Mn: I, 15 (<300); II, 2 (300–500); III, 2 (>500)
Zn, Fe: not regulated
± 20.7; Zn, 79.3 ± 72.5; Cr, 6.4 ± 7.3; Ni, 10.8 ± 3.8; Fe, 3460 ± 2780; and Mn, 124 ± 102. The contents of most of the heavy metals in the samples collected from the central riverbed were the lowest, except for nickel (mg kg⁻¹): Cd, 3.84 ± 0.72; Pb, 36.3 ± 12.7; Cu, 13.4 ± 1.1; Zn, 62.4 ± 9.9; Cr, 2.2 ± 0.37; Ni, 12.8 ± 3.5; Fe, 1920 ± 158; and Mn, 13.4 ± 1.1. The chemical pollution of the sediments was further evaluated by comparison with the sediment quality guidelines proposed by the US EPA (Filgueiras et al., 2004), LAWA (Meyer, 2002) and PIG (Bojakowska, 2001). In Polish legislation, the PIG guidelines had the status of proposals, and were not obligatory in 2002. In the LAWA guidelines, the first permanent criterion of classification states that foreign matter should not be present in the environment at concentration levels higher than the LOD of the currently available analytical techniques. The second criterion is created by the target content levels of heavy metals, which provide a sufficient protection level for aquatic organisms. According to the Polish classification, bottom sediments belonging to categories I and II can be arbitrarily applied in land or aquatic environments (fertilisers, riverbank reconstruction processes, dyke building). Sediments belonging to category III
can be relocated to specified water reservoirs, or applied on land to a limited degree. Category IV includes bottom sediments that have to be purified before use in the environment, or which should be relocated to safe storage areas. According to the criteria shown in Table 5.2, the PIG guidelines for sediments are more restrictive than those of LAWA, and closer to those of the US EPA, in which the target values were estimated taking into account the harmful impact of heavy metals on the organisms living in water (MacDonald, 1994).

Except for sample 2A, no heavily polluted sediments were found according to the PIG and LAWA classifications. As far as the presence of Ni, Cu, Zn and Cr was concerned, most sediments were characterised as negligibly polluted. The measured content of Pb indicated that 63% (PIG) and 89% (LAWA) of the bottom sediment samples were negligibly polluted. Against the US EPA guidelines, there were only four samples indicating heavy pollution with Pb, three indicating heavy pollution with Cu, and two samples heavily polluted with Mn. Depending on the element investigated, the number of moderately polluted sampling sites varied between one and two. The lowest pollution was observed for Cr and Ni under all of the classification quality guidelines compared. On the basis of the heavy metal content values determined in the bottom sediment samples studied, it appeared that, despite the presence of numerous sources of pollutants, the studied reach of the Odra River could be classified as negligibly polluted.

The contribution of particular fractions of heavy metals determined by sequential analysis permitted the estimation of the possibilities of metal migration, and hence their bioavailability potential. The most mobile forms of heavy metals occurred in fractions I and II – that is, in the exchangeable fraction and metals bound with carbonates.
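The guideline comparison above is, mechanically, an interval classification: each sample's content is placed into the class whose boundaries enclose it, and the classes are counted. A small sketch (the Pb boundaries follow the PIG limits quoted in Table 5.2; the sample values are invented, not the study's data):

```python
# Hedged sketch: counting samples per pollution class given ascending class
# boundaries, in the spirit of the PIG/LAWA/US EPA comparison above.
from bisect import bisect_right

def classify(value, limits, classes=("I", "II", "III", "IV")):
    """Return the class whose interval contains `value`.

    `limits` are the upper bounds of all classes but the last,
    e.g. Pb (PIG): I < 30, II 30-100, III 100-200, IV > 200.
    """
    return classes[bisect_right(limits, value)]

pb_limits = (30, 100, 200)           # mg/kg, PIG guideline for Pb
samples = [21.2, 46.4, 88.3, 163.4]  # invented Pb contents

counts = {}
for s in samples:
    c = classify(s, pb_limits)
    counts[c] = counts.get(c, 0) + 1
print(counts)  # {'I': 1, 'II': 2, 'III': 1}
```

Running the same counting for each element under each guideline system yields exactly the per-class sample counts tabulated in Table 5.2.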
Some authors have investigated the metal content within these fractions because of the opportunity to determine the level of mobility in bottom sediments (Adamiec and Helios-Rybicka, 2002; Svete et al., 2001). The content level provides information concerning the potential environmental risk. The metals bound with carbonates are able to penetrate the bulk water easily as a result of a pH decrease in the river, and thus the metal content in fractions I and II depends strongly on the water conditions. In the bottom sediments, Cd, Pb and Cr did not occur in the exchangeable fraction, but were bound with carbonates in amounts of <35% (Cd), <25% (Pb) and <13% (Cr) of the total content. Zn, Ni, Cu and Mn appeared in the exchangeable fraction but their amounts, with the exception of Mn, did not exceed 10%, and their occurrence in fraction II varied between 25% (Zn) and 35% (Cu). The iron content was negligible in the bioavailable fractions (0.4% in fraction I, 2–3% in fraction II).

The dendrogram of the sediment pattern resulting from CA according to Ward's method is presented in Figure 5.2. Using CA, riverbanks with different pollution states could be clearly distinguished. As presented in Figure 5.2, there were two different groups of sediments in the Odra River, and the division was closely connected with the location of the major pollution sources along the shore of the river. In order to interpret the dendrogram more easily, a grey-scale map was additionally generated; this is not a common complement to dendrograms. The grey-scale map of the standardised contents of heavy metals in the bottom sediments collected at the 19 locations is presented in Figure 5.3. The
Figure 5.2: Dendrogram of cluster analysis according to Ward's method: total content of heavy metals in Odra River sediments (y axis: 100D/Dmax; sample order: 3B 3A 5B 4B 4A 3C 1C 6C 7C 7A 5C 7B 6B 1B 2A 5A 6A 4C 1A; two groups, I and II, are distinguished).
Source: Astel, A., Głosińska, G., Sobczyński, T., Boszke, L., Simeonov, V. and Siepak, J. (2006). Chemometrics in assessment of sustainable development rule implementation. Central European Journal of Chemistry. 4(3): 543–564. With permission.
map clearly enhances interpretability, and its use is recommended by many authors (Stanimirova et al., 2005). In general, it could be seen that the samples clustered in group I were characterised by a higher content of the heavy metals analysed. The bottom sediment samples collected at 1A and 6A (German bank) and 4C (Polish bank) were characterised by a similarly high content of Cd, Pb, Cr, Zn and Cu. The bottom sediment samples collected near Słubice–Frankfurt on the Odra at the German site were strongly polluted by Fe and Mn. The content of all heavy metals increased dramatically above the estuary of the Odra–Sprewa channel (2A), reflecting the variety of industrial factories and intensively cultivated agricultural land at the German bank. By application of PCA to the matrix of eight features (total content of Cd, Cu, Cr, Fe, Mn, Zn, Pb and Ni) and 19 samples of the Odra River sediments, two components were extracted, describing approximately 88% of the common variance (Table 5.3).
Figure 5.3: Grey-scale map representing the standardised content of heavy metals (Cd, Pb, Cr, Zn, Cu, Ni, Mn, Fe; mg kg⁻¹) in bottom sediment samples collected along the shore of the Odra River.
Source: Astel, A., Głosińska, G., Sobczyński, T., Boszke, L., Simeonov, V. and Siepak, J. (2006). Chemometrics in assessment of sustainable development rule implementation. Central European Journal of Chemistry. 4(3): 543–564. With permission.
Table 5.3: Matrix of factor loadings (loadings below 0.7 omitted).

Element | Factor 1 | Factor 2
Cd | 0.865 | –
Pb | 0.941 | –
Cr | 0.864 | –
Zn | 0.934 | –
Cu | 0.915 | –
Mn | – | 0.958
Ni | 0.723 | –
Fe | – | 0.956
Eigenvalue | 4.71 | 2.34
% of variance | 71.5 | 16.7

Source: Astel, A., Głosińska, G., Sobczyński, T., Boszke, L., Simeonov, V. and Siepak, J. (2006). Chemometrics in assessment of sustainable development rule implementation. Central European Journal of Chemistry. 4(3): 543–564. With permission.
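The extraction and rotation steps behind Table 5.3 can be sketched as follows. This is a minimal NumPy illustration on synthetic data (six variables sharing one latent factor, two sharing another), not the study's computation: components are extracted from the correlation matrix of the autoscaled data, Varimax-rotated, and the variables loading above 0.7 are read off.

```python
# Sketch: PCA from the correlation matrix + Varimax rotation + 0.7 cut-off.
# Synthetic data stand in for the 19 x 8 sediment matrix.
import numpy as np

def varimax(L, max_iter=100, tol=1e-6):
    """Orthogonal Varimax rotation of a loading matrix L (p x k)."""
    p, k = L.shape
    R = np.eye(k)
    prev = 0.0
    for _ in range(max_iter):
        LR = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (LR ** 3 - LR @ np.diag((LR ** 2).sum(0)) / p))
        R = u @ vt
        if s.sum() < prev * (1 + tol):
            break
        prev = s.sum()
    return L @ R

rng = np.random.default_rng(0)
n = 500
f1, f2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([f1 + 0.3 * rng.normal(size=n) for _ in range(6)] +
                    [f2 + 0.3 * rng.normal(size=n) for _ in range(2)])

Z = (X - X.mean(0)) / X.std(0)          # autoscaling
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = eigval.argsort()[::-1][:2]      # keep the two largest eigenvalues
loadings = eigvec[:, order] * np.sqrt(eigval[order])
rotated = varimax(loadings)

high = np.abs(rotated) > 0.7            # "reliable" loadings, as in the text
print(high.sum(0))                      # variables loading on each factor
```

With this construction, six variables load above 0.7 on one rotated factor and two on the other, mirroring the anthropogenic/semi-natural split reported below.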
It should be emphasised at this point that the mean mixing distance of the lotic system was not examined in detail. In taking this decision, the following points were taken into account:

• The flow condition of the river was seasonally changeable, because the water quantity and flow conditions changed with the shape of the riverbank and the depth of the river reach.
• The mean annual flow of the left- and right-bank tributaries (between the third and seventh cross-sections) was estimated as less than 2% of the mean annual flow of the Odra River, and therefore should not influence the main stream significantly.
• The bottom sediment pollution is partly connected with the existence of hard-to-define, diffuse sources (i.e. surface flow or agricultural flow) with unknown inflow dynamics and magnitude.
• The width of the Odra River varied from 70 to 100 m in the area investigated, and diffuse sources of pollution introduced at one bank should not have any significant impact on the quality of bottom sediment at the other bank.
• The chemical composition of bottom sediments is a result of long-term accumulation processes, whereas the sampling campaign took place during a 1-day survey.
To interpret the group of variables associated with a particular factor, only loadings higher than 0.7 were considered reliable. After normalised Varimax rotation, it was possible to isolate precisely the factors that could represent groups of pollutants. PC1 described 71.5% of the common variance, and was highly loaded with Cd, Pb, Cr, Cu, Zn and Ni. This factor could be termed "anthropogenic", as the metals were apparently derived from a large number of evenly distributed contamination sources (dust fall-out, rock denudation, and water eroding soils that have been consistently over-fertilised) (Alloway and Ayres, 1999); from a large number of inputs of municipal and industrial waste, especially in the lower part of the river (metal-plating baths, production of lead glass, batteries, enamel and H2SO4, steelworks, and metallurgical facilities); or from tributaries carrying homogenised, contaminated, fine-grained matter. Similar results were obtained by other authors (Banat and Howari, 2003; Borovec, 1996; Einax et al., 1997; Filgueiras et al., 2004; Kuang-Chung et al., 2001; Lee et al., 2003; Loska and Wiechuła, 2003; Rios-Arana et al., 2004; Simeonov et al., 2001; Stanimirova et al., 1999). By contrast, Mn and Fe were loaded in PC2 (16.7% of the common variance explained), which was therefore termed a "semi-natural background". In Figure 5.4, the features are displayed using the rotated factor solution.

Interpretation of the factorial solution can be confirmed by representation of the factor scores at each sampling point along the river course (against distance of flow). Figures 5.5a and 5.5b represent the position plots for the observations corresponding to the two obtained factors. On the x axis the banks are presented schematically as capital letters: the equal distance between the letters has no mathematical interpretation, and does not correspond to the real distance between the Polish and German banks across the river reach.
All values presented for areas in between the main sampling
Figure 5.4: Loading plot of the factors obtained using the Varimax rotation approach (Factor 1: Cd, Pb, Cr, Zn, Cu, Ni; Factor 2: Mn, Fe).
Source: Astel, A., Głosińska, G., Sobczyński, T., Boszke, L., Simeonov, V. and Siepak, J. (2006). Chemometrics in assessment of sustainable development rule implementation. Central European Journal of Chemistry. 4(3): 543–564. With permission.
sites were interpolated, and not obtained by measurements. High scores correspond to a high influence of a given factor at the sampling site. The results obtained confirm the solution found by CA, an unsupervised technique. The plot of PC1 scores for the different sampling sites showed that the sediment matrix composition was sharply different at the following four sites: 1A, 2A, 6A (German bank) and 4C (Polish bank). Two of these strongly polluted Odra subsections were directly related to the Odra tributaries: the estuary of the Odra–Sprewa channel (2A), and the estuary of the backwater channel passing through Cybinka (4C). The highest contents of Cd, Pb, Cr and Cu were determined in the bottom sediment samples collected in the upper part of the Odra River at the German bank (1A, 2A) and below the estuary of the backwater channel passing through Cybinka (4C), where a steelworks was located and surface runoff from cultivated fields was present. It was assumed that an industrial facility was located at the German bank. As shown by the plot of PC2 scores for the different sampling sites, the sediment samples were highly polluted with Fe and Mn, but only in the proximity of Frankfurt on Odra. As
Figure 5.5: (a) Plot of factor scores for the various sampling sites (PC1: Cd, Pb, Cr, Cu, Zn and Ni); (b) plot of factor scores for the various sampling sites (PC2: Fe, Mn). In both panels the y axis is the sampling point distance (540–630 km) and the x axis the bank (A, B, C).
Source: Astel, A., Głosińska, G., Sobczyński, T., Boszke, L., Simeonov, V. and Siepak, J. (2006). Chemometrics in assessment of sustainable development rule implementation. Central European Journal of Chemistry. 4(3): 543–564. With permission.
mentioned above, the relation between Fe and Mn should be interpreted as a "semi-natural background", as the Frankfurt on Odra agglomeration was supplied with drinking water coming from Tertiary and Cretaceous underground springs. Before being introduced into the water distribution system, raw water is usually treated in water treatment plants to remove iron and manganese (Gouzinis et al., 1998; Teng et al., 2001; Vos et al., 1997). As a consequence, Fe and Mn are immobilised on the filter packing, which is periodically backwashed. After backwashing, the wastewater was most probably dumped in the Odra River, which explains why it was possible to observe the high content of Fe and Mn in the bottom sediments in the vicinity of Frankfurt on Odra.

The PC score was computed for each PC identified above. These values were then converted to APCS, and the mass loadings for the samples (mg kg⁻¹) were then regressed on the APCS according to the procedure described previously. The weighted least squares regression of the heavy metal contents determined in the bottom sediment samples collected from the Odra River on the elemental APCS yielded significant regression coefficients (p < 0.05) for all principal components considered, a significant intercept (i.e. the impact of undefined sources) of 554.21 mg kg⁻¹ (4.43%), and an overall determination coefficient near 1. The predicted and observed total contents of heavy metals in the bottom sediments collected from the Odra River were well correlated. As an example, a comparison of the estimated and measured totals of heavy metals (mass content) and the total content of Cd, Cr and Mn (mg kg⁻¹) is presented in Figure 5.6. The total heavy metal pollution level in the studied reach of the river was estimated as 12 510.45 mg kg⁻¹, to which the total contribution of PC1 was 1130.94 mg kg⁻¹ (9.04%) and that of PC2 was 10 825.29 mg kg⁻¹ (86.53%).
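The APCS (absolute principal component score) step just described can be sketched as follows, with synthetic data standing in for the sediment matrix. The idea: compute PC scores for an artificial true-zero-concentration sample, subtract them from the ordinary scores to obtain absolute scores, and regress each element's content on those; the intercept estimates the unexplained mass and the slope terms apportion the rest between sources. Ordinary least squares is used here in place of the weighted regression applied in the study.

```python
# Hedged sketch of APCS receptor modelling; data and source structure invented.
import numpy as np

rng = np.random.default_rng(2)
n = 60
s1, s2 = rng.gamma(2.0, 1.0, n), rng.gamma(2.0, 1.0, n)  # two latent "sources"
X = np.column_stack([5 * s1, 3 * s1, 4 * s2]) + rng.normal(0, 0.1, (n, 3))

mean, std = X.mean(0), X.std(0)
Z = (X - mean) / std
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
V = eigvec[:, eigval.argsort()[::-1][:2]]       # top-2 component directions

scores = Z @ V
z0 = (np.zeros(3) - mean) / std                 # artificial zero-content sample
apcs = scores - z0 @ V                          # absolute PC scores

# Regress each element's content on the APCS: intercept = unexplained mass;
# slope x mean(APCS) = mean contribution of each source to that element.
A = np.column_stack([np.ones(n), apcs])
coef, *_ = np.linalg.lstsq(A, X, rcond=None)

contrib = coef[1:] * apcs.mean(0)[:, None]      # per-source mean contribution
explained = coef[0] + contrib.sum(0)            # reproduces the mean content
print(np.round(explained, 2), np.round(mean, 2))
```

Summing `contrib` across elements gives the per-source totals analogous to the PC1/PC2 mass contributions quoted above.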
In Table 5.4, the PCA-derived source profiles are presented as a mass percentage for the eight heavy metals (Cd, Pb, Cr, Zn, Cu, Mn, Ni and Fe) determined in the bottom sediments, for which the obtained PCs were characterised as "anthropogenic" and "semi-natural". It was apparent that the statistical significance of the receptor models obtained was quite satisfactory, as indicated by the value of the determination coefficient R² for the comparison of the experimentally determined mass of the elements with that calculated using the model. For Cd and Cr, the values of the intercepts obtained indicated that more than 10% of the content of these components was not described by the model. For the other elements, the intercept value did not exceed 10%. The smallest values of the unexplained content were obtained for Mn (4.2%) and Fe (4.3%). Moreover, the results indicated that, in the Odra River ecosystem, the heavy metals coming from the water treatment plants located near large agglomerations played a major role in the total pollution: the impact of the many factories (smelters, paper plants, steelworks or metallurgical facilities) was more than eight times lower than that of the Fe and Mn sources. Analysis of the bioavailability of the heavy metals introduced into the river ecosystem according to the source apportionment indicated that the content of heavy metals in bioavailable form (fractions I + II) introduced by anthropogenic sources was as follows (mg kg⁻¹): Cd, 20.53; Zn, 439.18; Ni, 42.33; Cr, 16.92; Cu, 120.78; Fe, 76.44; Mn, 127.85; and Pb, 175.12. For the bioavailable forms introduced by "semi-natural" sources, the results obtained were as follows (mg kg⁻¹): Cd, 2.38; Zn, 55.97; Ni, 21.45; Cr, 5.12; Cu, 26.91; Fe, 1031.90; Mn, 1265.42; and Pb, 26.65.
Figure 5.6: Comparison of the content of Cd, Cr and Mn, measured and estimated by the APCS model (mg kg⁻¹, left axis), and of the sum of heavy metals, measured and estimated (right axis), at sampling sites 1A–7C.
Source: Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292. With kind permission of Springer Science and Business Media.
Table 5.4: Contribution of the latent factors to the chemical composition of the sediment samples.

Element | Intercept/% | PC1 (anthropogenic)/% | PC2 (semi-natural)/% | Measured/mg kg⁻¹ | Estimated/mg kg⁻¹ | R²
Cd | 10.6 | 80.1 | 9.3 | 7.77 | 7.82 | 0.85
Pb | 8.4 | 79.5 | 12.1 | 88.11 | 89.06 | 0.88
Cr | 13.1 | 66.7 | 20.2 | 18.12 | 17.91 | 0.83
Zn | 8.0 | 81.6 | 10.4 | 224.26 | 227.77 | 0.81
Cu | 7.8 | 75.4 | 16.8 | 45.77 | 45.02 | 0.81
Mn | 4.2 | 8.8 | 87.1 | 518.87 | 521.62 | 0.84
Ni | 9.9 | 59.8 | 30.3 | 26.22 | 25.55 | 0.81
Fe | 4.3 | 6.6 | 89.1 | 11 581.33 | 11 575.7 | 0.84

Source: Astel, A., Głosińska, G., Sobczyński, T., Boszke, L., Simeonov, V. and Siepak, J. (2006). Chemometrics in assessment of sustainable development rule implementation. Central European Journal of Chemistry. 4(3): 543–564. With permission.
Environmental analysis of the bottom sediments indicated that the multivariate statistical strategies allowed reliable information to be obtained about the ecological system under consideration, and were also useful for testing the implementation of the sustainable development rule. The environmetric approach to the dataset obtained by the analysis of bottom sediment samples collected in the Odra River (Germany/Poland) revealed levels of information not accessible to traditional environmental data collection, and was a step forward in understanding the complexity of the environmental system under study. This indicates that chemometric methods have become indispensable tools in efforts to improve the conditions of life.
5.7.2 Multivariate classification and modelling in surface water pollution estimation
The following case study offers statistical multivariate techniques for the classification, modelling and interpretation of monitoring data obtained from environmental dynamic systems, and in particular the freshwater system of the karstic area surrounding the city of Trieste in north-eastern Italy. In a series of previously published papers, several traditional chemometric approaches (CA, PCA and time-series analysis) have been implemented in order to gain specific information on the monitoring data structure from the region of interest (Barbieri et al., 1998, 1999b, 2002; Reisenhofer et al., 1996, 1998). In the papers cited, the region of the city of Trieste was also considered with respect to water quality assessment by chemometrics, albeit for a very limited period of time. However, all of the chemometric trials confirmed a deep conviction that the water pollution estimation using many available parameters and time-series could be improved if additional chemometric strategies (Kohonen SOM and N-way PCA) were applied in order to classify, model and interpret monitoring data from the area. Because of the calcitic–dolomitic nature of the Trieste area, many of the watercourses flowing there are subterranean: e.g., the Timavo River, which has a surficial course of about 50 km in Slovenia, then sinks into a limestone fissure near the Italian border, and assumes a variety of routes, mainly hypogenic, before emerging near the coast and flowing into the Adriatic Sea. This condition is favourable for preserving the quality of these waters, but at the same time hinders not only a detailed knowledge of the hydrology of the subterranean watercourses, but also the sampling and monitoring operations necessary for an adequate understanding of the behaviour and properties of this complex hydrological system. A further factor of complexity is due to the permeability of the karstic soils, which induces mutual overflowing among the contiguous watersheds of this area. 
Overlapping phenomena, which obviously depend on seasonal rain conditions and produce occasional intrusions of waters from the northern Isonzo and Vipacco rivers into the southern karstic wells related to the Timavo River, were previously verified by Reisenhofer et al. (1996). The waters mentioned are relevant because they contribute to the municipal supply of the Province of Trieste. In particular, the waters of the three wells at Sablici, Moschenizze Nord and Sardos, indicated in Figure 5.7 as SB, MN and SA, respectively, were collected for drinking purposes. In this case study, the spring of the Timavo River,
Italy, where the water emerges after a path under the karstic limestone (TI), was also considered. These spring waters are occasionally characterised by turbidity events derived from soil runoff occurring after significant meteorological events in the Slovenian epigeous tract, which can also affect the Sardos wells. All data reported in the case study presented here were obtained within the framework of a long-term water quality monitoring programme at the four sampling stations, which are important for the drinking water supply of the city of Trieste. The main aim of the case study, based both on chemical and physical analyses and on biological monitoring of the Timavo River waters, was to demonstrate how advanced multivariate statistical approaches could contribute to a better understanding of the data collected during monitoring episodes over a long period of observation. Similarities between different sampling sites in the multivariate space of quality parameters could be revealed, which is an important step in optimising drinking water control in the region of interest. Furthermore, the seasonal behaviour of the water quality events could be proved and utilised in practice. Finally, the linkage between the water quality parameters might characterise the role of the various natural or anthropogenically influenced factors in the overall spring water quality. The total assessment, taking into account chemical, geological and anthropogenic impacts, is possible only if the monitoring data are interpreted in a multivariate way.
Figure 5.7: Location of monitoring points of the city of Trieste water supply system (Italy). [Map showing the Isonzo and Vipacco rivers, Doberdo' Lake, Monfalcone, the Gulfs of Panzano and Trieste, the Timavo estuary, and the sampling sites TI (Timavo), SA (Sardos), SB (Sablici) and MN (Moschenizze Nord).]
Source: Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292. With kind permission of Springer Science and Business Media.
Four sampling sites were considered. Sardos (SA), Sablici (SB) and Moschenizze Nord (MN) are the three historical karstic wells that contribute to the water supply of the municipality of Trieste. Timavo (TI) represents the 40 km spring (see Figure 5.7). Water samples were taken monthly over a span of 11 years, from January 1995 to December 2005, and analysed within 48 h in the Laboratory for Analysis and Control of ACEGAS-APS of Trieste. The analytical determinations followed official procedures of Italian law and the standard methods of the American Public Health Association (Eaton and Clesceri, 1995). The parameters determined in situ were turbidity (TURB, Jackson turbidity units (JTU)), temperature (TEMP, °C) and conductivity (COND, µS cm⁻¹, corrected to 25 °C). All other parameters were measured in the laboratory: chlorides (Cl⁻, mg L⁻¹), sulfates (SO₄²⁻, mg L⁻¹), total hardness (HARD, °f; 1 °f (degree French) = 10 ppm), dissolved oxygen (DOXY, mg L⁻¹), nitrates (NO₃⁻, mg L⁻¹), nitrites (NO₂⁻, mg L⁻¹), ammonia (NH₃, mg L⁻¹), orthophosphates (PO₄³⁻, mg L⁻¹), UV-absorbing organic constituents (humic acids, aromatic compounds, tannins, lignins, etc.), total coliforms (CT, most probable number (MPN) 100 ml⁻¹), faecal coliforms (CF, MPN 100 ml⁻¹) and faecal streptococci (SF, MPN 100 ml⁻¹). When the concentration of an analyte was below the limit of detection (LOD), the value ½LOD was used in the dataset because of chemometric requirements (Aruga, 2004; El-Shaarawi and Esterby, 1992; Helsel, 2005). For Cl⁻, SO₄²⁻, NO₃⁻ and DOXY, the number of such replacements did not exceed 0.5% of the total number of samples. For NO₂⁻, NH₃ and PO₄³⁻, more than 95% of the results were below the associated LODs (NO₂⁻, 0.015 mg L⁻¹; NH₃, 0.03 mg L⁻¹; PO₄³⁻, 0.03 mg L⁻¹). Because of their negligible variance and informative ability, these ions were excluded from further chemometric evaluation.
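Both the ½LOD substitution for censored results and the SOM map-sizing heuristic used in this chapter can be sketched briefly. The concentration values below are hypothetical; only the nitrite LOD and the 528-episode count (4 sites × 132 months) come from the text.

```python
import math
import numpy as np

# Hypothetical censored series: NaN marks a below-LOD nitrite result.
LOD_NO2 = 0.015                                        # mg L^-1 (from the text)
raw = np.array([0.021, np.nan, 0.034, np.nan, 0.019])
filled = np.where(np.isnan(raw), LOD_NO2 / 2.0, raw)   # 1/2 LOD substitution
print(filled)

# Common SOM sizing rule: about 5 * sqrt(n) map units for n samples.
n_episodes = 4 * 132                                   # 4 sites x 132 monthly episodes
n_units = 5 * math.sqrt(n_episodes)
print(round(n_units))                                  # ~115 units; a 9 x 12 grid (108) is the nearest convenient rectangle
```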
In this study, the Kohonen network was chosen as a rectangular grid with a hexagonal lattice. For the monthly averages, the dimensionality of the Kohonen map was determined as 9 × 12 (following the heuristic n = 5√528 ≈ 115 for the number of map units). In Figure 5.8 (see colour insert), the SOM component planes for the entire set of sites and water quality parameters for the entire period of monitoring are indicated. It was readily seen that the distribution of the turbidity measurements at all sites and monitoring periods very much resembled the distribution pattern of the biological parameters (CT, CF and SF), and even the UV (light absorbance) factor. This was a logical linkage, because the turbidity of the water mass is closely related to bacterial activity and, in addition, determines the light absorbance. For the SOMs, increased turbidity was always associated with high concentrations of coliforms and high light absorbance. To detect relationships between the parameters observed across all sites and periods of monitoring, an additional visualisation application of SOM was used, with the results presented in Figure 5.9 (see colour insert). The plot presents the grouping of all water quality parameters. Both the location of, and the distance between, variables on the map, together with an analysis of the colour-tone patterns, provide semi-quantitative information in terms of a correlation coefficient. Every "island" on the graph can be assessed and interpreted separately. The variables grouped in one separate "island" are positively correlated; with increasing
distance between variables, the correlation coefficient decreases. A homogeneous group of similar patterns is formed by the bacterial forms in the water, turbidity and the light absorbance factor (SF–CT–CF–UV–TURB). The overall salt content, expressed by the conductivity factor, is related to water hardness and chloride concentration (Cl⁻–HARD–COND), and less so to sulfate concentration, since all these natural solutes are commonly influenced by the meteorological dilution factor. The remaining parameters (DOXY, TEMP and NO₃⁻) do not belong to either of the two major groups, and obviously have a more specific function in determining the water quality. In Figure 5.10, the clusters formed by the objects of observation (four sampling sites over 132 episodes, checked monthly for 11 years) are presented on the SOM. The number of significant clusters is determined by the lowest value of the Davies–Bouldin index (four in this case). In Table 5.5, the results of applying the Kolmogorov–Smirnov test to differences between levels of quality indicators of the river water for clusters I–IV
Figure 5.10: Clustering pattern according to Davies–Bouldin index minimum value. Both grey-scale hexagons in each SOM unit and digits represent the numbers of samples belonging to particular clusters. [SOM map partitioned into clusters I–IV, alongside a plot of the Davies–Bouldin index value (vertical axis, 0.9–1.5) against the number of clusters (horizontal axis, 0–6), with the minimum at four clusters.]
Source: Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292. With kind permission of Springer Science and Business Media.
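The unit-level maps underlying Figures 5.8–5.10 come from Kohonen training. A minimal online SOM can be sketched in NumPy as below; the grid size, decay schedules and random input data are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def train_som(data, rows, cols, n_iter=2000, lr0=0.5, seed=0):
    """Minimal online SOM: rectangular grid, Gaussian neighbourhood,
    exponentially decaying learning rate and neighbourhood radius."""
    rng = np.random.default_rng(seed)
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    weights = rng.random((rows * cols, data.shape[1]))
    sigma0 = max(rows, cols) / 2.0
    for t in range(n_iter):
        frac = t / n_iter
        lr = lr0 * np.exp(-3 * frac)          # learning-rate decay
        sigma = sigma0 * np.exp(-3 * frac)    # neighbourhood-radius decay
        x = data[rng.integers(len(data))]     # pick one random sample
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Gaussian neighbourhood around the best-matching unit on the grid
        h = np.exp(-np.sum((grid - grid[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights, grid

rng = np.random.default_rng(1)
data = rng.random((528, 12))          # episodes x quality parameters (hypothetical)
weights, grid = train_som(data, 9, 12)
print(weights.shape)                  # (108, 12): one codebook vector per map unit
```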
Table 5.5: Statistical assessment (Kolmogorov–Smirnov test) of differences between levels of chemical indicators of river water quality for clusters (I–IV) obtained by the SOM algorithm. Variables' units are indicated in the text.

Mean values:

Variable (monthly averages) | I      | II     | III    | IV
Turbidity                   | 0.90   | 2.0    | 1.0    | 12.50
Temperature                 | 11.30  | 12.10  | 12.90  | 11.90
Conductivity                | 335.0  | 384.0  | 321.0  | 371.0
Cl⁻                         | 5.10   | 7.40   | 4.80   | 5.50
SO₄²⁻                       | 9.50   | 10.30  | 8.40   | 9.50
Hardness                    | 19.0   | 23.0   | 18.0   | 22.0
Dissolved oxygen            | 8.60   | 8.60   | 7.50   | 9.10
NO₃⁻                        | 7.20   | 7.30   | 6.10   | 6.30
Total coliforms             | 136.0  | 230.0  | 264.0  | 1044.0
Faecal coliforms            | 35.0   | 51.0   | 57.0   | 267.0
Faecal streptococci         | 12.0   | 30.0   | 31.0   | 103.0
UV absorption               | 0.06   | 0.08   | 0.07   | 0.16

Kolmogorov–Smirnov test (cluster combinations):

Variable            | I–II     | I–III    | I–IV     | II–III          | II–IV    | III–IV
Turbidity           | ***      | p > 0.10 | ***      | ***             | ***      | ***
Temperature         | ***      | ***      | ***      | 0.05 < p < 0.10 | ***      | ***
Conductivity        | ***      | ***      | ***      | ***             | ***      | ***
Cl⁻                 | ***      | ***      | **       | ***             | p > 0.10 | **
SO₄²⁻               | ***      | ***      | p > 0.10 | ***             | ***      | ***
Hardness            | ***      | ***      | ***      | ***             | p > 0.10 | ***
Dissolved oxygen    | p > 0.10 | ***      | *        | ***             | ***      | ***
NO₃⁻                | p > 0.10 | ***      | ***      | ***             | ***      | p > 0.10
Total coliforms     | ***      | ***      | ***      | p > 0.10        | ***      | ***
Faecal coliforms    | **       | ***      | ***      | ***             | **       | ***
Faecal streptococci | ***      | ***      | ***      | ***             | ***      | ***
UV absorption       | ***      | ***      | ***      | ***             | p > 0.10 | ***

***p < 0.0001; **0.0001 ≤ p < 0.01; *0.01 ≤ p < 0.05.
Source: Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292. With kind permission of Springer Science and Business Media.
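The pairwise cluster comparisons in Table 5.5 rest on the two-sample Kolmogorov–Smirnov test. With SciPy (assumed available), the test looks like this on synthetic stand-ins for a single variable; the distributions below merely mimic the turbidity gap between cluster I and the high-turbidity cluster IV.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical turbidity values for two clusters, loosely echoing the
# cluster I and cluster IV means in Table 5.5.
cluster_i = rng.normal(loc=0.9, scale=0.4, size=200)
cluster_iv = rng.normal(loc=12.5, scale=3.0, size=200)

stat, p = ks_2samp(cluster_i, cluster_iv)
print(f"D = {stat:.3f}, p = {p:.2e}")   # D near 1, vanishingly small p
```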
obtained by the SOM algorithm are presented. In Table 5.6, the cluster episode distributions are shown. From this information, it can be concluded that the clusters are fairly homogeneous. Cluster I (upper left of the SOM) contains predominantly episodes from the sites at MN and SB for the sampling period between November and May, that is, the winter period (the exact content of the clusters is presented in Table 5.6). The episodes from SA and TI were significantly fewer (only 11.4% and 1.5% of the total number of observations for these sites, respectively). Reisenhofer et al. (1996) suggested that, during the winter, water from MN and SB is influenced by massive ingressions of water from the Isonzo River, lowering the overall water conductivity at these sites. Cluster II (bottom left of the SOM) consists mainly of SA and TI samples, which is an indication of the spatial separation of the two groups of sites. The contribution of the other two sites to this cluster is limited to 4.5% of all episodes for MN; the same value applies to SB. The third cluster (upper right of the SOM) involves the remaining episodes for MN and SB, but for the summer period of monitoring, with limited data from TI and SA. This distribution of objects indicates that a well-formed seasonality pattern is revealed predominantly for the MN and SB water sources; for TI and SA the seasonality is not as definitively expressed. The last cluster (bottom right of the SOM) contains a relatively small number of hits from all four sampling sites. It is characterised by high levels of bacterial contamination, and in this sense is a "bacterial" outlier compared with the other three groups. The observation of only a small number of episodes marked by higher bacterial contamination is indirect proof of good water quality for the catchment under consideration.
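The cluster-count selection by the Davies–Bouldin minimum described above can be sketched as a small NumPy function. This is a hedged illustration on synthetic two-blob data; the actual analysis clustered SOM units, not raw samples.

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: lower values indicate better-separated clusters."""
    ks = np.unique(labels)
    centroids = np.array([X[labels == k].mean(axis=0) for k in ks])
    # Mean within-cluster distance to the centroid (scatter S_i)
    scatter = np.array([
        np.linalg.norm(X[labels == k] - c, axis=1).mean()
        for k, c in zip(ks, centroids)
    ])
    db = 0.0
    for i in range(len(ks)):
        # Worst-case similarity ratio R_ij = (S_i + S_j) / d(c_i, c_j)
        ratios = [
            (scatter[i] + scatter[j]) / np.linalg.norm(centroids[i] - centroids[j])
            for j in range(len(ks)) if j != i
        ]
        db += max(ratios)
    return db / len(ks)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
good = np.repeat([0, 1], 50)          # labelling that matches the two blobs
bad = np.tile([0, 1], 50)             # labelling that ignores the structure
print(davies_bouldin(X, good) < davies_bouldin(X, bad))  # True
```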
The next step in intelligent data analysis was the application of N-way PCA, in order to provide a factorial model accounting for most of the dataset variability and also providing evidence of where and when specific chemical patterns play a role. The dimensions of the three-way approach involve water quality parameters (mode A), sampling periods (mode B) and sampling sites (mode C), which together identify each sampling episode. As a result, the analysed data were arranged in a three-way array of dimensionality 12 (parameters) × 132 (months) × 4 (sampling sites). Thus the

Table 5.6: Cluster episode distributions.

Cluster | Moschenizze Nord | Sablici    | Sardos     | Timavo
I       | 50.0% (66)       | 49.2% (65) | 11.4% (15) | 2.3% (3)
II      | 4.6% (6)         | 4.6% (6)   | 60.6% (80) | 80.3% (106)
III     | 42.4% (56)       | 42.4% (56) | 23.5% (31) | 2.3% (3)
IV      | 3.0% (4)         | 3.8% (5)   | 4.5% (6)   | 15.1% (20)
Source: Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292. With kind permission of Springer Science and Business Media.
quality parameters, months and sampling points constitute the modes of the array. The arrangement is presented schematically in Figure 5.11. The Tucker3 model was used to interpret the whole system of quality parameters, time periods and sampling locations. In Figure 5.12, the selection of the optimal model, of type (2 2 1), is indicated. From this plot it is readily visible that model (2 2 1) describes almost 96% of the total variance of the system. The model involves two components in the water quality parameter mode, two components in the sampling time mode, and one component in the sampling site mode. Models with a higher number of components were discarded, because their fitting increment added little additional useful information. In Table 5.7, the core array of the (2 2 1) Tucker3 model is presented. The core array achieved straightforward interpretability because of the small number of significant interactions; in addition, no rotation was necessary. The next step of interpretation, based on the figures from the core matrix, was the construction of loading plots for modes A and B (Figures 5.13 and 5.14). In Figure 5.13, the loading plot for mode A is shown, as a representation of the relationships between the water quality parameters. In addition to the SOM classification, one can obtain additional information concerning the linkage between the water

Figure 5.11: Graphical representation of the three-way data array. [Schematic showing quality parameters as mode A, months in the period between 1995 and 2005 as mode B, and sampling locations (Sardos, Sablici, Moschenizze Nord, Timavo) as mode C.]
Source: Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292. With kind permission of Springer Science and Business Media.
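The arrangement in Figure 5.11 amounts to reshaping a flat monitoring table into a parameters × months × sites array. A sketch with random placeholder data (the row ordering of the flat table is an assumption):

```python
import numpy as np

n_params, n_months, n_sites = 12, 132, 4
rng = np.random.default_rng(42)

# Flat monitoring table: one row per (month, site) episode, one column per parameter.
flat = rng.random((n_months * n_sites, n_params))

# Rearrange into the three-way array X[parameter, month, site] (modes A, B, C).
X = flat.reshape(n_months, n_sites, n_params).transpose(2, 0, 1)
print(X.shape)  # (12, 132, 4)
```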
Figure 5.12: Percentage of data variance explained by three-way Tucker models of different complexities. [Explained variation of X (vertical axis, 95.0–97.5%) plotted against total Tucker3 model dimensionality (3–12 components). Candidate models range from (1 1 1), explaining about 95%, through (2 2 1) at almost 96%, up to (4 4 4) at about 97.5%.]
Source: Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292. With kind permission of Springer Science and Business Media.
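The explained-variance screen in Figure 5.12 can be reproduced in spirit with a truncated higher-order SVD (HOSVD). This is a hedged sketch: the authors' Tucker3 fit was presumably computed with an alternating least-squares algorithm, and HOSVD gives only a near-optimal approximation; the random array stands in for the real data.

```python
import numpy as np

def unfold(X, mode):
    # Matricise the tensor along one mode
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def tucker_hosvd(X, ranks):
    """Truncated HOSVD: orthonormal factor per mode plus a core array."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = X.copy()
    for mode, U in enumerate(factors):
        # Mode-n product with U^T shrinks axis `mode` to the chosen rank
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

rng = np.random.default_rng(0)
X = rng.random((12, 132, 4))
core, factors = tucker_hosvd(X, (2, 2, 1))
# With orthonormal factors, explained variation = ||core||^2 / ||X||^2
explained = np.linalg.norm(core) ** 2 / np.linalg.norm(X) ** 2
print(f"{explained:.1%}")
```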
Table 5.7: Core array elements of the (2 2 1) Tucker3 model.

(A1, B1, C1) | (A1, B2, C1) | (A2, B1, C1) | (A2, B2, C1)
306.19       | 5.21 × 10⁻⁵  | 0.00012      | 28.668
Source: Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292. With kind permission of Springer Science and Business Media.
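The way a Tucker3 core couples the mode loadings can be written out explicitly: each reconstructed entry is a core-weighted sum of products of loadings, X̂_ijk = Σ_pqr g_pqr a_ip b_jq c_kr. A sketch with the two dominant core values from Table 5.7 and random placeholder loadings (the real loadings are those plotted in Figures 5.13 and 5.14):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((12, 2))    # mode A loadings (quality parameters), placeholder
B = rng.random((132, 2))   # mode B loadings (months), placeholder
C = rng.random((4, 1))     # mode C loadings (sites), placeholder

G = np.zeros((2, 2, 1))
G[0, 0, 0] = 306.19        # (A1, B1, C1) core element from Table 5.7
G[1, 1, 0] = 28.668        # (A2, B2, C1) core element from Table 5.7

# X_hat[i,j,k] = sum_pqr G[p,q,r] * A[i,p] * B[j,q] * C[k,r]
X_hat = np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)
print(X_hat.shape)  # (12, 132, 4)
```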
temperature and the remaining parameters. From the loadings plot it is readily seen that factor A1 shows how high turbidity, bacterial content and organic matter arrive with water of low temperature, conductivity and dissolved oxygen (this could be called "cold runoff water"). Factor A2 shows a contrast between dissolved oxygen
Figure 5.13: The two factors plotted for water quality parameters (mode A). Vertical axis represents factor loading values. [Bar plot of loadings A1 and A2 (range −0.8 to 0.8) for the variables TURB, TEMP, COND, Cl⁻, SO₄²⁻, HARD, DOXY, NO₃⁻, CT, CF, UV and SF.]
Source: Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292. With kind permission of Springer Science and Business Media.
Figure 5.14: The two factors plotted for temporal changes (mode B). Vertical axis represents factor loading values. [Loadings B1 and B2 (range −0.3 to 0.4) plotted against months from January 1995 to January 2005.]
Source: Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292. With kind permission of Springer Science and Business Media.
and nitrates on one side and temperature and bacteria on the other: cold, bacterially clean waters can dissolve more oxygen and nitrates. Such occurrences reflect the seasonal behaviour of the water quality parameters, as additionally indicated in Figure 5.14. The plot presents loadings B1 and B2, and clearly indicates the seasonality already detected by the SOM classification. The baseline model of the time series (B1) is accompanied by a well-expressed seasonal pattern (B2), with minima in winter and maxima in summer. The environmental meaning of the "seasonal factor" will be discussed later, when the core elements (A2, B2 and C1) are considered. The factor loadings of the sampling stations in mode C are quite similar, with no differences indicated between them. The core matrix indicates that the most significant variation in the dataset is seen in the A1, B1 and C1 interaction, where even the constant background time factor differentiates the bacterial water quality parameters and water turbidity (mainly through the temperature effect). In addition, the spatial mode does not affect the overall water quality of the region of interest. The other important interaction (A2, B2 and C1) highlights the most relevant dynamic factor affecting the considered water quality parameters in this spring water system: high temperature and low dissolved oxygen and nitrates occur during the summer (the values of A2 are multiplied by positive values of B2), while in winter (negative sign in B2) the relation between the quality parameters changes sign, which means that cold waters deliver more dissolved oxygen, but also higher nitrates and lower bacterial content. The rather evident sinusoidal trend of B2 shows the gradual transition between the two extreme summer and winter episodes. Sites SA, SB, MN and TI are similarly affected by this dynamic factor.
According to the factorial model, these two patterns of variation appear to be the most relevant with respect to a substantially constant and good water quality. The presented case study shows that the combined strategy of SOM and N-way PCA is well suited to handling an environmental dataset describing variations in 12 chemical and biological quality parameters, each sampled monthly for 11 years at four sampling sites. The visualisation of the monitoring results by SOM makes it possible to classify different water quality patterns for all sites in consideration, and for the whole monitoring period, which would remain undetected by other data projection options. The output of Tucker3 modelling is more compact and informative than the classical two-way PCA used for detecting spatial and temporal patterns. All the aspects mentioned above make SOM and N-way PCA very promising tools for integration in environmental decision support systems.
5.7.3
Exploration of the environmental impact of a tsunami disaster
A tsunami generated by an earthquake on 26 December 2004 affected most of the countries around the Indian Ocean, and is considered one of the worst natural disasters in human history. As a result, the attention of the world's public, and of scientists from various disciplines, focused on tsunami impacts on the coastal zone. Some aspects of tsunami effects, such as the environmental impacts of the tsunami,
and the role of tsunami sediment as a potential carrier of pollution, were very poorly represented in the scientific literature before the 2004 event. No pre-2004 reports on contamination of tsunami sediments and their successive chemical alterations can be found. After December 2004, many interdisciplinary studies were conducted in different parts of the Indian Ocean basin, including studies of the sediments left by the tsunami. Most of these focused on the sedimentological attributes of the sediments (Hori et al., 2007; Moore et al., 2006; Paris et al., 2007; Szczuciński et al., 2006), which may help in the search for older tsunami sediments and, as a consequence, in assessing the tsunami hazard for a specific coast. Several studies also focused on the chemical composition of the tsunami sediment and its potential environmental impact (Boszke et al., 2006; Chandrasekharan et al., 2008; Chaudhary et al., 2006; Ranjan et al., 2008; Srinivasalu et al., 2008; Szczuciński et al., 2005; UNEP, 2005). Among the elements and compounds studied were salts, bioavailable (acid-leachable) fractions of heavy metals, metalloids, and various fractions of mercury. Most of the work focused on the coastal zone of Thailand, where changes in tsunami sediment appeared. Studies were undertaken at the same locations shortly after the tsunami (Boszke et al., 2006; Szczuciński et al., 2005) and after the rainy season (Szczuciński et al., 2007). In Thailand, about 20 300 ha of land was covered by seawater during the 2004 tsunami (UNEP, 2005), and most of this area was also covered by sandy sediments (Hori et al., 2007; Rossetto et al., 2007; Szczuciński et al., 2006; Umitsu et al., 2007).
The sediment covered the former soil with a layer up to 50 cm thick (Szczuciński et al., 2006) containing a high amount of salts (Na⁺, K⁺, Ca²⁺, Mg²⁺, Cl⁻, SO₄²⁻) in the water-soluble fraction, and heavy metals (Cd, Cu, Pb, Zn) and As in an assumed bioavailable, acid-leachable fraction (Szczuciński et al., 2005). Tsunami sediments also contained more Hg (in the organomercury fraction) than analysed reference soil samples (Boszke et al., 2006). After the rainy season (with more than 3300 mm of rainfall), the tsunami sediment in Thailand still contained elevated amounts of heavy metals and As in the bioavailable fraction, although the salts had largely been washed out of the sediments (Szczuciński et al., 2007). Simple inter-element correlation analysis of the data on tsunami sediment enrichment in salts, heavy metals and metalloids showed that, shortly after the tsunami, Na⁺, K⁺, Ca²⁺, Mg²⁺, Cl⁻ and SO₄²⁻ were correlated with each other and with some heavy metals, particularly Pb, Zn and Cu (Szczuciński et al., 2005). The correlations between sampling locations were also significant (p < 0.05) (Szczuciński et al., 2005). These results point to one major factor regulating the chemical composition of tsunami sediment: the constituents are probably of common marine origin. They also indicate the possible forms (compounds) in which the studied elements existed in the tsunami sediment. One year later (after the rainy season), the inter-element correlations were even stronger, but the inter-site correlations appeared weaker, suggesting the influence of secondary, site-specific alterations (Szczuciński et al., 2007). The case study presented here explores ways of solving the problem of inter-element and inter-site relations in tsunami sediments by application of the SOM technique. The primary aims were to investigate the dominant compounds related to primary processes (deposition by tsunami) and secondary processes (site-specific
effects of the rainy season), and to classify the tsunami sediments on the basis of their chemical composition, using two datasets presenting analytical results for the same locations shortly after the tsunami and after the rainy season. In the present case study, literature-derived datasets (Boszke et al., 2006; Szczuciński et al., 2005, 2006) have been used, based on the analysis of samples (consecutively designated T1–T15) collected from an area flooded by the 26 December 2004 tsunami. The samples were collected during two field campaigns, conducted in January–February 2005 (less than 50 days after the tsunami) and in February 2006. The 2006 survey was conducted after the rainy season, which brought a total rainfall of over 3300 mm. Samples were collected from selected locations on Phuket Island (around Pattong Bay) and along the coastline between Khao Lak and Kho Khao Island, on the western coast of Thailand (Figure 5.15). During the campaign carried out in 2006 it was impossible to return to the former sampling locations (from 2005), as the sediment layer had been removed by cleaning operations. Because of this, in 2006 the samples were collected from nearby areas covered with similar tsunami sediment and at approximately the same distance from the shoreline. In particular cases, the distance to the 2005 sampling location (according to GPS coordinates) varied between 6 and 304 m. Detailed coordinates of the sampling locations, including distance from the shoreline and calculated distance to the sampling sites of the 2005 survey, as well as sediment type, are presented in Table 5.8.
Details of the analytical procedures (preparation of reagents and vessels prior to analysis, extraction procedure and determination of salts in the water-soluble fraction, heavy metals and metalloids in the acid-leachable fraction, and mercury in five fractions) have been published elsewhere (Boszke and Astel, 2007; Boszke et al., 2007, 2008; Niedzielski, 2006; Szczuciński et al., 2005, 2007). Descriptive statistics (mean, median, minimum, maximum and standard deviation), based on the contents of salts (K⁺, Na⁺, Ca²⁺, Mg²⁺, Cl⁻ and SO₄²⁻) leached with deionised water, heavy metals (Cd, Cr, Cu, Ni, Pb and Zn) in the acid-leachable fraction, metalloids (As, Sb and Se) in the exchangeable fraction, and sequential extraction of Hg in the tsunami sediment, are given in Table 5.9. In the sediments collected in 2006, total Hg and fractionated Hg were not determined. Comparison of the data from 2005 and 2006 reveals major changes in the content of water-soluble salts. Generally, their concentrations in 2006 had decreased by half or more compared with 2005. The observed decline in salt content is attributed to dissolution by freshwater during the rainy season. Considering the mean values of particular heavy metals, a few general trends can be observed: after the rainy season, samples were enriched in Cu, Ni and Pb, and depleted in Cd, Cr and Zn. Similar trends were detected in the metalloids: samples were enriched in As, whereas they were depleted in Sb and Se. However, the range of changes was relatively small, and it must be acknowledged that, after the rainy season, most concentration values of heavy metals and As were still higher than those determined in reference soil samples from the area not affected by the tsunami (Szczuciński et al., 2007).
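The inter-element correlation screen discussed earlier (salts correlating with each other and with Pb, Zn and Cu) amounts to a correlation matrix over the 15 samples. A sketch on synthetic concentrations driven by a shared "marine" factor; all numbers are hypothetical placeholders, not the published data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 15  # one row per sampling site (T1-T15)

# Hypothetical concentrations: salts and Pb share a common marine factor.
marine = rng.random(n)
na = 6500 * marine + rng.normal(0, 100, n)
cl = 13000 * marine + rng.normal(0, 300, n)
pb = 16 * marine + rng.normal(0, 1, n)
zn = rng.random(n) * 23          # unrelated to the marine factor

R = np.corrcoef(np.vstack([na, cl, pb, zn]))
print(np.round(R, 2))  # Na-Cl and Na-Pb strongly correlated; Zn weak
```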
Figure 5.15: Study area and sampling site locations: 1, tsunami inundation area; 2, urban area; 3, sampling sites; 4, tsunami wave direction. [Maps of the Andaman Sea coast of Thailand (Phang Nga Province and Phuket Island), with insets showing sites T1–T2 and T3–T8 around Patong Bay, T9–T13 at Nam Khem, T14–T15 at Bang Mor, and T16 at Thung Tuk on Kho Khao Island.]
Source: Astel, A., Boszke, L., Niedzielski, P. and Kozak, L. (2008b). Application of the self-organizing mapping in exploration of the environmental impact of a tsunami disaster. Journal of Environmental Science & Health, Part A. 43(9): 1016–1025. Reprinted by permission of Taylor & Francis Ltd.
The SOM algorithm was applied in two separate runs to datasets containing 315 and 240 results for analyte concentrations determined at 15 locations in the 2005 and 2006 surveys, respectively (called hereafter the "2005-run" and the "2006-run"). The Kohonen map was chosen as a rectangular grid with a default number of 20
Table 5.8: Basic data on samples.

Sample | Location    | Latitude     | Longitude     | Distance from shoreline (m) | Thickness of tsunami sediments, 2005 (cm) | Distance to 2005 sampling location, by GPS (m) | Sediment type 2005 | Sediment type 2006
T1     | Pattong Bay | 7° 53.088′ N | 98° 16.443′ E | 75   | 2  | 13  | Coarse silt      | Medium sand
T2     | Pattong Bay | 7° 53.014′ N | 98° 16.435′ E | 315  | 1  | 17  | Very coarse silt | Very coarse silt
T3     | Pattong     | 7° 52.924′ N | 98° 17.309′ E | 430  | 5  | 108 | Fine sand        | Medium sand
T4     | Pattong     | 7° 52.910′ N | 98° 17.320′ E | 480  | 2  | 24  | Fine sand        | Medium sand
T5     | Pattong     | 7° 52.887′ N | 98° 17.328′ E | 520  | 2  | 11  | Fine sand        | Medium sand
T6     | Pattong     | 7° 52.864′ N | 98° 17.349′ E | 545  | 1  | 41  | Fine sand        | Medium sand
T7     | Pattong     | 7° 52.953′ N | 98° 17.328′ E | 390  | 2  | 96  | Medium sand      | Coarse sand
T8     | Pattong     | 7° 52.938′ N | 98° 17.336′ E | 410  | 20 | 13  | Fine sand        | Fine sand
T9     | Nam Khem    | 8° 51.470′ N | 98° 15.930′ E | 60   | 20 | 19  | Very fine sand   | Very fine sand
T10    | Nam Khem    | 8° 51.417′ N | 98° 15.953′ E | 100  | 15 | 325 | Very coarse silt | Coarse silt
T11    | Nam Khem    | 8° 51.405′ N | 98° 16.310′ E | 570  | 18 | 40  | Very fine sand   | Very coarse silt
T12    | Nam Khem    | 8° 51.553′ N | 98° 15.940′ E | 50   | 2  | 304 | Medium sand      | Very fine sand
T13    | Nam Khem    | 8° 51.618′ N | 98° 16.527′ E | 1100 | 5  | 6   | Very fine sand   | Very fine sand
T14    | Bang Mor    | 8° 49.973′ N | 98° 16.128′ E | 300  | 11 | 72  | Very fine sand   | Very fine sand
T15    | Bang Mor    | 8° 49.907′ N | 98° 16.271′ E | 590  | 14 | 172 | Very fine sand   | Very coarse silt

Source: Astel, A., Boszke, L., Niedzielski, P. and Kozak, L. (2008b). Application of the self-organizing mapping in exploration of the environmental impact of a tsunami disaster. Journal of Environmental Science & Health, Part A. 43(9): 1016–1025. Reprinted by permission of Taylor & Francis Ltd.
Table 5.9: Descriptive statistics for the determination of salts (mg kg⁻¹), heavy metals (mg kg⁻¹), metalloids (µg kg⁻¹) and mercury (µg kg⁻¹) in Thailand tsunami sediments.

2005 survey:

Variable  | N  | Mean   | Median | Min   | Max     | SD
K⁺        | 15 | 307    | 323    | 78    | 666     | 205
Na⁺       | 15 | 6 548  | 7 208  | 1 210 | 16 920  | 4 893
Ca²⁺      | 15 | 7 079  | 2 430  | 1 430 | 67 500  | 16 786
Mg²⁺      | 15 | 1 619  | 608    | 121   | 10 600  | 2 605
Cl⁻       | 15 | 13 013 | 13 000 | 2 200 | 33 000  | 9 379
SO₄²⁻     | 14 | 11 306 | 2 800  | 101   | 118 000 | 29 730
Cd        | 15 | 7.5    | 1.2    | 0.6   | 11.2    | 2.9
Cr        | 15 | 4.8    | 4.1    | 0.1   | 13.1    | 4.1
Cu        | 15 | 3.9    | 2.4    | 1.4   | 11.2    | 2.8
Ni        | 15 | 1.3    | 0.1    | 0.1   | 10.4    | 2.6
Pb        | 15 | 15.9   | 16.1   | 0.1   | 36.5    | 7.6
Zn        | 15 | 23.3   | 11.5   | 5.8   | 131.0   | 32.5
As        | 15 | 664    | 415    | 49    | 1 775   | 486
Sb        | 15 | 267    | 200    | 145   | 1 230   | 267
Se        | 15 | 47     | 30     | 15    | 205     | 47
Hg (F1)   | 15 | 20.9   | 15.1   | 5.6   | 92.0    | 20.8
Hg (F2)   | 15 | 1.2    | 0.7    | 0.1   | 6.1     | 1.6
Hg (F3)   | 15 | 0.9    | 0.9    | 0.3   | 1.7     | 0.4
Hg (F4)   | 15 | 11.8   | 5.0    | 0.8   | 45.9    | 14.5
Hg (F5)   | 15 | 84.7   | 72.1   | 4.7   | 176.5   | 45.3
Hg (Bulk) | 15 | 115.4  | 97.0   | 20.0  | 233.0   | 58.1

2006 survey:

Variable  | N  | Mean  | Median | Min | Max    | SD
K⁺        | 15 | 151   | 71     | 23  | 1 300  | 320
Na⁺       | 15 | 2 320 | 147    | 58  | 26 000 | 6 618
Ca²⁺      | 15 | 1 185 | 793    | 18  | 6 800  | 1 649
Mg²⁺      | 15 | 264   | 94     | 19  | 2 100  | 519
Cl⁻       | 15 | 3 468 | 900    | 200 | 34 000 | 8 665
SO₄²⁻     | 15 | 2 962 | 500    | 50  | 35 000 | 8 903
Cd        | 15 | 2.0   | 0.9    | 0.1 | 13.3   | 3.2
Cr        | 15 | 3.8   | 3.0    | 0.9 | 8.6    | 2.3
Cu        | 15 | 4.5   | 2.9    | 0.9 | 14.0   | 4.5
Ni        | 15 | 1.6   | 1.1    | 0.7 | 3.7    | 1.0
Pb        | 15 | 21.6  | 18.0   | 8.6 | 62.0   | 14.8
Zn        | 15 | 14.5  | 7.4    | 3.6 | 45.0   | 13.7
As        | 15 | 1 241 | 747    | 332 | 6 446  | 1 584
Sb        | 15 | 195   | 158    | 65  | 671    | 156
Se        | 15 | 17    | 5      | 3   | 70     | 21
Hg (F1–F5, Bulk): not determined

Source: Astel, A., Boszke, L., Niedzielski, P. and Kozak, L. (2008b). Application of the self-organizing mapping in exploration of the environmental impact of a tsunami disaster. Journal of Environmental Science & Health, Part A. 43(9): 1016–1025. Reprinted by permission of Taylor & Francis Ltd.
hexagons for the 2005-run and 21 for the 2006-run. In both cases, the default number of hexagons was estimated following the heuristic n = 5√15 ≈ 19.4, but the final dimensionality (4 × 5 for the 2005-run and 3 × 7 for the 2006-run) differs slightly owing to the difference in the size of the raw datasets. For both runs, the quantisation error (QE) and topographic error (TE) were calculated: 2005-run, QE = 2.293, TE = 0.0; 2006-run, QE = 1.629, TE = 0.0. In Figure 5.16, the U-matrix and each chemical component plane for the 2005-run are presented and, in Figure 5.17, the variable groupings from the Thailand case study are shown. An assessment of inter-element relations, based on the visualisation presented in Figures 5.16 and 5.17, leads to the following observations. In general, a high content of the alkali and alkaline earth metal cations Ca²⁺ (>4950 mg kg⁻¹), Na⁺ (>2560 mg kg⁻¹), K⁺ (>704 mg kg⁻¹) and Mg²⁺ (>2080 mg kg⁻¹) is linked to a high content of Cl⁻ (>4570 mg kg⁻¹), SO₄²⁻ (>6290 mg kg⁻¹) and Pb (>25.8 mg kg⁻¹). (The numbers in parentheses represent the mean values of species' abundance, calculated through
Figure 5.16: Visualisation of relationships between chemical variables determined in tsunami sediments in Thailand during the 2005 sampling survey. [The figure shows the U-matrix and a grey-scale component plane for each variable: K+, Na+, Ca2+, Mg2+, Cl−, SO42−, Cd, Cr, Cu, Ni, Pb, Zn, As, Sb, Se, Hg(F1)–Hg(F5) and Hg(Bulk).]
Source: Astel, A., Boszke, L., Niedzielski, P. and Kozak, L. (2008b). Application of the self-organizing mapping in exploration of the environmental impact of a tsunami disaster. Journal of Environmental Science & Health, Part A. 43(9): 1016–1025. Reprinted by permission of Taylor & Francis Ltd.
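The U-matrix in Figure 5.16 is, in essence, the mean distance between each hexagon's codebook (weight) vector and those of its lattice neighbours; bright cells mark borders between clusters. A small illustrative sketch (assuming a row-major rectangular grid index, i.e. ignoring the two extra neighbours of a true hexagonal lattice):

```python
import math

def u_matrix(codebook, rows, cols):
    """Mean distance from each SOM unit's weight vector to those of its
    grid neighbours; large (bright) values mark borders between clusters.
    codebook: list of rows*cols weight vectors in row-major order."""
    def neighbour_weights(r, c):
        # 4-neighbourhood on the grid index; a true hexagonal lattice
        # would add two diagonal neighbours per unit.
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                yield codebook[rr * cols + cc]
    values = []
    for r in range(rows):
        for c in range(cols):
            w = codebook[r * cols + c]
            dists = [math.dist(w, nb) for nb in neighbour_weights(r, c)]
            values.append(sum(dists) / len(dists))
    return values

# A 1 x 4 map whose left and right halves hold different profiles:
# the border shows up as raised U-matrix values in the middle.
print(u_matrix([(0.0,), (0.0,), (10.0,), (10.0,)], 1, 4))  # [0.0, 5.0, 5.0, 0.0]
```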
Figure 5.17: SOM classification of chemical variables from the 2005 survey.
Cr
As
Hg(F1)
Hg(F2)
Hg(Bulk)
Ni
Hg(F4)
Hg(F5)
Cd
Sb
Zn
Mg2+
Hg(F3)
Cl
Na
K
Pb
SO42
Se
Ca2
Cu
Source: Astel, A., Boszke, L., Niedzielski, P. and Kozak, L. (2008b). Application of the self-organizing mapping in exploration of the environmental impact of a tsunami disaster. Journal of Environmental Science & Health, Part A. 43(9): 1016–1025. Reprinted by permission of Taylor & Francis Ltd.
the SOM learning process.) A high content of Zn (>46.9 mg kg−1) is linked with a high content of Ni (>0.761 mg kg−1), As (>802 µg kg−1) and Sb (>186 µg kg−1). For most of the heavy metals, a high content is linked with a high content of Hg(Bulk) (>145 µg kg−1) and of Hg in the acid-soluble fraction (Hg(F3) >0.967 µg kg−1), bound to humic matter (Hg(F4) >16 µg kg−1) or bound to sulfides (Hg(F5) >109 µg kg−1), while a high content of Cr (>5.13 mg kg−1) is linked with a low content of organomercury species (<15.5 µg kg−1) and a high content of water-soluble mercury (>1.02 µg kg−1). The mutual location of the component planes presented in Figure 5.17 suggests that the entire set of variables could be conditionally divided into two relatively homogeneous formations. According to the theoretical description of SOM, variables grouped within one formation are positively correlated, and the correlation coefficient decreases as the distance between them increases. One formation could be composed of Sb, Mg2+, Cl−, Na+, K+, Pb, SO42−, Ca2+, Hg(F3), Se and Cu. The other indicates lower homogeneity, and within it a few additional subgroups, chosen subjectively, could be isolated: (i) Cr, Hg(F1), Hg(F2); (ii) Ni, Hg(F4); (iii) Hg(Bulk), Hg(F5) and Zn. A subjective choice means that no statistical testing was applied at this stage. As a result, variables were arranged according to suggested sources as follows:
• salt patterned: Sb, Mg2+, Cl−, Na+, K+, Pb, Ca2+, SO42−, Se, Cu and Hg(F3);
• wastewater patterned: various forms of mercury (Hg(F1), Hg(F2), Hg(F4), Hg(F5), Hg(Bulk)) plus the heavy metals Ni, Zn and Cr.
Visual assessment of the U-matrix and chemical component planes (Figure 5.16) enables both a general and a more detailed assessment of similarity in the sampling space. The upper half of the U-matrix plane represents samples with a relatively low content of salt-patterned components in the form of Cl−, Na+, K+, Mg2+, Pb, SO42− and Ca2+, whereas samples related to the upper left of the U-matrix contain simultaneously decreased amounts of Hg(F3), Se and Cu. This suggests that samples related to the upper half of the U-matrix plane were comparable owing to the impact of salt-patterned analytes. The bottom half of the U-matrix plane represents samples with a highly increased content of Cl−, Na+, K+, Mg2+, Pb, SO42− and Ca2+. The existence of a single bright hexagon located in the bottom-right part of the U-matrix, connected with a single dark hexagon located in an analogous part of the component planes, indicates that, for most of the analytes (Sb, Mg2+, Cl−, Na+, K+, Pb, Ca2+ and Cu), but excluding Zn, extra-large contents were observed in a few samples. This is why Zn does not link with SO42−, Ca2+, Cu, Pb, K+ or the other parameters in Figure 5.17. By contrast, a single dark hexagon located in the bottom-left part of the component planes suggests that, for a few samples, the variability of Hg(Bulk), Hg(F5), Hg(F4), As, Ni, Cd and Zn is similar. Both single hexagons can be associated with "site-specific" tsunami sediment chemical profiles. In general, analysis of the U-matrix and the grey-scale pattern of the component planes suggests the existence of at least four different chemical profiles of samples collected in Thailand. The identification of the correct number of clusters according to the sampling location, and thus an assessment of the impact of the tsunami wave, was
based on the k-means classification technique offered by SOM. Different values of k (the predefined number of clusters) were tried, and the sum of squares for each run was calculated. Finally, the best classification was found for four clusters (consecutively named CI–CIV), with the lowest Davies–Bouldin index value, as presented in Figure 5.18. As mentioned above, for the 2005-run the dimensionality of the Kohonen map was 4 × 5. Because of a high similarity in chemical profiles, in some cases more than one sample from the initial dataset was included in a particular hexagon. A few hexagons did not include any sample, and can be recognised as a border between samples highly differentiated according to chemical profile. Cases included in each hexagon were grouped in agreement with cluster borders. The SOM-supported classification was then confronted with the real data from the initial dataset. Setting the mean values of the analyte concentrations against the classification results allows a relatively reasonable interpretation of the clustering pattern connected with the tsunami sediment chemical profile (Figure 5.19). Cluster I (CI) includes two cases
Figure 5.18: Plot of classification with the lowest Davies–Bouldin index. Both the grey-scale hexagons in each SOM unit and the digits represent the number of samples belonging to particular clusters. [The figure plots the Davies–Bouldin index value (about 0.65–1.15) against the number of clusters (1–4), together with the SOM lattice partitioned into clusters I–IV.]
Source: Astel, A., Boszke, L., Niedzielski, P. and Kozak, L. (2008b). Application of the self-organizing mapping in exploration of the environmental impact of a tsunami disaster. Journal of Environmental Science & Health, Part A. 43(9): 1016–1025. Reprinted by permission of Taylor & Francis Ltd.
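The Davies–Bouldin index used here penalises cluster pairs whose within-cluster scatter is large relative to the distance between their centroids; the number of clusters giving the lowest value wins. A compact sketch (a hypothetical helper, not the SOM Toolbox implementation):

```python
import math

def davies_bouldin(clusters):
    """Davies-Bouldin index for a partition given as a list of point lists.
    Lower values indicate compact, well-separated clusters."""
    centroids = [tuple(sum(x) / len(pts) for x in zip(*pts)) for pts in clusters]
    # mean distance of each cluster's points to its centroid (scatter)
    scatter = [sum(math.dist(p, c) for p in pts) / len(pts)
               for pts, c in zip(clusters, centroids)]
    k = len(clusters)
    # for each cluster, the worst (largest) similarity ratio to any other cluster
    worst = [max((scatter[i] + scatter[j]) / math.dist(centroids[i], centroids[j])
                 for j in range(k) if j != i)
             for i in range(k)]
    return sum(worst) / k
```

Trying several candidate partitions and keeping the one with the smallest index reproduces the selection rule described above.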
Figure 5.19: Mean values of bulk mercury (µg kg−1), particular fractions of Hg (µg kg−1), salt components (mg kg−1), heavy metals (mg kg−1) and metalloids (µg kg−1) for clustering patterns connected with the tsunami sediment chemical profile (2005 survey). [Four bar-chart panels compare clusters I–IV: Hg(Bulk) and fractions Hg(F1)–Hg(F5) (up to about 250 µg kg−1); heavy metals Cd, Cr, Cu, Ni, Pb and Zn (up to about 100 mg kg−1); metalloids As, Sb and Se (up to about 1400 µg kg−1); and salt components K+, Na+, Ca2+, Mg2+, Cl− and SO42− (up to about 140 000 mg kg−1).]
Source: Astel, A., Boszke, L., Niedzielski, P. and Kozak, L. (2008b). Application of the self-organizing mapping in exploration of the environmental impact of a tsunami disaster. Journal of Environmental Science & Health, Part A. 43(9): 1016–1025. Reprinted by permission of Taylor & Francis Ltd.
(T2, T13), cluster II (CII) one case (T1), cluster III (CIII) seven cases (T4–T10), and cluster IV (CIV) five cases (T3, T11, T12, T14, T15). The SOM-based classification provided a comprehensive method to identify the variables responsible for the clustering pattern, and explanation trials based on chemical reasoning indicated possible relations between the chemical properties of the tsunami sediment and the location of the sampling points. The highly diversified numbers of locations classified under the respective clusters (CI = 2; CII = 1; CIII = 7; CIV = 5) confirm that clusters containing numerous sampling locations reflect the general chemical composition of the tsunami sediment, and thus the general impact of the tsunami wave, whereas clusters containing isolated sampling locations reflect "site-specific" conditions. Site-specific conditions appear to be crucial in explaining the chemical composition of the sediments for samples T1, T2 and T13. The Na+ and Cl− content distinguished CII from CI, CIII and CIV, and Pb and Zn distinguished CI and CII from the others. The single sample T1, collected in Pattong Bay, belonged to CII, and was characterised by the highest concentrations of Na+ (67 500 mg kg−1), Cl− (118 000 mg kg−1), Pb and As (1230 µg kg−1) compared with the other clusters. For T1, the concentrations of Pb (46.3 mg kg−1) and Zn (49.1 mg kg−1) were similar. The significantly higher content of Na+ and Cl− in sediment samples taken at T1 could be explained by the tsunami wave's ability to introduce salts into the surface and groundwaters as well as into the soil layer. In the case of T1 and T2 (collected at the same location, Pattong Bay), large differences in Na+ and Cl− concentrations might be related both to the varying distance from the shoreline (T1, 75 m; T2, 315 m) and to the thickness of the sediments (T1, 2 cm; T2, 1 cm).
A further differentiation between the two locations with respect to Na+ and Cl− contents was that sample T1 was taken from a depression, which was filled with seawater for an extended period (Szczuciński et al., 2005). The location was also a reason for the tremendous increase of Pb, Zn and Hg content in both T1 and T2. Both samples were collected at a location affected by wastewaters from anthropogenic activity, which was probably an additional reason for the outstanding concentrations of Pb and Zn. Additionally, sediment samples grouped in CI (T2, T13) indicated, on average, lower concentrations of lead (28.0 mg kg−1) than in CII, but a twofold higher concentration of Zn (89.0 mg kg−1), and the highest concentration of Hg (mainly in the form of Hg bound to sulfides), 199 µg kg−1. In spite of a very similar location or similar conditions, large variability occurred between closely neighbouring locations. This indicated that small-scale local conditions could be crucial for the characteristics of the tsunami sediment chemical profile. In general, concerning heavy metals, sediments belonging to CI and CII showed higher concentrations of Pb and Zn than sediments grouped in CIII and CIV. This could be an effect of a higher content of clay fraction compared with sediments collected at the latter locations (Szczuciński et al., 2005). Except for T1, T2 and T13, there were no significant spatial changes in the concentration of salt-patterned components, Cd, Cr and Zn, or for the majority of Hg species. Statistically significant differences appeared only for Ni and Hg(F4). The difference (Kolmogorov–Smirnov test, p < 0.05) in Ni concentration between CIII and CIV was due to the concentration of Ni in tsunami sediment at Pattong (CIII), which was below 0.1 mg kg−1, whereas at Nam Khem and Bang Mor it varied between 1.1 and
1.4 mg kg−1. In the case of Hg species, the mean concentration of mercury bound to humic matter, Hg(F4), for the set of samples classified as CIII was equal to 3.4 µg kg−1, whereas for the set of samples classified as CIV it was equal to 20.6 µg kg−1 (Kolmogorov–Smirnov test, p < 0.01). Because only two variables were involved in the distinction between CIII and CIV, the concentration differences might be caused by local factors such as sediment thickness. Five out of six samples (T3, T4, T6, T7 and T8) were collected at Pattong from an area characterised by a very thin tsunami sediment layer, 1–5 cm (samples in the form of a fine sand), while both samples collected at Bang Mor (T14 and T15) and one sample collected at Nam Khem (T11), grouped in CIV, originated from an area where the thickness of the sediments varied between 10 and 20 cm and, moreover, the distance from the shoreline was greater than 300 m. To assess statistically the nature of the inter-element relations between the variables involved in the diversification between CIII and CIV, Spearman's correlation coefficient matrix was calculated separately for CIII and CIV. For CIII, strong positive correlations between salt components were obtained (n = 7, Rcrit = 0.75): K+–Ca2+, 0.82; K+–Mg2+, 0.93; K+–SO42−, 0.96; Na+–Ca2+, 0.82; Na+–Mg2+, 0.93; Na+–SO42−, 0.96; Ca2+–Cl−, 0.82; Ca2+–SO42−, 0.86; Mg2+–Cl−, 0.93; Mg2+–SO42−, 0.86; Cl−–SO42−, 0.96. For CIV, the correlations were (n = 5, Rcrit = 0.90): K+–Na+, 0.90; Na+–Ca2+, 0.90; Na+–Cl−, 0.97; Na+–SO42−, 0.90. The correlation between Na+, K+, Mg2+ and Cl− suggests that these ions were probably present in the sediment in the form of halite (NaCl), sylvine (KCl) and carnallite (KMgCl3·6H2O) – the most common minerals resulting from ocean-water evaporation. A high correlation between Ca2+ and SO42− suggested that gypsum (CaSO4·2H2O) or anhydrite (CaSO4) might also be present in the sediments.
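The Spearman coefficients quoted above are plain rank correlations; with n = 7 (CIII) only values exceeding Rcrit = 0.75 count as significant. A minimal sketch of the computation, assuming no tied values (a tie-aware version would average ranks first):

```python
def spearman(x, y):
    """Spearman's rank correlation via 1 - 6*sum(d^2) / (n*(n^2 - 1)).
    Assumes no tied values, which holds for small field datasets like n = 7."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A perfectly monotone pair of variables gives 1.0 (or −1.0 when one decreases as the other increases), which is the behaviour the component-plane groupings in Figure 5.17 visualise qualitatively.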
The correlation coefficients listed above were in agreement with the correlation matrix presented by Szczuciński et al. (2005) for all analysed elements and sampling locations, which confirmed that the term "salt pattern" introduced earlier was adequate. For CIII, a strong positive correlation was also obtained for Cd–Pb (0.84), whereas for CIV the Cu–Pb value was 0.97. Surprisingly, the correlation of Pb with all the salts was relatively high, which might indicate a common source in the tsunami sediment. This suggests that Pb might be present in the tsunami sediment in the form of sparingly soluble minerals such as anglesite (PbSO4), pyromorphite (Pb5(PO4)3Cl) and phosgenite (Pb2CO3Cl2). For both clusters, statistically significant correlations between various Hg species and other analysed elements appeared. For CIII, the relations could be summarised as follows: Hg(Bulk)–Hg(F5), 0.96; Hg(F2)–Sb, 0.83; and Hg(F4)–As, 0.82. For CIV: Hg(Bulk)–Hg(F5), 0.90; Hg(F1)–K+, 0.90; Hg(F1)–Ca2+, 0.90; Hg(F1)–Se, 0.90; Hg(F2)–Cr, 0.90; Hg(F4)–As, 0.90; and Hg(F5)–Cr, 0.90. For the tsunami sediment collected at Nam Khem and Bang Mor (CIV), organomercury species were negatively correlated with major ions in the water-soluble fraction, and positively correlated with soluble Se. In both cases, water-soluble mercury was positively correlated with soluble heavy metals (Sb and Cr). For both CIII and CIV, mercury bound to humic matter was correlated negatively with As. In the next step, to assess possible changes in the chemical profile of the tsunami sediment caused by the rainy season, the SOM classification was repeated based on data
from the 2006 survey (2006-run). In Figure 5.20, the U-matrix with a complete set of chemical component planes for the 2006-run is presented and, in Figure 5.21, a visualisation of inter-variable relations for the same run is shown. Similar to the procedure described above, classification based on the lowest Davies–Bouldin index was evaluated, and a variant with two clusters was chosen. The SOM-supported analysis for the 2006-run indicates that a high concentration of salt components (Ca2+, Mg2+, Na+, K+, Cl− and SO42−) is linked to a high concentration of the metalloids Se and Sb (a strong, directly proportional relationship, 0.75 < R < 1.00). Cr, Cu, Pb, Zn and Cd demonstrate a similar concentration pattern (Figure 5.20). By contrast, Ni shows an inversely proportional relationship with the other heavy metals. The identified inter-element relations are in agreement with the correlation coefficient matrix presented by Szczuciński et al. (2007). Similar to the results obtained for the 2005-run, the mutual linkage between variables suggests the existence of a salt-patterned set of variables (K+, Cl−, Sb, Mg2+, Na+, SO42−, Ca2+
Figure 5.20: Visualisation of relationships between chemical variables determined in tsunami sediments in Thailand during the 2006 sampling survey. [The figure shows the U-matrix and a grey-scale component plane for each variable: K+, Na+, Ca2+, Mg2+, Cl−, SO42−, Cd, Cr, Cu, Ni, Pb, Zn, As, Sb and Se.]
Source: Astel, A., Boszke, L., Niedzielski, P. and Kozak, L. (2008b). Application of the self-organizing mapping in exploration of the environmental impact of a tsunami disaster. Journal of Environmental Science & Health, Part A. 43(9): 1016–1025. Reprinted by permission of Taylor & Francis Ltd.
Figure 5.21: SOM classification of chemical variables from the 2006 survey. SP: salt patterned; WP: wastewater patterned. [The map groups As, Cd, Ni, Cu, Zn, Cr and Pb in the WP region, and Ca2+, Se, Mg2+, Na+, SO42−, K+, Cl− and Sb in the SP region.]
Source: Astel, A., Boszke, L., Niedzielski, P. and Kozak, L. (2008b). Application of the self-organizing mapping in exploration of the environmental impact of a tsunami disaster. Journal of Environmental Science & Health, Part A. 43(9): 1016–1025. Reprinted by permission of Taylor & Francis Ltd.
and Se, marked "SP") and a wastewater-patterned set (As, Cd, Ni, Cu, Cr, Zn and Pb, marked "WP") (Figure 5.21). The classification obtained in agreement with the procedure described above indicates that, for the 2006-run, the suitable number of clusters is two. Setting the real mean values of the analyte concentrations against the classification results allows a simple interpretation of the clustering pattern connected with the tsunami sediment chemical profile (Figure 5.22). Cluster I (CI) includes 14 cases, whereas cluster II includes only one sample, labelled T2 (Pattong Bay). The two clusters are distinguished by Na+, Ca2+, Cl−, SO42−, Pb and Zn variability. As emerges from the clustering pattern, sample T2 can be understood as an "outlier" in terms of the pollution level, characterised by a high concentration of salt components, as well as of Pb and Zn. The effect of the rainy season was neutralised in this case. The relatively high content of both salt components and heavy metals, especially Pb and Zn, was caused by a combination of the shape of the sampling area and an anthropogenic pollution effect. As mentioned above, sample T2 was collected from a depression, which was mostly filled with seawater (Szczuciński et al., 2005) and affected by wastewaters from anthropogenic activity, which might be the reason for the unchanged outstanding concentrations of Pb and Zn. It was difficult to find a general trend of temporal changes based only on mean values when comparing the two sampling campaigns, shortly after the tsunami and 1 year later (after the rainy
Figure 5.22: Mean values of salt components (mg kg−1), heavy metals (mg kg−1) and metalloids (µg kg−1) for the clustering pattern connected with the tsunami sediment chemical profile (2006-run). [Three bar-chart panels compare T1–T15 without T2 against T2: salt components K+, Na+, Ca2+, Mg2+, Cl− and SO42− (up to about 40 000 mg kg−1); heavy metals Cd, Cr, Cu, Ni, Pb and Zn (up to about 70 mg kg−1); and metalloids As, Sb and Se (up to about 1400 µg kg−1).]
Source: Astel, A., Boszke, L., Niedzielski, P. and Kozak, L. (2008b). Application of the self-organizing mapping in exploration of the environmental impact of a tsunami disaster. Journal of Environmental Science & Health, Part A. 43(9): 1016–1025. Reprinted by permission of Taylor & Francis Ltd.
season). The SOM clustering identified T2 as relatively more contaminated than the other locations as far as Cr and Cu were concerned. The maximum concentration values of Cr (8.6 mg kg−1) and Cu (12.0 mg kg−1) were determined in the tsunami sediment collected at T2. The remaining sampling locations (T1–T15 without T2) were clustered in one group, which suggests unification of the tsunami sediment chemical profile, and indicates a cleaning effect of the rainy season (Szczuciński et al., 2007). As is evident from this case study, the SOM efficiently simplified the interpretation of the environmental and geochemical impacts of the tsunami, and helped to explain the role of the tsunami sediment as a possible carrier of pollution. The SOM provided outstanding visual information for identifying the relationships between the variables describing the chemical composition of the tsunami sediment, and allowed an investigation of the dominant compounds related to the origin of the sediment. The SOM simultaneously showed inter-element relations in the dataset and allowed the detection of natural clusters in the data according to the spatial location of the sampling sites. Based on the clustering pattern, high-pollution areas were identified. A comparison of the clustering results obtained in the two runs indicated the existence of temporal changes in the tsunami sediment chemical profile, and proved the cleaning effect of the rainy season in Thailand.
5.8 CONCLUSIONS
Recent global climate changes and increased levels of pollution of many aquatic and soil-related environmental compartments have led to the search for new strategies in the assessment of the risk from environmental pollution. In addition, the concept of sustainable development has placed on the social agenda the problem of considering the simultaneous effects of many factors on the ecological, economical, social and technological aspects of sustainability. The need to apply new scales and metrics for risk assessment and eco-efficiency assessment is obvious, as environmental problems must be treated in a multivariate way. Multidimensional statistical modelling of water- and soil-related environmental compartments for pollution is only one of the issues that fall within the area of interest of scientists, engineers, environmentalists and politicians. Although the environmental compartments investigated are in many cases affected by background levels of pollutants, their quality assessment is an important part of every environmental protection strategy. In this chapter, the aim was to show that the quality assessment of aquatic and soil-related environmental compartments possesses an extraordinarily useful tool – the methods of chemometrics and environmetrics. If one decides to use multivariate statistical methods for the classification, projection, modelling and interpretation of aquatic or soil monitoring data in order to gain specific information (often hidden, and not available from the raw data), the pathway should follow these important steps:
• Prove the data quality by means of metrological criteria (uncertainty, limits of detection of the monitoring methods, precision, reliability of signal, etc.).
• Check the data distribution (often non-normal) by statistical tests.
• Normalise the data in order to avoid problems with non-normal distribution or data dimensionality.
• Classify the monitoring data by various environmetric approaches, such as CA, PCA and neural net classification (Kohonen SOM as an option for classification without training procedures).
• Cautiously interpret the classification results, including finding reasons for similarity groupings.
• Model the data in order to identify latent factors responsible for the data structure and the factor contribution to the formation of the total concentration of each of the water or soil quality parameters.
• Determine seasonal patterns.
• Compare the models with real monitoring data.
Figure 5.23: Schematic representation of the recommended methodology for an exploratory analysis. [The flowchart leads from the dataset (objects described by chemical, physicochemical, physical, biological features, etc.) to SOM (SOM Toolbox 2.0 for Matlab 6.0), with the optimum map topology chosen on the basis of QE and TE errors, and in parallel to traditional multivariate chemometric techniques (STATISTICA 8.0): PCA or N-way PCA and hierarchical cluster analysis (HCA). The outputs – U-matrix and component planes, the K-means classification algorithm with the Davies–Bouldin index value, factor scores and/or factor loadings plots (with a rotation strategy applied if desired) and dendrograms (e.g. squared Euclidean distance, Ward's method) – are compared for efficiency and visualisation ability, supporting assessment of feature similarity and abundance and exploration of the variability of samples according to various criteria.]
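As an illustration of the normalisation step listed above, column-wise autoscaling (z-scoring) is the most common choice before CA, PCA or SOM; the helper name below is hypothetical:

```python
from statistics import mean, stdev

def autoscale(columns):
    """Column-wise autoscaling (z-scores): each monitoring variable is shifted
    and scaled to mean 0 and unit variance, so that variables measured in
    mg/kg and ug/kg contribute comparably to distance-based methods."""
    return {name: [(v - mean(vals)) / stdev(vals) for v in vals]
            for name, vals in columns.items()}

# e.g. a single 'Pb' column of four sediment samples (illustrative values)
scaled = autoscale({"Pb": [25.8, 46.3, 28.0, 30.1]})
```

Heavily skewed variables are often log-transformed before autoscaling; without some such scaling, a variable in the tens of thousands (Cl−) would dominate one in the units (Cd).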
Based on the presented case studies, an important conclusion can be drawn: in order to obtain reliable results, very often more than one environmetric technique should be applied. For example, a comparison of SOM with CA and PCA showed that some results obtained by SOM are, in general, similar to those obtained by CA and PCA. Nevertheless, the SOM provided a more detailed classification, which was often more useful for subsequent decisions or management. This is why the procedure presented in Figure 5.23 is highly recommended in the data exploration step. The greatest advantage of the SOM method has turned out to be its powerful visualisation tools, which clearly outperform the representations that can be obtained by CA or PCA for classifying samples while integrating multiple variables. Following the models and recommendations presented above, several important conclusions concerning the case studies involved can be drawn:
• The multivariate statistical strategies enabled the attainment of reliable information about the environmental system under consideration, and proved useful for testing the implementation of the sustainable development rule.
• The combined strategy of SOM and multiway PCA is very suitable for handling an environmental dataset describing variations of several chemical and biological quality parameters sampled at various time intervals and at several sampling sites.
• Visualisation of the monitoring results by SOM makes it possible to classify different feature patterns for specific sampling sites during extended monitoring periods. Such patterns remain undetected by other data projection options.
REFERENCES
Adamiec, E. and Helios-Rybicka, E. (2002). Distribution of pollutants in the Odra River system. Part V: Assessment of total and mobile heavy metals content in the suspended matter and sediments of the Odra River system and recommendations for river chemical monitoring. Polish Journal of Environmental Studies. 11: 675–688.
Allen, D.M. (1971). The prediction sum of squares as a criterion for selecting predictor variables. Department of Statistics, University of Kentucky Technical Report, 23: 32.
Alloway, B.J. and Ayres, D.C. (1999). Chemical Principles of Environmental Pollution (in Polish). PWN, Warsaw.
Alp, M. and Cigizoglu, H.K. (2007). Suspended sediment load simulation by two artificial neural network methods using hydrometeorological data. Environmental Modelling & Software. 22: 2–13.
Alvarez-Guerra, M., Gonzáles-Pinuela, C., Andrés, A., Galán, B. and Viguri, J.R. (2008). Assessment of self-organizing map artificial neural networks for the classification of sediment quality. Environment International. 34: 782–790.
Andersson, C. and Bro, R. (1998). Improving the speed of multi-way algorithms. Part I: Tucker3. Chemometrics & Intelligent Laboratory Systems. 42: 93–103.
Arsuaga, U.A. and Díaz, M.F. (2006). Topology preservation in SOM. Proceedings of World Academy of Science, Engineering and Technology. 16: 187–190.
Aruga, R. (2004). The problem of responses less than the reporting limit in unsupervised pattern recognition. Talanta. 62(5): 871–878.
Astel, A. and Małek, S. (2008). Multivariate modeling and classification of environmental n-way data from bulk precipitation quality control. Journal of Chemometrics. 22(11–12): 738–746.
Astel, A. and Simeonov, V. (2008). Environmetrics as a tool for lake pollution assessment. In: Miranda, F.R. and Bernard, L.M. (Eds) Lake Pollution Research Progress. NOVA Publishers, Hauppauge, NY, pp. 13–61.
Astel, A., Głosińska, G., Sobczyński, T., Boszke, L., Simeonov, V. and Siepak, J. (2006). Chemometrics in assessment of sustainable development rule implementation. Central European Journal of Chemistry. 4(3): 543–564.
Astel, A., Tsakovski, S., Barbieri, P. and Simeonov, V. (2007). Comparison of self-organizing maps classification approach with cluster and principal components analysis for large environmental data sets. Water Research. 41: 4566–4578.
Astel, A., Tsakovski, S., Simeonov, V., Reisenhofer, E., Piselli, S. and Barbieri, P. (2008a). Multivariate classification and modeling in surface water pollution estimation. Analytical & Bioanalytical Chemistry. 390(5): 1283–1292.
Astel, A., Boszke, L., Niedzielski, P. and Kozak, L. (2008b). Application of the self-organizing mapping in exploration of the environmental impact of a tsunami disaster. Journal of Environmental Science & Health, Part A. 43(9): 1016–1025.
Banat, K.M. and Howari, F.M. (2003). Pollution load of Pb, Zn and Cd and mineralogy of the recent sediments of Jordan River/Jordan. Environment International. 28(7): 581–586.
Barbieri, P., Adami, G. and Reisenhofer, E. (1998). Multivariate analysis of chemical-physical parameters to characterize and discriminate karstic waters. Annali di Chimica (Journal of Analytical & Environmental Chemistry). 88: 381–391.
Barbieri, P., Andersson, C.A., Massart, D.L., Predonzani, S., Adami, G. and Reisenhofer, E. (1999a). Modeling bio-geochemical interactions in the surface waters of the Gulf of Trieste by three-way principal component analysis (PCA). Analytica Chimica Acta. 398: 227–235.
Barbieri, P., Adami, G. and Reisenhofer, E. (1999b). Searching for a 3-way model of spatial and seasonal variations in the chemical composition of karstic freshwaters. Annali di Chimica (Journal of Analytical & Environmental Chemistry). 89: 639–648.
Barbieri, P., Adami, G., Piselli, P., Gemiti, F. and Reisenhofer, E. (2002). A three-way principal factor analysis for assessing the time variability of freshwaters related to a municipal water supply. Chemometrics & Intelligent Laboratory Systems. 62: 89–100.
Bojakowska, I. (2001). Criteria of assessment of water reservoirs sediment pollution (in Polish). Geological Review. 49(3): 213–218.
Borovec, Z. (1996). Evaluation of the concentration of trace elements in stream sediments by factor and cluster analysis and the sequential extraction procedure. Science of the Total Environment. 177: 237–250.
Boszke, L. and Astel, A. (2007). Fractionation of mercury in sediments from coastal zone inundated by tsunami and in freshwater sediments from rivers. Journal of Environmental Science & Health, Part A. 42: 847–858.
Boszke, L., Kowalski, A. and Siepak, J. (2004). Grain size partitioning of mercury in sediments of the middle Odra river (Germany/Poland). Water, Air and Soil Pollution. 159: 125–138.
Boszke, L., Kowalski, A., Szczuciński, W., Rachlewicz, G., Lorenc, S. and Siepak, J. (2006). Assessment of mercury availability by fractionation method in sediments from coastal zone inundated by the 26 December 2004 tsunami in Thailand. Environmental Geology. 51: 527–536.
Boszke, L., Kowalski, A. and Siepak, J. (2007). Fractionation of mercury in sediments of the Warta River (Poland). In: Pawłowski, L., Dudzinska, M. and Pawłowski, A. (Eds) Environmental Engineering. Taylor & Francis, London, pp. 403–413.
Boszke, L., Kowalski, A., Astel, A., Barański, A., Gworek, B. and Siepak, J. (2008). Mercury mobility and bioavailability in soil from contaminated area. Environmental Geology. 55(5): 1075–1087.
Brereton, R. (2003). Chemometrics: Data Analysis for the Laboratory and Chemical Plant. John Wiley & Sons, Chichester.
Bro, R., Andersson, C.A. and Kiers, H.A.L. (1999). PARAFAC2. Part II: Modeling chromatographic data with retention time shifts. Journal of Chemometrics. 13: 295–309.
Bro, R., Kjeldahl, K., Smilde, A.K. and Kiers, H.A.L. (2008). Cross-validation of component models: a critical look at current methods. Analytical & Bioanalytical Chemistry. 390(5): 1241–1251.
Browne, M.W. (2000). Cross-validation methods. Journal of Mathematical Psychology. 44(1): 108–132.
Bruno, P., Caselli, M., de Gennaro, G. and Traini, A. (2001). Source apportionment of gaseous atmospheric pollutants by means of an absolute principal component scores (APCS) receptor model. Fresenius Journal of Analytical Chemistry. 371(8): 1119–1123.
Cattell, R.B. (1966). The scree test for the number of factors. Multivariate Behavioral Research. 1: 629–637.
Céréghino, R., Giraudel, J.L. and Compin, A. (2001). Spatial analysis of stream invertebrates distribution in the Adour-Garonne drainage basin (France) using Kohonen self organizing maps. Ecological Modelling. 146: 167–180.
Chandrasekharan, H., Sarangi, A., Nagarajan, M., Singh, V.P., Rao, D.U.M., Stalin, P., Natarajan, K., Chandrasekaran, B. and Anbazhagan, S. (2008). Variability of soil-water quality due to Tsunami-2004 in the coastal belt of Nagapattinam district, Tamilnadu. Journal of Environmental Management. 89(1): 63–72.
Chaudhary, D.R., Gosh, A. and Patolia, J.S. (2006). Characterization of soils in the tsunami-affected coastal areas of Tamil Nadu for agronomic rehabilitation. Current Science. 91: 99–104.
Choiński, A. (1981). The Variability of Water Circulation on the Lubuska Upland in the Light of Analytics of the Environment and Balance Calculations. Polish Society of Friends of Earth Sciences, Zielona Góra.
Czarnik-Matusewicz, B. and Pilorz, S. (2006). Study of the temperature-dependent near-infrared spectra of water by two-dimensional correlation spectroscopy and principal components analysis. Vibrational Spectroscopy. 40(2): 235–245.
Davies, D.L. and Bouldin, D.W. (1979). A cluster separation measure. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1(2): 224–227.
Diana, G. and Tommasi, C. (2002). Cross-validation methods in principal component analysis: a comparison. Statistical Methods & Applications. 11(1): 71–82.
Eaton, A. and Clesceri, L. (1995). Standard Methods for the Examination of Water and Wastewater. American Public Health Association/Water Environment Federation/American Water Works Association, Washington, DC.
Ehsani, A.H. and Quiel, F. (2008). Application of self organizing map and SRTM data to characterize yardangs in the Lut desert, Iran. Remote Sensing of Environment. 112(7): 3284–3294.
Einax, J.W., Zwanziger, K.H. and Geiß, S. (1997). Chemometrics in Environmental Analysis. VCH, Weinheim, Germany.
Einax, J.W., Truckenbrodt, D. and Kampe, O. (1998). River pollution data interpreted by means of chemometric methods. Microchemical Journal. 58: 315–324.
210
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
El-Shaarawi, A.H. and Esterby, S.R. (1992). Replacement of censored observations by a constant: an evaluation. Water Research. 26: 835–844. Felipe-Sotelo, M., Andrade, J.M., Carlosena, A. and Tauler, R. (2007). Temporal characterisation of river waters in urban and semi-urban areas using physico-chemical parameters and chemometric methods. Analytica Chimica Acta. 587: 128–137. Filgueiras, A.V., Lavilla, I. and Bendicho, C. (2004). Evaluation of distribution, mobility and binding behaviour of heavy metals in surficial sediments of Louro River (Galicia, Spain) using chemometrics analysis: a case study. Science of the Total Environment. 330: 115–129. Giussani, B., Monticelli, D., Gambillara, R., Pozzi, A. and Dossi, C. (2008). Three-way principal component analysis of chemical data from Lake Como watershed. Microchemical Journal. 88: 160–166. Głosin´ska, G., Sobczyn´ski, T., Boszke, L., Bierła, K. and Siepak, J. (2005). Fractionation of some heavy metals in bottom sediments from the middle Odra River (Germany/Poland). Polish Journal of Environmental Studies. 14(3): 305–317. Gouzinis, A., Kosmidis, N., Vayenas, D.V. and Lyberatos, G. (1998). Removal of Mn and simultaneous removal of NH3 , Fe and Mn from potable water using a trickling filter. Water Research. 32(8): 2442–2450. Guo, H., Wang, T. and Louie, P.K.K. (2004a). Source apportionment of ambient non-methane hydrocarbons in Hong Kong: application of a principal component analysis/absolute principal component scores (PCA/APCS) receptor model. Environmental Pollution. 129: 489–498. Guo, H., Wang, T., Simpson, I.J., Blake, D.R., Yu, X.M., Kwok, Y.H. and Li, Y.S. (2004b). Source contributions to ambient VOCs and CO at a rural site in eastern China. Atmospheric Environment. 38: 4551–4560. Harshman, A. and de Sarbo, W.S. (1984). An application of PARAFAC to a small sample problem, demonstrating preprocessing, orthogonality constraints, and split-half diagnostic techniques. In Law, H.G., McDonald, R.P., Snyder, C.W. 
and Hattie, J.A. (Eds) Research Methods for Multimode Data Analysis. Praeger, New York, pp. 602—643. Helsel, D. (2005). Nondetects and Data Analysis: Statistics for Censored Environmental Data. John Wiley & Sons, Hoboken, NJ. Henrion, R. (1994). N-way principal component analysis: theory, algorithms and applications. Chemometrics & Intelligent Laboratory Systems. 25: 1–23. Henrion, R. and Anderson, C. (1999). A new criterion for simple-structure transformations of core arrays in N-way principal components analysis. Chemometrics & Intelligent Laboratory Systems. 47(2): 189–204. Henry, R.C., Lewis, C.W., Hopke, P.K. and Williamson, H.J. (1984). Review of receptor model fundamentals. Atmospheric Environment. 18: 1507–1515. Hopke, P.K. (1985). Receptor Modeling in Environmental Chemistry. John Wiley & Sons, New York. Hori, K., Kuzumoto, R., Hirouchi, D., Umitsu, M., Janjirawuttikul, N. and Patanakanog, B. (2007). Horizontal and vertical variation of 2004 Indian tsunami deposits: an example of two transects along the western coast of Thailand. Marine Geology. 239(3–4): 163–172. Humphreys, L.G. and Montanelli, R.G. (1975). An examination of the parallel analysis criterion for determining the number of common factors. Multivariate Behavioral Research. 10: 193–206. Idris, A.M. (2008). Combining multivariate analysis and geochemical approaches for assessing heavy metal level in sediments from Sudanese harbors along the Red Sea coast. Microchemical Journal. 90(2): 159–163. Jolliffe, I.T. (2002). Principal Component Analysis, 2nd edn. Springer Series in Statistics, Springer, New York. Kaiser, H.F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement. 20: 141–151. Kalteh, A.M., Hjorth, P. and Berndtsson, R. (2008). Review of the self-organizing map (SOM) approach in water resources: analysis, modeling and application. Environmental Modelling & Software. 23: 835–845. Kohonen, T. (1982). 
Self-organized formation of topologically feature map. Biological Cybernetics. 43: 59–69.
MULTIVARIATE STATISTICAL MODELLING
211
Kohonen, T. (1984). Self-Organization and Associative Memory. Springer-Verlag, Berlin. Kohonen, T. (1989). Self-Organization and Associative Memory, 3rd edn. Springer, Berlin. Kohonen, T. (2001). Self-Organizing Maps, 3rd edn. Springer, Berlin. Kouras, A., Katsoyiannis, I. and Voutsa D. (2007) Distribution of arsenic in groundwater in the area of Chalkidiki, Northern Greece. Journal of Hazardous Materials. 147(3): 890–899. Kuang-Chung, Y., Li-Jyur, T., Shih-Hsiung, Ch. and Shien-Tsong, H. (2001). Correlation analyses of binding behavior of heavy metals with sediment matrices. Water Research. 35(10): 2417–2428. Lacassie, J.B., Roser, B., Del Solar, J.R. and Herve´, F. (2004). Discovering geochemical patterns using self-organizing neural networks: a new perspective for sedimentary provenance analysis. Sedimentary Geology. 165: 175–191. Lee, B.H. and Scholz, M. (2006). Application of the self-organizing maps (SOM) to assess the heavy metal removal performance in experimental constructed wetlands. Water Research. 40(18): 3367–3374. Lee, S., Moon, J.W. and Moon, H.S. (2003). Heavy metals in the bed and suspended sediments of Anyang River, Korea: implications for water quality. Environmental Geochemistry and Health. 25: 433–452. Lek, S. and Gue´gan, J.F. (1999). Artificial neural networks as a tool in ecological modeling: an introduction. Ecological Modelling. 120: 65–73. Loska, K. and Wiechuła, D. (2003). Application of principal component analysis for the estimation of source of heavy metal contamination in surface sediments from the Rybnik Reservoir. Chemosphere. 51(7): 723–733. Louwerse, D.J., Smilde, A.K. and Kiers, H.A.L. (1999) Cross-validation of multiway component models. Journal of Chemometrics. 13: 491–510. MacDonald, D. (1994). Approach to the Assessment of Sediment Quality in Florida Coastal Waters. Vol. 1: Development and Evaluation of Sediment Quality Assessment Guidelines. 
Florida Department of Environmental Protection, Office of Water Policy, Tallahassee, FL. Marques, V.S., Sial, A.N., de Albuquerque, M.E. and Ferreira, V.P. (2008). Principal component analysis (PCA) and mineral associations of litoraneous facies of continental shelf carbonates from northeastern Brazil. Continental Shelf Research. 28(20): 2709–2717. Massart, D.L. and Kaufman, L. (1983). The Interpretation of Analytical Chemical Data by the Use of Cluster Analysis. Wiley, New York. Meyer, A.K. (Ed.) (2002). IOP – International Odra Project. Results of International Odra Project, Hamburg, Germany. Miller, S.L., Anderson, M.J., Daly, E.P. and Milford, J.B. (2002). Source apportionment of exposures to volatile organic compounds. I. Evaluation of receptor models using simulated exposure data. Atmospheric Environment. 36: 3629–3641. Moore, A., Nishimura, Y., Gelfenbaum, G., Kamataki, T. and Triyono, R. (2006). Sedimentary deposits of the 26 December 2004 tsunami on the northwest coast of Aceh, Indonesia. Earth Planets and Space. 58: 253–258. Moshou, D., Bravo, C., Oberti, R., West, J., Bodria, L., McCarney, A. and Ramon, H. (2005) Plant disease detection based on data fusion of hyper-spectral and multi-spectral fluorescence imaging using Kohonen maps. Real-Time Imaging. 11(2): 75–83. Mosier, C. (1951). Problems and designs of cross-validation. Educational and Psychological Measurement. 11: 5–11. Mukherjee, A. (1997). Self-organizing neural network for identification of natural modes. Journal of Computing in Civil Engineering. 11: 74–7. Niedzielski, P. (2006). Microtrace metalloids speciation in lake water samples (Poland). Environmental Monitoring & Assessment. 118: 231–246. Pardo, R., Vega, M., Deba´n, L., Cazurro, C. and Carretero C. (2008) Modelling of chemical fractionation patterns of metals in soils by two-way and three-way principal component analysis. Analytica Chimica Acta. 606(1): 26–36. Paris, R., Lavigne, F., Wassmer, P. and Sartohadi, J. (2007). 
Costal sedimentation associated with the December 26, 2004 tsunami in Lhok Nga, west Banda Aceh (Sumatra, Indonesia). Marine Geology. 238: 93–106.
212
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
Park, S.S. and Kim, Y.J. (2005). Source contributions to fine particulate matter in an urban atmosphere. Chemosphere. 5(2): 217–226. Park, Y.S., Ce´re´ghino, R., Compin, A. and Lek, S. (2003). Application of artificial neural networks for patterning and predicting aquatic insect species richness in running waters. Ecological Modelling. 160: 265–280. Pearson, K. (1901). On lines and planes of closest fit to systems of points in space. Philosophical Magazine. 2(6): 559–572. Peeters, L., Bac¸a˜o, F., Lobo, V. and Dassargues, A. (2006). Exploratory data analysis and clustering of multivariate spatial hydrogeological data by means of GEO3DSOM, a variant of Kohonen’s Self-Organizing Map. Hydrology and Earth System Sciences Discussions. 3: 1487–1516. Pekey, H., Karakas¸, D. and Bakog˘lu, M. (2004). Source apportionment of trace metals in surface waters of a polluted stream using multivariate statistical analysis. Marine Pollution Bulletin. 49: 809–818. Platikanov, S., Puig, X., Martı´n, J. and Tauler, R. (2007). Chemometric modeling and prediction of trihalomethane formation in Barcelona’s water works plant. Water Research. 41: 3394–3406. Pravdova, V., Boucon, C., de Jong, S., Walczak, B. and Massart, D.L. (2002). Three-way principal component analysis applied to food analysis: an example. Analytica Chimica Acta. 462: 133– 148. Ranjan, R.K., Ramanathan, AL. and Singh, G. (2008). Evaluation of geochemical impact of tsunami on Pichavaram mangrove ecosystem, southeast coast of India. Environmental Geology. 55(3): 687–697. Recknagel, F. (2003). Ecological Informatics: Understanding Ecology by Biologically Inspired Computation. Springer, Berlin. Reisenhofer, E., Adami, G. and Barbieri, P. (1996). Trace metals used as natural markers for discriminating some karstic freshwaters near Trieste (Italy). Toxicological and Environmental Chemistry. 54: 233–241. Reisenhofer, E., Adami, G. and Barbieri, P. (1998). 
Using chemical and physical parameters to define the quality of karstic freshwaters (Timavo river, North-Eastern Italy): a chemometric approach. Water Research. 32: 1193–1203 Richardson, A.J., Risien, C. and Shillington, F.A. (2003). Using self-organizing maps to identify patterns in satellite imagery. Progress in Oceanography. 59: 223–239. Rios-Arana, J.V., Walsh, E.J. and Gardea-Torresdey, J.L. (2004). Assessment of arsenic and heavy metal concentrations in water and sediments of the Rio Grande at El Paso-Juarez metroplex region. Environment International. 29(7): 957–971. Rossetto, T., Peiris, N., Pomonis, A., Wilkinson, S.M., Del Re, D., Koo, R. and Gallocher, S. (2007). The Indian Ocean tsunami of December 26, 2004: observations in Sri Lanka and Thailand. Natural Hazards. 42: 105–124. Simeonov, V. (2004). Environmetric strategies to classify, interpret and model risk assessment and quality of environmental systems. In: Sikdar, S., Glavic, R. and Jain, R. (Eds) Technological Choices for Sustainability. Springer, Berlin – Heidelberg, pp. 147–164. Simeonov, V., Barbieri, P., Walczak, B., Massart, D.L. and Tsakovski, S. (2001). Environmetric modeling of a pot water data set. Toxicological Environmental Chemistry. 79: 55–72. Simeonov, V., Stratis, J., Samara C., Zachariadis, G., Voutsa, D., Anthemidis, A., Sofoniou, M. and Kouimtzis, T. (2003) Assessment of the surface water quality in northern Greece. Water Research. 37: 4119–4124. Simeonov, V., Wolska, L., Kuczynska, A., Gurwin, J., Tsakovski, S., Protasowicki, M. and Namiesnik, J. (2007). Sediment-quality assessment by intelligent data analysis. Trends in Analytical Chemistry. 26(4): 323–331. Simeonova, P. (2006). Polluting sources apportionment for atmospheric and coastal sediments environment. Ecology & Chemical Engineering. 13(10): 1021–1032. Simeonova, P. (2007). Multivariate statistical assessment of the pollution sources along the stream of Kamchia River, Bulgaria. Ecology & Chemical Engineering. 
14(8): 867–874. Simeonova, P. and Simeonov, V. (2007). Chemometrics to evaluate the quality of water sources for human consumption. Microchimica Acta. 156(3–4): 315–320.
MULTIVARIATE STATISTICAL MODELLING
213
Singh, K.P., Malik, A. and Sinha, S. (2005). Water quality assessment and apportionment of pollution sources of Gomti river (India) using multivariate statistical techniques: a case study. Analytica Chimica Acta. 528: 355–374. Smith, R.M. and Miao, C.Y. (1994). Assessing unidimensionality for Rasch measurement. In: Wilson, M. (Ed.) Objective Measurement: Theory into Practice. Volume 2. Ablex, Greenwich, pp. 316–328. Song, Y., Shaodong, X., Zhang, Y., Zeng, L., Salmon, L.G. and Zheng, M. (2006). Source apportionment of PM2:5 in Benjing using principal component analysis/absolute principal component scores and UNMIX. Science of the Total Environment. 372: 278–286. Srinivasalu, S., Thangadurai, N., Jonathan, M.P., Armstrong-Altrin, J.S., Ayyamperumal, T. and RamMohan, V. (2008). Evaluation of trace-metal enrichments from the 26 December 2004 tsunami sediments along the Southeast coast of India. Environmental Geology. 53(8): 1711–1721. Stanimirova, I. and Simeonov, V. (2005). Modeling of environmental four-way data from air quality control. Chemometrics & Intelligent Laboratory Systems. 77(1–2): 115–121. Stanimirova, I., Tsakovski, S. and Simeonov, V. (1999). Multivariate statistical analysis of coastal sediment data. Fresenius Journal of Analytical Chemistry. 365: 489–493. Stanimirova, I., Daszykowski, M., Massart, D.L., Questier, F., Simeonov, V. and Puxbaum, H. (2005). Chemometrical exploration of the wet precipitation chemistry from the Austrian Monitoring Network (1988–1999). Journal of Environmental Management. 74: 349–363. Stanimirova, I., Zehl, K., Massart, D.L., Van der Heyden, Y. and Eianx, J.W. (2006) Chemometric analysis of soil pollution data using the Tucker N-way method. Analytical & Bioanalytical Chemistry. 385: 771–779. Stanimirova, I., Połowniak, M., Skorek, R., Kita, A., John, E., Buhl, F. and Walczak, B. (2007). Chemometric analysis of the water purification process data. Talanta. 74(1): 153–162. Svete, P., Milacic, R., Pihlar, B. (2001). 
Partitioning of Zn, Pb, Cd in river sediments from a lead and zinc mining area using the BCR three-step sequential extraction procedure. Journal of Environmental Monitoring. 3: 586–590. Szczucin´ski, W., Niedzielski, P., Rachlewicz, G., Sobczyn´ski, T., Zioła, A., Kowalski, A., Lorenc, S. and Siepak, J. (2005). Contamination of tsunami sediments in a coastal zone inundated by the 26 December 2004 tsunami in Thailand. Environmental Geology. 49: 321–331. Szczucin´ski, W., Chaimanee, N., Niedzielski, P., Rachlewicz, G., Saisuttichai, D., Tepsuwan, T., Lorenc, S. and Siepak, J. (2006). Environmental and geological impacts of the 26 December 2004 tsunami in coastal zone of Thailand: overview of short- and long-term effects. Polish Journal of Environmental Studies. 15: 793–810. Szczucin´ski, W., Niedzielski, P., Kozak, L., Frankowski, M., Zioła, A. and Lorenc, S. (2007). Effects of rainy season on mobilization of contaminants from tsunami deposits left in a coastal zone of Thailand by the 26 December 2004 tsunami. Environmental Geology. 53: 253–264. Teng, Z., Huang, J., Fujita, K. and Takizawa, S. (2001). Manganese removal by hollow fiber microfilter. Membrane separation for drinking water. Desalination. 139: 411–418. Terrado, M., Barcelo´, D., Tauler, R. (2006). Identification and distribution of contamination sources in the Ebro river basin by chemometrics modelling coupled to geographical information systems. Talanta. 70(4): 691–704. Tessier, A., Campbell, P.G. and Bisson, M. (1979). Sequential extraction procedure for the speciation of particulate trace metals. Analytical Chemistry. 51: 844–852. Thurston, G.D. and Spengler, J.D. (1985). A quantitative assessment of source contributions to inhalable particulate matter pollution in metropolitan Boston. Atmospheric Environment. 19 (1): 9–25. Tryon, R. C. (1939). Cluster Analysis. McGraw-Hill, New York. Tsakovski, S., Kudlak, B., Simeonov, V., Wolska, L. and Namiesnik, J. (2009). 
Ecotoxicity and chemical sediment data classification by the use of self-organizing maps. Analytica Chimica Acta. 631: 149–152. Tucker, L. (1966). Some mathematical notes on three-mode factor analysis. Psychometrika. 31: 279– 311. Tutu, H., Cukrowska, E.M., Dohnal, V. and Havel, J. (2005). Application of artificial neural networks
214
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
for classification of uranium distribution in the Central Rand goldfield, South Africa. Environmental Modelling & Assessment. 10: 43–152. Ultsch, A. and Siemon, H.P. (1990). Kohonen’s self organizing feature maps for exploratory data analysis. In: Proceedings of International Neural Network Conference (INNC’90). Kluwer Netherlands, Dordrecht, pp. 305–308. Umitsu, M., Tanavud, Ch. and Patanakanog, B. (2007). Effects of landforms on tsunami flow in the plains of Banda Aceh, Indonesia and Nam Khem, Thailand. Marine Geology. 242(1–3): 141– 153. UNEP (2005). After the Tsunami: Rapid Environmental Assessment. United Nations Environment Programme. Vallius, M., Lanki, T., Tittanen, P., Koistinen, K., Russkanen, J. and Pekkanen J. (2003). Source apportionment of urban ambient PM2:5 in two successive measurement campaigns in Helsinki, Finland. Atmospheric Environment. 37: 615–623. Vallius, M., Janssen, N.A.H., Heinrich, J., Hoek, G., Ruuskanen, J., Cyrys, J., Van Grieken, R., de Hartog, J.J., Kteyling, W.G. and Pekkanen, J. (2005). Source and elemental composition of ambient PM2.5 in three European cities. Science of the Total Environment. 337: 147–162. Vandeginste, B., Massart, D.L., Buydens, L., De Long, S., Lewi, P. and Smeyers-Verbeke, J. (1998). Handbook of Chemometrics and Qualimetrics. Elsevier, Amsterdam. Vesanto, J. (1999). SOM-based data visualization methods. Intelligent Data Analysis. 3: 111–126. Vesanto, J. (2002) Data exploration process based on the self-organizing map, PhD thesis (http:// lib.tkk.fi/Diss/2002/isbn9512258978/isbn9512258978.pdf) (accessed 12 December 2008). Vesanto, J. and Alhoniemi, E. (2000). Clustering of the self-organizing map. IEEE Transactions on Neural Networks. 11(3): 586–600. Vesanto, J., Himberg, J., Alhoniemi, E. and Parhankangas, J. (2000). SOM Toolbox for Matlab 5 (http: //www.cis.hut.fi/projects/somtoolbox/) (accessed 15 February 2007). Vos, G., Brekvoort, Y. and Buys, P. (1997). 
Full-scale treatment of filter backwash water in one step to drinking water. Desalination. 113: 283–284. Walczak, B. and Massart, D.L. (2001a). Dealing with missing data. Part 1. Chemometrics & Intelligent Laboratory Systems. 58: 15–17. Walczak, B. and Massart D.L. (2001b). Dealing with missing data. Part 2. Chemometrics & Intelligent Laboratory Systems. 58: 29–42. Walley, W.J., Martin, R.W. and O’Connor, M.A. (2000) Self-organizing maps for classification of river quality from biological and environmental data. In: Denzer, R., Swayne, D.A. Purvis, M., Schimak, G. (Eds) Environmental Software Systems: Environmental Information and Decision Support, IFIP Conference Series. Kluwer Academic, Boston, MA, pp. 27–41. Zhang, H.B., Luo, Y.M., Wong, M.H., Zhao, Q.G. and Zhang, G.L. (2007). Concentrations and possible sources of polychlorinated biphenyls in the soils of Hong Kong. Geoderma. 138(3– 4): 244–251. Zhou, F., Huang, G.H., Guo, H.C., Zhang, W. and Hao, Z.J. (2007a). Spatio-temporal patterns and source apportionment of coastal water pollution in eastern Hong Kong. Water Research. 41(5): 3439–3449. Zhou, F., Guo, H.C. and Liu, L. (2007b). Quantitative identification and source apportionment of anthropogenic heavy metals in marine sediment of Hong Kong. Environmental Geology. 53(2): 295–305. Zuo, Q., Duan, Y.H., Yang, Y., Wang, X.J. and Tao, S. (2007). Source apportionment of polycyclic aromatic hydrocarbons in surface soil in Tianjin, China. Environmental Pollution. 147: 303– 310.
CHAPTER 6

Modelling the Fate of Persistent Organic Pollutants in Aquatic Systems

Tuomo M. Saloranta, Ian J. Allan and Kristoffer Næs
6.1 INTRODUCTION

6.1.1 Persistent organic pollutants in aquatic systems
Many chemicals are released into the environment as a result of their everyday use, their manufacture, their formulation, or as industrial by-products, and from accidental releases or their disposal after use. Many of these contaminants make their way into aquatic systems, and this can present both short- and long-term human and environmental health risks. Persistent organic pollutants (POPs) are generally characterised by toxicity towards humans and biota, and by a low potential for (bio)degradation in the environment. This environmental stability leads to their high persistence and their likelihood of accumulating in certain environmental compartments and in biota (Warren et al., 2003; Yogui and Sericano, 2009). Long-term and widespread anthropogenic emissions of polycyclic aromatic hydrocarbons (PAHs) or polychlorinated biphenyls (PCBs), for example, have resulted in their global dispersion and distribution in air, water, soil and sediments. In some cases, specific point-source emissions of POPs can lead to significant contamination at a more localised scale (Brenner et al., 2002; Persson et al., 2006; Ruus et al., 2006).

POPs in general are characterised by (extremely) low solubility and high hydrophobicity, making them prime candidates for bioaccumulation in organisms and for progression to higher trophic levels. Owing to these same features, POPs tend to bind very strongly to sediment particles and organic matter when released into aquatic systems, and this often leads to accumulation in bottom sediments (Warren et al., 2003). Bottom sediments can thereby act both as a sink (when the source is active) and as a long-term source or reservoir (when the source is stopped) of contaminants to the overlying water or benthic organisms.

Modelling of Pollutants in Complex Environmental Systems, Volume II, edited by Grady Hanrahan. © 2010 ILM Publications, a trading division of International Labmate Limited.
The development of both qualitative and quantitative understanding of the fate and behaviour of POPs in aquatic systems, and of their transfer into and adverse effects towards organisms, is crucial, both to assess the risk they pose to the aquatic environment and to identify and select remedial actions (e.g., local and global emission and source control, or the use of technology to actively reduce environmental burden and risk at a more local level). POPs have been the subject of much attention across Europe through recent regulatory and legislative texts, such as the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulation or the Water Framework Directive (2000/60/EC) currently in force (Lepom et al., 2009).

A large number of physico-chemical processes can affect the distribution and mobility of POPs within and between phases in aquatic systems, and their bioavailability to aquatic organisms. Thus, sorption of hydrophobic contaminants such as POPs to dissolved and particulate organic matter and to sediments is an important factor that will influence both their fate and their bioavailability (Karickhoff et al., 1979; Næs et al., 1998; Warren et al., 2003). Moreover, exceptionally strong sorption to black carbon has been observed in recent years (Cornelissen et al., 2005; Persson et al., 2002). Whereas equilibrium-based models have often been used to describe the distribution of contaminants between particle-bound and water-soluble phases, a large body of evidence (collected through long-term sorption experiments and many field measurements) suggests that the kinetics of desorption are critical in evaluating contaminant bioavailability in sediments and in assessing the potential for their release into the aqueous phase (Cornelissen et al., 1998; Reid et al., 2000).
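The equilibrium-based partitioning referred to above is often expressed through a Karickhoff-type linear relation, Kd = foc × Koc. A minimal Python sketch, using illustrative (not site-specific) parameter values, shows how the truly dissolved fraction falls as sorption strength and particle load increase:

```python
# Equilibrium partitioning of a hydrophobic POP between water and
# suspended particles (Karickhoff-type linear sorption).  All parameter
# values below are illustrative assumptions, not measurements from any
# specific site.

def dissolved_fraction(log_koc, f_oc, tss_kg_per_l):
    """Fraction of the total contaminant that is truly dissolved.

    log_koc      -- log10 organic-carbon/water partition coefficient [L/kg]
    f_oc         -- organic-carbon fraction of the particles [-]
    tss_kg_per_l -- suspended solids concentration [kg/L]
    """
    kd = f_oc * 10 ** log_koc            # particle/water distribution [L/kg]
    return 1.0 / (1.0 + kd * tss_kg_per_l)

# Example: a PCB-like compound (log Koc ~ 6), particles with 5% organic
# carbon, 10 mg/L suspended solids (= 1e-5 kg/L).
fd = dissolved_fraction(log_koc=6.0, f_oc=0.05, tss_kg_per_l=1e-5)
print(f"dissolved fraction: {fd:.2f}")
```

With these example numbers roughly two-thirds of the compound remains truly dissolved; for a more hydrophobic compound (higher log Koc) the dissolved fraction drops sharply, which is one reason why desorption kinetics, rather than equilibrium alone, often control availability.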
The risk posed by the presence of POPs in sediments is therefore related not only to the truly dissolved concentration in pore water but also to their bioaccessibility (Reichenberg and Mayer, 2006). Transport of contaminants across the sediment/water interface can occur through diffusion (owing to concentration gradients between the water and pore water phases) and advection in the dissolved phase, through sediment deposition and resuspension for sediment-associated contaminants, or through the reworking of bottom sediments by benthic organisms (Warren et al., 2003). Bioturbation is likely to affect not only contaminant fluxes to the water column but also the distribution of contaminants within the sediment.

The deep basins of Norwegian fjord systems, for example, combined with a relatively stagnant water column, promote significant sedimentation and high fluxes of particle-bound POPs to bottom sediments. For very hydrophobic contaminants, concentration gradients are likely to exist not only in the sediment but also in the water column, and physical factors, such as mixing, water flows and deep water renewal rates, influence the overall movement and distribution of these contaminants.
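The diffusive component of this sediment–water exchange is commonly approximated with Fick's first law over a thin layer near the interface. The sketch below is a hedged illustration: the Millington–Quirk-style porosity correction is one common choice among several, and all numerical values are assumptions for demonstration, not measurements:

```python
# Diffusive flux of a dissolved contaminant across the sediment/water
# interface, estimated from Fick's first law.  The porosity correction
# (a Millington-Quirk-style tortuosity factor) and all numbers are
# illustrative assumptions.

def diffusive_flux(d_water, porosity, c_pore, c_water, dz):
    """Flux [ng/m2/s]; positive values indicate sediment -> water.

    d_water  -- molecular diffusivity in free water [m2/s]
    porosity -- sediment porosity [-]
    c_pore   -- pore water concentration at depth dz [ng/m3]
    c_water  -- overlying water concentration [ng/m3]
    dz       -- diffusion length scale [m]
    """
    d_eff = d_water * porosity ** (4.0 / 3.0)   # tortuosity-corrected diffusivity
    return d_eff * (c_pore - c_water) / dz

# Example: POP-like diffusivity ~5e-10 m2/s, porosity 0.8, pore water at
# 100 ng/m3 against 1 ng/m3 in the overlying water, over a 1 cm scale.
flux = diffusive_flux(5e-10, 0.8, 100.0, 1.0, 0.01)
```

Because the flux is proportional to the pore water excess over the overlying water, any measure that lowers the freely dissolved pore water concentration (such as sorbent amendment) lowers the diffusive release correspondingly.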
6.1.2 Remediation of contaminated sediments
The connection between sediment and water quality is of particular concern in Europe with the implementation of the Water Framework Directive (2000/60/EC) and more recently with the Marine Strategy Directive (2008/56/EC). More generally, the management of (contaminated) sediments is directly relevant to other European legislative drivers, such as the directives on Habitats (92/43/EEC), Environmental Impact Assessment (2001/42/EC), Environmental Liability (2004/35/EC) and Public Participation (2003/35/EC). This implies without doubt that many factors have to be considered during the development and implementation of remediation strategies for dealing with contaminated sediments. The selection of an appropriate remediation procedure for contaminated sediment will depend on:

• site characteristics (e.g., water depth, currents, tidal range, commercial/recreational use);
• sediment characteristics (e.g., volume of contaminated sediment, type of sediment, potential for sediment resuspension, sedimentation rates, presence of benthic organisms);
• the potential for contamination of nearby pristine sites or sites of specific importance during remediation;
• the type and levels of contamination, contaminant bioavailability, and potential for remobilisation to the water column or for degradation; and
• the potential short- and long-term ecological and chemical impact of the remediation technique selected.
In view of the high number of stakeholders with distinct and possibly conflicting objectives that may be involved in such decision-making processes (these can be the regulators, industries, consultants and contractors, local and national governments, the public and/or environmental and research organisations), considerable importance has to be given to issues such as:

• the environmental benefit of remediation;
• the cost associated with the treatment and/or disposal option chosen;
• the potential environmental and/or financial benefit of contaminated sediment reuse;
• the public understanding and acceptance of various remediation strategies;
• the state of the market, and the availability of treatment and remediation options; and
• the sociological and political impact of the strategy for dealing with contaminated sediments.
Such management, taking into account all aspects, as well as the uncertainties and risks connected with them, is a particularly challenging task.

Two main remediation pathways are generally available. These involve either: (i) collection of the sediment through dredging for ex situ remediation, with a view to reducing contaminant concentrations (and/or bioavailability) to non-harmful levels prior to disposal and confinement of the contaminated sediment; or (ii) in situ treatment, either through capping that isolates the contaminated sediment from the water column, or through monitoring and promoting long-term changes (reduction with/without amendments) in contaminant concentrations and bioavailability with time. Ex situ options for POP-contaminated sediments include physical removal of contaminants by sediment washing/flushing and water vapour/thermal desorption, or the use of selective extractants to remove contaminants from the sediment matrix (Bortone and Palumbo, 2007). Contaminants present in sediment may be immobilised via stabilisation or solidification using cement-based binders. Other strategies may rely on chemical amendments, catabolically active microorganisms for bioremediation, composting, or the use of hyperaccumulator plants to reduce contaminant burden prior to relocation, subaquatic or land disposal (Bortone and Palumbo, 2007). For notably degradation-recalcitrant and hydrophobic POPs, the effectiveness of these options may be limited and, when large volumes of sediment are involved, treatment costs may become prohibitive.

Spreading of contaminants can occur during sediment dredging and relocation procedures; this risk may be reduced or avoided by employing in situ remediation strategies in which the contaminated sediment is left in place. Where low levels of resuspension are expected, and if the source of contamination has ceased, monitored natural recovery can make use of natural sedimentation of cleaner material to reduce contamination of surficial sediments (Magar et al., 2003). In contrast, isolation capping involves actively placing a layer of clean material (generally a 30–50 cm thick layer of sand) on top of contaminated sediment to physically isolate the overlying water column and benthic organisms from this contaminated material (Mohan et al., 2000; Wang et al., 1991). Such a layer can be further engineered to reduce fluxes of contaminants to the overlying water, and can also serve to stabilise contaminated material and prevent resuspension and further spreading of the contamination (Mohan et al., 2000). More recently, the focus has been on the development of active caps with increased sorption efficiency, or the possibility of promoting contaminant degradation in the cap layer (Lowry and Johnson, 2004).
An attractive alternative is the use of thin layer caps, whereby materials with a high sorption capacity for the POPs of interest (e.g., activated carbon) can be sprinkled on top of contaminated sediments. With time, particles are assimilated into the sediment, and the high affinity of POPs for these particles acts to decrease pore water concentrations. This technique generally aims to reduce the fraction of contaminants available for uptake into benthic organisms (Tomaszewski et al., 2007). In the long term, a decrease in pore water concentration may result in a decrease in diffusive fluxes to the overlying water. Capping projects have been conducted at a growing number of freshwater, coastal and estuarine sites in the United States, Hong Kong, Japan, Canada, the Netherlands and the United Kingdom where nutrients, dioxins, dichlorodiphenyltrichloroethane (DDT), PAHs, PCBs, metals and mercury posed serious threats to the ecosystem. In Norway, sediment capping has been gaining momentum recently, and while major isolation capping projects following the dredging and dumping of contaminated sediments from Oslo harbour are under way (Jørgensen et al., 2008), thin layer capping strategies have been proposed for the remediation of sediment contaminated with polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs; hereinafter referred to as dioxins) in the Grenlands fjords (Saloranta et al., 2008).
MODELLING THE FATE OF PERSISTENT ORGANIC POLLUTANTS IN AQUATIC SYSTEMS

6.2 MODELLING THE FATE AND TRANSPORT OF PERSISTENT ORGANIC POLLUTANTS IN AQUATIC SYSTEMS
6.2.1 Conceptual site models
The success of a remedy is clearly dependent on a precise definition of the environmental goals for the remediation, and on a good scientific understanding of the system. These are the foundations required for sound management decision-making. This holistic approach may be facilitated through the development of a conceptual site model (CSM) (Apitz et al., 2005). A CSM is a written or pictorial description that aims to present:

• the extent of the knowledge of contaminant sources (whether direct and/or diffuse);
• the physico-chemical and biological processes that significantly impact on contaminant fate and movement in a specified environmental system; and
• how contaminants can potentially reach relevant environmental receptors (whether these be humans, benthic organisms, physical entities or specific environmental compartments).
Contaminant sources, routes and pathways for exposure, and biological or physical receptors are components of such models. These are identified in order to undertake an exposure assessment, with the ultimate aim of assessing the effect of a range of possible remedial actions. The CSM may, at least initially, comprise simple graphical or written representations of important pathways in the system. However, as the project evolves and matures, the CSM can increase in complexity. The CSM may therefore:

• support the development of a framework for planning and scoping site investigations, taking the environmental goals into account;
• provide a template for hypotheses about the release and fate of contaminants at the site;
• help to identify sources, and how contaminant releases might migrate towards human or environmental receptors;
• ease the development of sampling designs;
• help in evaluating potential remedial options; or
• help to identify site conditions that may lead to unacceptable risks.
A CSM generally serves a dual purpose: not only does it provide a description of all major or relevant exposure pathways, and of the ways in which these pathways can be studied and managed, it is also a very useful tool for improving communication and understanding between scientists, the regulator and stakeholders across the wide range of technical disciplines involved, and through the different phases of the study. The CSM is thus one of the initial planning tools for more quantitative modelling, and for management of the contaminated site, as it enables the organisation of available site information in a clear and transparent way, and facilitates the identification of uncertainties or gaps in knowledge, data and information.
6.2.2 Multimedia models
Mathematical process-based models are increasingly being used in both core and applied science projects, and within environmental management. This is not surprising, because models are able to organise and synthesise scientific knowledge in a way that would not be possible otherwise (Dale, 2003). Models aim to represent the essential features of a system, and can be valuable tools, for example for environmental managers to predict the future state of a particular system. Models can also provide better insight into the behaviour of complex natural systems, and enhance understanding of the system's response to environmental management decisions or other changes. As the popularity of model use increases, new demands for improved transparency, credibility and assessment of uncertainties are placed on models. This is particularly important for cases related to environmental management, because modelling results may be used as input for making decisions, for example on which types of environmental abatement measure are carried out.

Basic features of multimedia models

A well-established genre of models for simulating the transport, fate and bioaccumulation of POPs in aquatic systems is multimedia models (e.g., Mackay, 2001; Wania and Mackay, 1999). Multimedia models have been applied to a wide variety of aquatic, atmospheric and terrestrial sites (e.g., Armitage and Gobas, 2007; Brevik et al., 2004; Meyer and Wania, 2007; Saloranta et al., 2006a, 2008). As the name indicates, multimedia models are designed to simulate the transport and fate of POPs between many different media, such as water, sediment, air, soil and biota. Multimedia models are relatively simple, and often easier and less time-consuming to set up and apply than, for example, the more complicated and detailed three-dimensional grid-based models, which solve the hydrodynamics and particle transport in more detail.
Moreover, the spatio-temporal resolution of multimedia models is often in good agreement with the resolution of field observations, because detailed aquatic data, for example on water current patterns or chemical concentrations in the different media, are often scarce. The spatial resolution in multimedia models is based on the division of the system under simulation into compartments, both horizontally and vertically (Figure 6.1). Within these compartments all system properties (physical, chemical, biological) are assumed to be homogeneous. For example, the vertical division into compartments can follow the water stratification structure, and the horizontal division the sub-basin structure or the spatial distribution and homogeneity of contaminants (e.g., known highly contaminated hotspot areas). Model equations and parameterisation (i.e., setting appropriate values for the coefficients in model equations) describe contaminant transport between the compartments, as well as contaminant sources from, and sinks to, the outside of the system being simulated. All sources, sinks and transports are expressed in the form of rates, in units of mass or pressure per unit time (e.g., ng PCB-52 per day). Based on these rates, the model calculates changes in contaminant concentrations simultaneously in each compartment for each consecutive model time step (see Equations 6.2 and 6.3). Table 6.1 shows examples of processes simulated in typical abiotic and biotic aquatic multimedia model applications.

Figure 6.1: Example of model compartments in a lake multimedia model application, where the lake is divided into 12 compartments: three for surface water (SW) and three for the corresponding shallower sediment area (SS), as well as three for deep water (DW) and three for the corresponding deeper sediment area (DS). X1, X3 and AIR denote boundary conditions outside the simulated system, e.g. inflowing and outflowing river water and the atmosphere, respectively.

To better illustrate the main building blocks of multimedia models, the processes in Table 6.1 are divided into three types of medium (water, sediment and biota) as well as into three types of rate (sources, sinks and transports). Another basic principle of multimedia models for POPs is the partitioning of the chemical between organic matter or carbon (organic matter, lipids, and black carbon) and water/air. Thus the description of organic matter and carbon concentrations and flows plays an important role in these models. The partitioning of the total POP concentration Ctot between the truly water-dissolved phase Cdiss, and the particulate and dissolved organic carbon (POC, DOC)-bound fractions (second and third terms on the right-hand side in the following equation), can be expressed as

Ctot = Cdiss (1 + α·CPOC·KOW + β·CDOC·KOW)    (6.1)

where α and β are empirical coefficients (α = 0.41 in Karickhoff, 1981; β = 0.08 in Burkhard, 2000), and KOW [L L⁻¹] is the octanol–water partitioning coefficient for the particular contaminant (Beyer et al., 2002). An additional, non-linear term for strong carbonaceous geosorbents (GC), such as black carbon, may be needed in Equation 6.1 to adequately simulate the effects of GC on POP partitioning in the system (Armitage et al., 2008; Cornelissen et al., 2005). However, as Armitage et al. (2008) demonstrate, the paucity of the chemical-specific measurement data needed to parameterise the additional GC sorption term remains a limitation on the usefulness of such process-based modelling approaches. As a simpler alternative to represent GC sorption in models, site-specific measured POC-normalised solid–water partitioning coefficients (KOC_obs; e.g. Persson et al., 2002) can be used to replace the α·KOW term in Equation 6.1, assuming that GC particle dynamics follow closely those
Table 6.1: Typical transport, source and sink processes simulated in abiotic (water and sediment) and biotic aquatic multimedia model applications.

Water
  Transports: advective exchange (water flow) between connected compartments; organic particle settling from the water compartment above; exchange with sediment by diffusion, sedimentation and resuspension.
  Sources: atmospheric sources (wet and dry deposition and diffusive exchange); local emissions (e.g. from industry); advection from outside the simulated system.
  Sinks: degradation; advection to outside the simulated system; diffusion to atmosphere(a).

Sediment
  Transports: exchange with water by diffusion, sedimentation and resuspension.
  Sources: (none).
  Sinks: burial to deeper inactive sediment layer; degradation.

Biota
  Transports: dietary uptake (predator eats prey in the food web).
  Sources: uptake from water.
  Sinks: outflow to water; growth dilution; excretion; metabolic degradation.

(a) The atmosphere is here assumed to be a passive boundary compartment outside the system under simulation (see Figure 6.1). POP concentration in the atmosphere is determined by background-level concentration and local emissions.
of POC. It is also worth noting that an equilibrium state is assumed in Equation 6.1, and therefore any details of the time-dependent kinetics of sorption (e.g., Cornelissen et al., 1998) are not resolved. The validity of the equilibrium partitioning assumption in modelling depends on the typical timescale of the processes, as well as on the relative importance of the rapidly, slowly and very slowly (de)sorbing POP fractions – that is, on how rapidly a near-equilibrium partitioning state is achieved.

Contaminant concentrations in multimedia models are often expressed in terms of the contaminant fugacity f, in units of (partial) pressure (Pa) instead of mass-based units. In the fugacity approach (see, e.g., Mackay, 2001), the so-called fugacity capacities, or Z-values (mol m⁻³ Pa⁻¹), quantify the "partitioning capacity" of a phase, and transport values, or D-values (mol Pa⁻¹ d⁻¹), quantify contaminant transport between, and transformation within, the different model compartments. A mass balance equation for a chemical in each compartment can be written in the following form:

dM_i/dt = E_i + T_i^IN − T_i^OUT    (6.2)

where subscript i denotes a particular compartment, and M denotes mass (mol). E, T^IN and T^OUT (mol d⁻¹) denote the sum of sources and the sums of incoming and outgoing transports (i.e. D-values multiplied by the corresponding fugacity), respectively. The resulting system of linear, first-order differential equations for the whole system (all compartments) can also be written more compactly as

df/dt = Kf + S    (6.3)

where f is a vector of fugacities (Pa) in the compartments, and S is a vector of sources (Pa d⁻¹) from outside the simulated system into the compartments. K is the rate coefficient matrix (d⁻¹), constructed using D- and Z-values and the volumes of the compartments, V, in the following way:

K_ij = −(Σ D_i^OUT) / (V_i Z_i)    for i = j
K_ij = D_{j→i} / (V_i Z_i)         for i ≠ j    (6.4)

where i and j are the ith row and jth column of the matrix K, and also denote the particular compartment. D^OUT denotes the D-values responsible for sinks in the particular compartment, and D_{j→i} the D-values responsible for transport from compartment j to i. A source vector originally in units of mol d⁻¹ can be transformed to the units of Pa d⁻¹ required in Equation 6.3 by dividing the compartment-specific values by the corresponding product V_i Z_i. In a steady state, the left-hand side of Equation 6.3 becomes zero (df/dt = 0), and the system simplifies to

f̄ = −K⁻¹ S̄    (6.5)
where the overbars denote steady-state conditions. A dynamic solution is often preferred, however, as achieving a (quasi) steady state may take a long time (as much as decades), depending on the response time of the system. Dynamic solutions allow simulation of changes in concentrations in aquatic systems over time, for example in response to changing emissions or remediation measures. Fugacities in the different media can easily be transformed to concentration units using equations of the form C (mol m⁻³) = Z·f, and to mass transports using equations of the form N (mol d⁻¹) = D·f. The basic principles outlined above are largely the same across the wide variety of multimedia models developed to date. However, there are differences in the way the various rates and Z- and D-values are parameterised, and in the flexibility, user-friendliness and availability of these models. In Section 6.4.2, we demonstrate the applicability and usefulness of a multimedia model, the SF-tool modelling package (Saloranta et al., 2006a, 2008), for simulating the impacts of different sediment remediation alternatives.

Linking abiotic and biotic models

The multimedia modelling approach outlined above can be applied to construct both abiotic and biotic models. In an aquatic food web bioaccumulation model, the compartments are the food web organisms (or organism groups), and the processes being simulated represent incoming and outgoing POP flows between water (including sediment pore water) and the organisms' bodies, as well as dietary uptake and sinks such as excretion, metabolic degradation and dilution due to growth (see Table 6.1). Abiotic and biotic models are often run separately and merely chained together, rather than fully two-way coupled, so that output from the abiotic model is used as input for the biotic model. In such model chaining, it is assumed that POPs in the biotic system have an insignificant influence on those in the abiotic system. As pointed out in Section 6.1.1, the truly dissolved contaminant concentration Cdiss is generally assumed to represent the bioavailable fraction for aquatic organisms. Thus the abiotic–biotic model linking can be made by using the simulated Cdiss from the abiotic model as forcing for the bioaccumulation model. Since a distinct Cdiss is simulated in each of the water and sediment compartments, a weighted mean value of Cdiss must be calculated, with weights set according to the fraction of time that organisms spend in each compartment, in order to obtain the appropriate total abiotic concentration Ca to which organisms are exposed in their habitat. In the bioaccumulation model application, this abiotic forcing due to Ca can be taken into account via the source term S in Equation 6.3 (see, e.g., Saloranta et al., 2006a). The normally linear formulation of multimedia models (Equations 6.3 and 6.5) implies that, if constant rate coefficients in K and a steady-state situation (Equation 6.5) are assumed, any changes in the abiotic forcing Ca are reflected similarly in the biotic concentrations Cb throughout the whole food web. For example, a 20% reduction in Ca would eventually, in a new (quasi) steady state, lead to a 20% reduction in Cb.
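To make the fugacity formalism of Equations 6.3–6.5 concrete, the following sketch builds the rate coefficient matrix K for a minimal two-compartment (water and sediment) system and solves for the steady-state fugacities. All volumes, Z-values, D-values and the source term are invented for illustration only; they are not taken from any real site or from the SF-tool.

```python
import numpy as np

# Minimal two-compartment (water, sediment) fugacity sketch of
# Equations 6.3-6.5.  All numbers are illustrative assumptions.
V = np.array([1.0e7, 2.0e5])    # compartment volumes (m^3)
Z = np.array([1.0e-3, 5.0e-1])  # fugacity capacities (mol m^-3 Pa^-1)

# D-values (mol Pa^-1 d^-1): inter-compartment transports and sinks.
D_w_to_s = 50.0    # water -> sediment (settling + diffusion)
D_s_to_w = 10.0    # sediment -> water (resuspension + diffusion)
D_sink_w = 200.0   # water sinks (advection out, degradation)
D_sink_s = 5.0     # sediment sinks (burial, degradation)

# Rate coefficient matrix K (d^-1), Equation 6.4: the negative diagonal
# collects all outgoing D-values of a compartment; off-diagonal entries
# are the j -> i transports, each scaled by V_i * Z_i.
K = np.array([
    [-(D_w_to_s + D_sink_w) / (V[0] * Z[0]),  D_s_to_w / (V[0] * Z[0])],
    [  D_w_to_s / (V[1] * Z[1]), -(D_s_to_w + D_sink_s) / (V[1] * Z[1])],
])

S = np.array([1.0e-6, 0.0])   # source vector (Pa d^-1): emission to water

f_ss = -np.linalg.solve(K, S)  # steady-state fugacities, Equation 6.5
C_ss = Z * f_ss                # concentrations (mol m^-3), C = Z f
```

With physically sensible (negative-diagonal) K, the steady-state fugacities come out positive, and substituting them back into Equation 6.3 gives df/dt = 0, which is a useful self-check in any implementation.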
Furthermore, Cdiss, which forms the basis for the calculation of Ca, is proportional to the total concentration Ctot via the solid–water partitioning (Equation 6.1). Thus, if we can assume that this partitioning remains unchanged over time, a simulated change in Ctot implies an equal change in Cdiss. These properties of the system can be utilised to increase the robustness of the abiotic–biotic linking, as exemplified later in Section 6.4.2. Namely, while abiotic multimedia models are often successful in reproducing Ctot, they can more easily fail to adequately simulate observed Cdiss levels, owing to the lack of understanding of the complex partitioning phenomena as well as to analytical measurement challenges (see, e.g., Armitage et al., 2008; Persson et al., 2002; Section 6.3.1).
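The Ctot–Cdiss proportionality discussed above follows directly from Equation 6.1. The sketch below computes the dissolved fraction Cdiss/Ctot using the empirical coefficients cited in the text (α = 0.41, Karickhoff, 1981; β = 0.08, Burkhard, 2000); the POC/DOC concentrations and the log KOW value are invented purely for illustration.

```python
# Dissolved fraction Cdiss/Ctot from Equation 6.1.  Organic-carbon
# concentrations are expressed in kg L^-1 so that alpha*K_OW (approx.
# K_OC, in L kg^-1) times C_POC is dimensionless.  Example values only.
def dissolved_fraction(K_ow, C_poc, C_doc, alpha=0.41, beta=0.08):
    """Return Cdiss/Ctot for given POC and DOC levels (kg L^-1)."""
    return 1.0 / (1.0 + alpha * C_poc * K_ow + beta * C_doc * K_ow)

K_ow = 10 ** 6.5   # an assumed hydrophobic POP (log K_OW = 6.5)
C_poc = 1.0e-6     # 1 mg POC per litre, expressed in kg L^-1
C_doc = 5.0e-6     # 5 mg DOC per litre, expressed in kg L^-1

f_diss = dissolved_fraction(K_ow, C_poc, C_doc)
```

As expected from Equation 6.1, the dissolved fraction falls as KOW (or the organic-carbon load) rises, while at KOW → 0 the fraction tends to 1; this monotone behaviour is a quick sanity check for any implementation of the partitioning step.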
6.3 DATA QUALITY AND MODEL RESULTS: DEALING WITH UNCERTAINTIES

6.3.1 Use of field and laboratory data: challenges in measuring POPs in different media
The level of detail of a model and its resolution are not only dependent on, but should in reality mirror, the quantity and quality of data available for parameterisation of the model. The implementation of multimedia models to assess and predict the movement and fate of POPs in aquatic environments often requires a large amount of: (i)
environmental physico-chemical data to characterise the various media, compartments and their interactions; and (ii) measurements of POP concentrations to assess the distribution and homogeneity of contaminants in various matrices, and for use in model calibration and as evaluation data. The measurement of the concentrations of many POPs in environmental media is rendered challenging by the low concentrations frequently encountered in the environment. At the same time, the operationally defined nature of the data generated (the method defines the answer) makes the use of the data for modelling purposes more complex, and this may need to be accounted for in model sensitivity analyses. The high hydrophobicity (octanol–water partition coefficients, log KOW > 5–6) or tendency of many POPs to bind to surfaces and organic matter when released into water makes measurements in aqueous phases particularly difficult. Even under contaminated sediment conditions, water column concentrations of POPs, such as dioxins or polybrominated diphenyl ethers (PBDEs), are not expected to rise significantly above the low pg L⁻¹ range, with concentrations remaining below the limits of detection of standard bottle sampling (Lepom et al., 2009). Considerable improvements in limits of detection can be achieved using large-volume water sampling (Streets et al., 2006). In this way, hundreds of litres of water can be filtered and extracted, either on-site or even in situ, to measure both particulate-bound and filtered contaminant concentrations (Zarnadze and Rodenburg, 2008). Polyurethane foam and XAD resin, for example, are employed to extract and retain the fraction of contaminants dissolved in water. Although it is possible to determine accurately the volume of water extracted, the assessment of contaminant recoveries with commonly used techniques (through spiking of hundreds of litres of water followed by filtration/extraction) is difficult in practice.
However, the resin phase or foam used for extracting dissolved-phase analytes can be spiked with recovery standards prior to filtration/extraction, and losses from the foam or resin can provide an estimate of the efficiency of the extraction. The retention efficiency for DOC- or colloid-bound contaminants passing through these phases remains uncertain. In contrast to mechanically and labour-intensive filtration, the truly dissolved fraction of POPs in water may be measured through the use of passive sampling devices (Vrana et al., 2005). These devices, deployed in situ, are able to accumulate contaminants through diffusion as a result of the difference in chemical activity of the contaminant in the water and in the sampler (initially free of the POP of interest). If samplers are deployed for sufficiently long periods of time, equilibrium between the POP concentration in the sampler and that in water can be reached, and dissolved-phase concentrations can be calculated if sampler–water partition coefficients, KSW, are known (Reichenberg and Mayer, 2006). Integrative sampling (when analyte uptake is in the linear phase of uptake) can be useful when POP concentrations in water fluctuate, or when exposure times are well below the time needed to reach equilibrium. This is particularly relevant for POPs, as linear uptake over periods of weeks to months can be expected for hydrophobic compounds (log KOW > 5–6) with commonly used samplers such as semipermeable membrane devices (Huckins et al., 2006). In situ calibration of uptake rates is commonly achieved using performance reference compounds (PRCs): non-naturally occurring deuterated analogues of the chemicals of interest
that are spiked into the samplers prior to exposure, and which are able to diffuse out of the sampler during exposure (Booij et al., 1998). For both equilibrium and integrative sampling, the accuracy of concentrations is strongly dependent on the quality of the laboratory-measured KSW values (Huckins et al., 2006). Specific to integrative sampling, bias in the calculation of concentrations for POPs with log KOW > 5–6 may result from the extrapolation of uptake rates from the dissipation rates of PRCs with log KOW in the range 3–5. For POPs that can be totally or partially dissociated when present in water (when pH >> pKa, for example), specific methods may be required to measure the different fractions. This is particularly important when the differing properties of these chemical fractions have the potential to significantly influence the overall fate of these chemicals in the environment. As sediment can act as a source of POPs to its surroundings, methods to quantify POP levels and their potential for release from the sediment have been the focus of much work over recent years. Exhaustive extractions using combinations of solvents have generally been applied to determine total POP concentrations in sediments. As these approaches tend not to consider contaminant bioaccessibility issues, and do not provide information on the strength of the sorption of contaminants to the sediment, many alternative mild extraction techniques have been developed, tested and validated in the last decade (Allan et al., 2006). As previously mentioned in this chapter, the often observed non-linearity of contaminant sorption isotherms, and the slow kinetics of contaminant sorption and desorption with increasing sediment–contaminant contact time, need to be accounted for when attempting to understand and model the long-term fate of POPs in the aquatic environment.
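The two passive-sampling regimes described above correspond to two simple back-calculations of the dissolved-water concentration: at equilibrium, Cw = Cs/KSW; in the linear-uptake phase, Cw = M/(Rs·t), where Rs is a sampling rate (here assumed already corrected using PRC dissipation data). The sampler properties and measured values below are invented for illustration.

```python
# Hedged sketch of dissolved-concentration estimates from a passive
# sampler.  All inputs are illustrative assumptions, not recommended
# values for any real sampler.

def c_water_equilibrium(C_sampler, K_sw):
    """Equilibrium regime: C_w = C_s / K_sw (cf. Reichenberg and Mayer, 2006).

    C_sampler in ng per L of sampler phase; K_sw dimensionless (L/L)."""
    return C_sampler / K_sw

def c_water_integrative(mass_absorbed_ng, sampling_rate_L_per_day, days):
    """Linear-uptake regime: C_w = M / (R_s * t) (cf. Huckins et al., 2006)."""
    return mass_absorbed_ng / (sampling_rate_L_per_day * days)

cw_eq = c_water_equilibrium(C_sampler=500.0, K_sw=1.0e5)          # ng L^-1
cw_lin = c_water_integrative(25.0, sampling_rate_L_per_day=5.0,
                             days=28)                              # ng L^-1
```

In practice the choice of regime follows from the deployment time relative to the sampler's time to equilibrium, and the quality of both estimates hinges on the laboratory-measured KSW and PRC-corrected Rs, as noted in the text.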
Shake-flask extraction using mild extractants, such as aqueous solutions of hydroxypropyl-β-cyclodextrin or solvents such as butanol, may be conducted (usually using predefined extraction times) to estimate the size of the rapidly desorbing fraction of contaminants, or the size of the fraction that may be available for biodegradation by microorganisms (Allan et al., 2006). Desorption rates can also be obtained with similar shake-flask extractions, in which contaminated sediment is brought into contact with an adsorbent phase, such as XAD or Tenax resin, that is renewed a number of times (Sormunen et al., 2008). For each interval, the fraction able to desorb from the sediment and sorb to the resin is quantified, and this allows the determination of the rapidly, slowly and very slowly desorbing fractions of contaminant and their associated desorption rates (van Noort et al., 2003). A recent interest involves the use of passive sampling devices to measure POP activity or concentration in sediment pore water, and to estimate the resulting sediment–water partition coefficients (e.g., Witt et al., 2008). Such sediment-specific partitioning data may be required because the effects of additional sorption phases, such as black carbon, can be difficult to predict. While most of these studies have focused on the partitioning of POPs to sediments, such techniques may also be applied to estimate DOC–water partition coefficients (Durjava et al., 2007). Other techniques for the measurement of dissolved contaminant concentrations in pore water rely on conventional centrifugation, aiming to collect pore waters and quantify POPs following pore water manipulation (Hawthorne et al., 2005).
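The rapidly/slowly/very slowly desorbing fractions obtained from sequential Tenax or XAD extractions are commonly summarised with a tri-exponential model of the fraction remaining in the sediment, S(t)/S(0) = ΣF·exp(−k·t). The sketch below evaluates such a model; the fractions and rate constants are invented for illustration and are not fitted to any data set.

```python
import math

# Tri-phasic desorption sketch (cf. van Noort et al., 2003): the fraction
# of contaminant remaining in sediment after extraction time t (hours) is
# the sum of rapidly, slowly and very slowly desorbing pools.  The pool
# sizes and rate constants below are illustrative assumptions only.
def fraction_remaining(t_h, pools=((0.45, 0.5),      # rapid: F, k (h^-1)
                                   (0.35, 0.01),     # slow
                                   (0.20, 0.0003))): # very slow
    """Return S(t)/S(0); the pool fractions are assumed to sum to 1."""
    return sum(F * math.exp(-k * t_h) for F, k in pools)

f_6h = fraction_remaining(6.0)        # early: the rapid pool dominates loss
f_30d = fraction_remaining(30 * 24)   # late: mainly the very slow pool remains
```

Fitting the pool fractions and rate constants to the measured cumulative desorption curve (for example by non-linear least squares) yields exactly the rapidly, slowly and very slowly desorbing fractions referred to in the text.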
The techniques described above generate data that can be utilised for many types of modelling application, including the use of multimedia models. Yet the success of the subsequent modelling step relies not only on the correctness of these data but also on knowledge of the scale of the contamination (variability, spread and homogeneity) and its connection to the resolution of the model (the number of compartments, in the case of multimedia models). Significant uncertainty often resides in knowledge of the heterogeneity of the contamination, and in the resulting representativeness of the monitoring data; this is strongly dependent on the design of the monitoring studies. Another challenge is the often costly analysis for POPs such as dioxins, which can significantly limit the number of samples that can be analysed at reasonable cost, and this in turn can ultimately influence the quality of the output of modelling exercises. The use of proxies can sometimes reduce or minimise the burden of costly analysis. Overall, monitoring efforts should be effectively balanced against the data requirements of the model. In turn, the choice of resolution for the model should accommodate the quality and quantity of data available for calibration and validation of the model. An iterative process is therefore often required, whereby both field/experimental data and model outputs are refined.
6.3.2 Model uncertainty and sensitivity analysis
The inherent natural variability of aquatic systems, and the uncertainties associated with the various measurements, sometimes make it very challenging to obtain knowledge of an acceptable quality with respect to the "true" state of the system. Similarly, uncertainties are also inherently present in models, which are in essence simplified and idealised abstractions of the real systems they attempt to describe. Only a limited selection of the elements and processes of the real system can be included in models. If the purpose of modelling is to use the output results as knowledge input to decision-making processes, such as those connected with planning the most cost-effective sediment remediation measures, modelling uncertainties and relevance become particularly important aspects to assess. Proper model uncertainty assessment is essential to allow environmental managers to judge whether the model results are sufficiently accurate and precise to support decision-making (Arhonditsis et al., 2007; Larssen et al., 2006; Saloranta et al., 2003, 2008). Failure to account for uncertainties in modelling results may result in the provision of misleading information to those in charge of the decision-making process. In model uncertainty analysis, parameter values are estimated and modelling results expressed in terms of probability distributions or confidence intervals, rather than as single deterministic values. Funtowicz and Ravetz (1990) distinguish between three different types of uncertainty: technical, methodological and epistemological. The first type, technical uncertainty, is what a model uncertainty analysis normally quantifies: that is, how uncertainties in model (input) forcings and parameter values are reflected in the modelling (output) results. However, there are also uncertainties connected with the representativeness of the modelling methodology and model equations for the system
they aim to describe. Quantification of this type of methodological uncertainty is more difficult, as it requires the application and comparison of different models and methods on the same site (Burkhard, 1998; Cowan et al., 1994). Moreover, unexpected events and surprises are always possible, and these epistemological uncertainties are connected to currently unknown factors that are beyond current scientific knowledge of the system. Closely related to uncertainty analysis are the different sensitivity analysis techniques that are used to identify the most important model input factors and parameters that govern particular model output variables of interest (e.g., Saltelli et al., 2000). Sensitivity analysis can help to increase insight into the model behaviour, and point out key factors and parameters in the model application. As models are often too complicated for analytical uncertainty or sensitivity analysis to be conducted, numerical methods must be applied. Which method is best suited to a particular model depends, among other things, on the purpose of the uncertainty/sensitivity analysis, the computing capacity available (as these methods often require thousands of model runs), the characteristics of the model, and the quality of the forcing and calibration data. Monte Carlo simulation methods (e.g., Cullen and Frey, 1999) are commonly applied in numerical uncertainty analysis. Monte Carlo simulations require the definition of probability distributions for all factors and model parameters that bear some uncertainty. The model is then run repeatedly, 100–10 000 times or more. Following each round of model execution, a new set of values for the various factors and parameters is sampled randomly (possibly taking into account correlations between parameters) from these probability distributions. 
Specific model outputs are saved for each execution round, and once all simulations are completed, probability distributions reflecting model parameter uncertainties can be constructed from the model output results. A shortcoming of basic Monte Carlo methods is that it may be difficult in practice to estimate probability distributions and correlations accurately for a number of different parameters. This may lead to inconsistencies between uncertainties of model predictions and observed values. For example, certain combinations of the parameters sampled during Monte Carlo simulations may produce unrealistic model results compared with observations, and thus excessively high uncertainty bounds may be estimated for model output. Furthermore, a well-known issue in model parameter estimation (calibration) is that many different combinations of parameters may give similar, equally plausible output results (the equifinality problem; Beven, 2006). One solution to resolve problems of inconsistencies in model calibration and uncertainty analysis is to apply Markov chain Monte Carlo (MCMC) simulation methods. These methods are based on Bayesian inference, and are suitable for (automatic) model calibration and estimation of model parameter uncertainties, and thus uncertainties in model predictions. Additionally, uncertainties related to variability and error in measurement data, discussed in Section 6.3.1, can be taken into account and estimated with MCMC simulation. In other words, both model parameters and field measurement data are considered as uncertain stochastic variables in the MCMC simulation. Previous studies have shown that the application of MCMC
MODELLING THE FATE OF PERSISTENT ORGANIC POLLUTANTS IN AQUATIC SYSTEMS
229
simulation methods in environmental modelling has significant advantages for many types of model application, including multimedia models (Arhonditsis et al., 2007; Kuczera and Parent, 1998; Larssen et al., 2006; Lin et al., 2004; Malve et al., 2007; Saloranta et al., 2008). The general idea of MCMC simulation is based on the iterative selection of new parameter values by a ‘‘random walk’’ search. This random search is, however, steered towards an improved fit between model results and observations. This goodness of fit is expressed in terms of the likelihood of the observations, which determines the probability that the parameter values currently proposed by the random walk are accepted or rejected. Measurement errors and spatio-temporal variability in the observations, not explained by the model, are taken into account when calculating this likelihood. A large number of iterations (i.e. model runs) are executed, and the chain of randomly selected parameter values should, after a while, converge to a ‘‘stable’’ random variation pattern. This converged chain then represents samples of parameter value sets from a joint probability distribution that optimally, in a probabilistic sense, fits the corresponding model results to the observations used in the calibration. In a successful MCMC simulation, all combinations of parameters that give plausible output results are revealed (cf. the equifinality problem; Beven, 2006). Previous knowledge (or lack thereof) of the probability of different parameter values is expressed in so-called ‘‘prior probability distributions’’ before the MCMC simulation is started. These prior distributions are then updated in the MCMC simulation on the basis of model simulations and calibration data. The resulting updated parameter probability distributions, expressing model parameter uncertainties, are called ‘‘posterior distributions’’ (see Figure 6.4a in Section 6.4.2).
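The prior-to-posterior updating can be illustrated with the simplest analytically tractable case: a normal prior on a parameter combined with normally distributed observations. This is a textbook conjugate update, shown here only to make the idea concrete; it is not the MCMC machinery used in the studies cited above:

```python
def normal_posterior(prior_mean, prior_var, observations, obs_var):
    """Conjugate Bayesian update: normal prior on a parameter, normal observation
    error with known variance. Returns the posterior mean and variance."""
    n = len(observations)
    obs_mean = sum(observations) / n
    # Precisions (1/variance) add; the posterior mean is a precision-weighted average
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + n * obs_mean / obs_var)
    return post_mean, post_var

# A vague prior (mean 0, variance 100) updated with three observations near 2.0:
# the posterior concentrates close to the data, with much-reduced variance
posterior = normal_posterior(0.0, 100.0, [2.1, 1.9, 2.0], 1.0)
```

For realistic models the likelihood is not conjugate to the prior, which is precisely why sampling methods such as MCMC are needed to construct the posterior numerically.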
The basic steps conducted in Monte Carlo and MCMC simulations are presented below to further highlight their similarities and differences. The following sequence of steps is repeated N times:

• Step 1: Select new parameter values (either randomly or from a predefined distribution).
• Step 2: Run the model with the new parameter values.
• Step 3 in Monte Carlo simulation: Save model results for later statistical analysis of uncertainty.
• Step 3 in MCMC simulation: Accept or reject the new parameter values, depending on a calculated likelihood ratio taking into account model results, observations, and prior parameter probability from the previous and present iteration rounds. In case of rejection, go back to the parameter values from the previous iteration round. Save the parameter values to the ‘‘chain’’ for later construction of posterior distributions.
• Step 4: Go back to Step 1.
In summary, MCMC simulation techniques facilitate an effective synthesis and utilisation of:

• available observations;
• expert knowledge (e.g., in the form of prior probability distributions); and
• scientific theory (in the form of a model).
More details on the MCMC simulation techniques can be found in, for example, Gamerman (1999), Gelman et al. (2004), Larssen et al. (2006) and Saloranta et al. (2008). Section 6.4 presents an example of the use of MCMC simulation to calibrate and analyse parameter-related (i.e., technical) uncertainties in the application of a multimedia model.
6.4 THE GRENLAND FJORDS DIOXIN CASE: SIMULATING CONTAMINATED SEDIMENT REMEDIATION ALTERNATIVES
The following sections illustrate the application of some of the models and model analysis methods described in the previous sections, and show how different types of knowledge and data can be brought together to provide the best possible science-based estimates of the future state of a contaminated fjord following the implementation of different sediment remediation scenarios. The examples of model application focus on the Grenland fjords, a severely dioxin-polluted Norwegian fjord system.
6.4.1 Conceptual site model for the Grenland fjords
Frierfjorden is the innermost branch of the Grenland fjord system in southern Norway (Figure 6.2 – see colour insert). It has a surface area of ~20 km² and mean and maximum depths of 40 m and ~100 m, respectively. The sill depth at the mouth of the fjord is ~25 m. A magnesium production plant began operating in 1951 on the island of Herøya in the innermost part of Frierfjorden, and large amounts of dioxins and other chlorinated organic pollutants were formed as by-products during the chlorination of magnesium oxide to yield anhydrous magnesium chloride. The direct emissions to seawater reached 10 kg per year, calculated as 2,3,7,8-TCDD (tetrachlorodibenzo-p-dioxin) toxicity equivalents, in the mid-1970s. A growing awareness of the dioxin contamination, along with findings of dioxins in environmental samples, resulted in the Norwegian authorities issuing advice against the consumption of fish and shellfish caught in the area. Cleaning devices were therefore installed at the plant in the mid-1970s and at the end of the 1980s, reducing direct emissions to less than 10 g annually. In 2002, the point source of dioxins to the fjord area stopped altogether with the closure of the magnesium plant. Given this substantial abatement of direct emissions, the Norwegian environmental management team stated in 1990 that it should be possible by the year 2000 to suspend the advice to fish and shellfish consumers. This was, and remains, the primary environmental goal for the area. Today, in 2009, this dietary advice is still in place. Evidently, the positive expectations for the fjord in 1990 reflected the lack of a holistic understanding of the dioxin flow in the system. Hence, in 2000, a group consisting of representatives from the Norwegian environmental management team, the problem owner and scientific institutions initiated the formulation of a
conceptual site model for the dioxin flow in this fjord system. Four hypotheses were formulated:

• Seafood bioaccumulates dioxins deposited in the fjord sediments through the food chain. Crab, fjord sediment bed-dwelling fish (eel, flounder and cod), and fish that periodically reside on the fjord sediment bed (sea trout) accumulate dioxins through feeding on dioxins bioaccumulated in organisms that live in or near the sediment. Current dioxin levels in the sediment are sufficient to account for the high dioxin levels measured in crabs, various species of flounder, cod and possibly sea trout, which are the result of feeding on molluscs or sediment associated with high concentrations of dioxins.
• Dioxin deposits on the fjord sediment bed enter the water phase through physical resuspension into the water masses. Mussels, pelagic fish and benthic-feeding fish that periodically live in the free water masses (sea trout) accumulate dioxins through feeding on dioxin-containing planktonic organisms.
• The amount of dioxins currently known to be discharged from the industrial installation to the brackish water layer in the Frierfjorden is sufficient to account for the high levels of dioxins in mussels, pelagic fish (herring and sprat) and fish that periodically live in the freely moving water (sea trout) in the Grenland fjords.
• There is a considerably higher discharge of dioxins into the Grenland fjords than previously known. Dioxins enter the brackish water in the Frierfjorden and are transported with the brackish water flow to the fjord areas outside the Frierfjorden and to coastal waters. Suspended particles with high dioxin concentrations settle into the sediment from the brackish water and contribute to the high concentration of dioxins in the sediment.

The CSM drafting group concluded that the high dioxin content of seafood from the Grenland fjords could best be explained by the following two main hypotheses:

• Dioxin deposits in the fjord bed are accumulated by fjord sediment bed-dwelling organisms, and spread to seafood through the food chain.
• Considerable amounts of dioxins are discharged from sources not yet identified.
These initial written ‘‘conceptualisations’’ were then transformed into a graphical CSM: see Figure 6.3. The CSM drafting group recommended that further, more focused studies be planned and performed. They also stated that the insight gained from these studies should be collated in a model describing dioxin emissions from different sources, the routes by which dioxins are transported, and the dioxin content in seafood. Such a model should then be used as a basis for exploring possible remediation strategies.
6.4.2 Multimedia model for the Grenland fjords
The CSM was followed by the development of multimedia models (Persson et al., 2006; Saloranta et al., 2006a; 2008) to allow more quantitative and detailed predictions of and insight into the transport, fate and bioaccumulation of the dioxins in the
Figure 6.3: A simplified model for emissions and pathways of dioxins in the Grenland fjords. [The figure links sources to sea organisms via the fjord water and sediments, with each pathway marked by its likely importance (large, moderate or small):

• Known sources — existing discharges to water and to air.
• Unknown sources, on land or sea — contaminated soil; landfills at the primary industrial site; landfills at other industrial sites; runoff from contaminated catchment areas; transport from contaminated sediments in the nearby Gunneklev fjord.
• Fjord sediments — organisms accumulate dioxins through ingestion of prey organisms that live in or near the sediments; dioxins are transported to the water masses through resuspension due to wave and current erosion and prop wash.
• Sea organisms — organisms in the upper water layers only (blue mussels; pelagic fish, herring and sea trout); organisms in the upper water layers and at the sediment surface (sea trout); organisms at the sediment surface (crab, cod and flat fish).]
Grenland fjords. The most recent work of multimedia model application in the Grenland fjords (Saloranta et al., 2008) can be divided into the following steps or tasks, which are described in more detail below:

• multimedia model (SF-tool) application set-up;
• model sensitivity analysis, calibration and uncertainty analysis;
• model evaluation; and
• simulation of remediation scenarios.
Multimedia model application set-up

The SF-tool modelling package (Saloranta et al., 2006a, 2006b, 2008), applied recently to the Grenland fjords, consists of: (i) a water–sediment fugacity model code for the simulation of sources, sinks and transport of POPs in a fjord, estuary or lake system; and (ii) a bioaccumulation rate constant model code for simulation of the uptake and bioaccumulation of POPs in an aquatic food web. These models are
formulated within the multimedia model framework described in Section 6.2.2, and have been applied to different POPs in many Norwegian fjords and lakes. In addition, the SF-tool model package is equipped for advanced model sensitivity and uncertainty analysis. Prior to setting up the actual model, the purpose of the modelling, the management objectives and any planned remedial operations were clarified, as these can strongly influence the decisions made in the model set-up. The next task in development of the SF-tool was to decide how to divide the fjord system into the different model compartments. In Frierfjorden, for example, the strongly stratified surface layer of less saline water, caused by river inflow, together with the sill depth at the mouth of the fjord, dictated a vertical division into three layers (0–5 m, 5–25 m, 25–100 m). The typical water residence time for Frierfjorden is a few days for the surface compartment, 10–20 days for the intermediate compartments, and 1–3 years for the deep compartments. The horizontal division of Frierfjorden was made to distinguish between the harbour area close to the industrial park and the rest of the fjord basin (Figure 6.2). Furthermore, each water compartment was associated with a corresponding sediment compartment. The interfaces of the water compartments with the sediment dictated the extent of the sediment compartment areas, and the thicknesses of these compartments (i.e. the well-mixed sediment layer) were based on sediment-profiling camera images. All in all, the modelled water column and sediment areas in the Grenland fjords were split into 26 compartments in this model application. As pointed out in Section 6.2.1, these compartments make up the basic units (or resolution) in the model, and the properties of these compartments, such as their size, concentration of various substances and interaction with other compartments, make up a significant proportion of the model parameters.
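As a toy illustration of the compartment idea (not the SF-tool's fugacity formulation), one water and one sediment compartment coupled by first-order exchange rates can be stepped forward in time as follows; all rate constants and units are invented for the example:

```python
def simulate(years=50.0, dt=0.01, emission=10.0,
             k_settle=0.5, k_resusp=0.05, k_burial=0.1, k_out=1.0):
    """Forward-Euler integration of a first-order mass balance for one water and
    one sediment compartment (arbitrary units; rate constants per year)."""
    m_water, m_sed = 0.0, 0.0
    for _ in range(int(years / dt)):
        # Water gains from emission and resuspension; loses to settling and outflow
        d_water = emission - (k_settle + k_out) * m_water + k_resusp * m_sed
        # Sediment gains from settling; loses to resuspension and burial
        d_sed = k_settle * m_water - (k_resusp + k_burial) * m_sed
        m_water += d_water * dt
        m_sed += d_sed * dt
    return m_water, m_sed
```

With these illustrative rates the system approaches a steady state of about 7.5 units in the water and 25 in the sediment; in a real application each of the 26 compartments would carry such a balance, coupled to its neighbours.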
Instead of applying a more detailed biotic food web model, a simple bioaccumulation factor KBAF was used to relate the dissolved dioxin concentration in water, Cdiss_W, and in sediment pore water, Cdiss_PW, to concentrations in the biota, Cb. The conversion from abiotic to biotic concentrations was thus undertaken as follows:

Cb = KBAF [φ Cdiss_PW + (1 − φ) Cdiss_W]    (6.6)

where φ denotes the proportion of exposure from sediment pore water and (1 − φ) that from water in the habitats of cod and crab. The solid–water partitioning coefficients and food web structure were assumed to remain unchanged in time. The simulated time series of Cb were in addition ‘‘delayed’’ in the model according to the response time, Tresp, for both cod and crab. KBAF and the other parameters in this equation were estimated in the MCMC simulation from available data on dioxin concentrations in water (both particle-bound and dissolved), sediment (particle-bound) and biota (crab hepatopancreas and cod liver) in Frierfjorden.

Model sensitivity analysis, calibration and uncertainty analysis

Following completion of the model set-up, a sensitivity analysis was executed using the Extended FAST technique (Saltelli et al., 1999, 2000). This analysis was able to identify the model parameters that most significantly influenced key output variables of
interest (see Saloranta et al., 2008). Results from the sensitivity analysis were helpful in selecting a set of 19 key model parameters for which probability distributions were estimated in the MCMC simulation using available data from the fjord (sediment, water, cod and crab). In addition, estimates for the stochastic measurement error and spatio-temporal variability of observations were calculated in the MCMC simulation (σa² and σb² for the abiotic and biotic data variance, respectively). Prior probability distributions of the key parameters were estimated on the basis of expert judgement and some previous observations and model results. Prior distributions were set to be statistically independent marginal distributions spanning a relatively wide range. Flat non-informative distributions were assumed for σa² and σb². The remaining model parameters, not estimated in the MCMC simulation, were fixed to their nominal values. The last 80 000 out of a total of 150 000 iterations were taken to represent the final, seemingly well-converged part of the parameter chain. The final model simulations and uncertainty analysis presented in the next two sections were generated by randomly drawing 2000 samples of parameter sets from this final chain and conducting simulations with these samples.

Model evaluation

The performance of the calibrated model following MCMC simulation was evaluated by comparing model simulations with the data used in calibration, and by assessing how reasonable the resulting estimates of posterior probability distributions were. Figure 6.4a shows the posterior distribution for a calibration parameter whose distribution was well updated and refined in the MCMC simulation on the basis of the calibration data.
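The sampling of parameter sets from the converged part of the chain, and their propagation through the model, can be sketched generically as follows; `model` is any one-argument callable, and the demonstration values in the usage are invented:

```python
import random
import statistics

def posterior_predictive(chain, model, n_draws=2000, seed=7):
    """Draw parameter sets at random from the converged tail of an MCMC chain
    and propagate each through the model ('model' is any one-argument callable)."""
    rng = random.Random(seed)
    tail = chain[len(chain) // 2:]  # discard the first half of the chain as burn-in
    outputs = sorted(model(rng.choice(tail)) for _ in range(n_draws))
    # Median and 95% uncertainty band of the resulting output distribution
    return (statistics.median(outputs),
            outputs[int(0.025 * n_draws)],
            outputs[int(0.975 * n_draws)])
```

Because the draws come from the posterior, the spread of the resulting outputs reflects the calibrated parameter uncertainty rather than the (usually wider) prior ranges.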
For certain parameters, however, posterior distributions were not significantly different from their prior probability distributions, indicating that the observations used in the MCMC simulation do not convey the information required to update the prior value estimates of these parameters. Figure 6.4b shows simulation results from the calibrated model compared with corresponding observations over the period 1950–2050 (with no simulation of remediation measures). The model results fit well with yearly observations of dioxin concentration in cod liver, Ccod liver, and the simulated 95% confidence intervals appear to adequately cover the spatio-temporal variability of these observations. The model fit with sediment, water and crab data (not shown) was equally satisfactory. In the MCMC simulation, the spatial distribution of dioxin concentration in each medium (water, sediment, cod and crab) is assumed to follow a log-normal distribution, that is, a normal distribution when values are log-transformed. The model is assumed to simulate the mean of this normal distribution (at a given moment), and its variance is described via σa² and σb², which for simplicity are assumed to be time-invariant. Consequently, based on this distribution, it is possible to estimate the median, mean and different percentiles of the dioxin concentration in the cod (liver) population. This is illustrated in Figures 6.4b, 6.5a and 6.5b (the latter can be found in the colour insert), where population medians and means, as well as means of 20 pooled randomly selected cod (liver) individuals, are shown together with the corresponding confidence intervals for these estimates.
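Under the log-normal assumption, population statistics follow directly from the simulated log-scale mean and the variance σb². A sketch with illustrative numbers (natural-log scale assumed; the function and parameter names are not from the SF-tool):

```python
import math
import random
import statistics

def population_stats(log_mean, var_b):
    """Median, mean, and a central 95% range for a log-normal population whose
    natural-log values are N(log_mean, var_b)."""
    sd = math.sqrt(var_b)
    median = math.exp(log_mean)
    mean = math.exp(log_mean + var_b / 2.0)  # log-normal mean exceeds the median
    return median, mean, (math.exp(log_mean - 1.96 * sd), math.exp(log_mean + 1.96 * sd))

def pooled_mean(log_mean, var_b, n_fish=20, rng=None):
    """Mean concentration in a pooled sample of n_fish randomly drawn individuals."""
    rng = rng or random.Random(0)
    sd = math.sqrt(var_b)
    return statistics.fmean(math.exp(rng.gauss(log_mean, sd)) for _ in range(n_fish))
```

Repeating `pooled_mean` across the 2000 uncertainty-analysis simulations yields the kind of pooled-sample estimates and confidence intervals displayed in Figures 6.4b and 6.5.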
Figure 6.4: Evaluation of the calibrated model after MCMC simulations. (a) Prior (solid line) and posterior (histogram) probability distributions for one of the 19 calibration coefficients (a3, used for scaling the nominal sediment burial rates). (b) Simulated median dioxin concentration (Ccod liver, ng TE/kg wet weight) in the cod (liver) population in Frierfjorden in 1950–2050. The solid line denotes the 50% percentile, and the dashed lines the 2.5 and 97.5% percentiles, of the 2000 simulations conducted in the uncertainty analysis. Corresponding observations and their estimated error variance (95% confidence intervals) are shown by the square markers.
Simulation of remediation scenarios

Following performance evaluation of the calibrated model, the final major task in this modelling study was to simulate how remediation of the contaminated sediments by isolation capping (Section 6.1.2) in the Frierfjorden would affect dioxin concentrations in water and sediment pore water, and how this would affect those in cod and crab
Figure 6.5: Model simulation results for the three remediation scenarios assumed to take place in 2010. (a) Simulated mean dioxin concentration (Ccod liver, ng TE/kg wet weight) in the cod (liver) population in Frierfjorden in 2000–2050. Scenarios: (i) no remedy; (ii) partial capping (<25 m); (iii) full capping. The solid line denotes the 50% percentile and the dashed lines the 2.5 and 97.5% percentiles of the 2000 simulations conducted in the uncertainty analysis. The Norwegian pollution class (I–V) limit values are indicated by the horizontal dashed lines. (Figure 6.5(b) can be found in the colour insert.)
from Frierfjorden. Three different simplified remediation scenarios were defined to exemplify the effect of capping:

• no cap;
• a partial isolation cap, covering the sediment areas shallower than 25 m (i.e. the surface and intermediate sediment compartments); and
• a full isolation cap, with full coverage of the Frierfjorden.
The purpose of capping, as defined in the model simulations, was to obtain a clean top layer of sediment serving as habitat for benthic biota. The capping operation was assumed to take place in 2010. In the model, the fugacity of the capped sediment compartments was set to zero at the time of capping, thus assuming for simplicity a technically ideal cap application. Results from simulations of these remediation scenarios are shown in Figure 6.5. In the case of the partial capping scenario (Figure 6.5a), only the lower confidence bound (2.5% percentile) of the 2000 simulations conducted in the uncertainty analysis clearly demonstrates a positive effect of remediation, that is, lowered Ccod liver after 2010 compared with the no remediation scenario. However, no significant effect can be seen in the 50% percentile (median) of the model estimates. As cod appears to be exposed to contamination originating from large areas of Frierfjorden bottom sediments, a significant impact of remediation measures on Ccod liver only becomes apparent in the full capping scenario (Figure 6.5a). In order to illustrate model uncertainties from another perspective, Figure 6.5b shows the probability of cod in Frierfjorden belonging to each of the five Norwegian pollution status classes (I–V) on the basis of its Ccod liver in the period 2010–2050. Instead of using the mean of the whole cod population, as in Figure 6.5a, the mean Ccod liver of a pooled sample of 20 cod individuals was extracted from the simulation results. This aims to reflect the type of monitoring commonly undertaken to assess the status of the fjord. This extraction is done by using the estimated distribution of σb² and simulating Ccod liver in randomly selected individuals of the cod population, from which means for 20 pooled individuals can easily be calculated.
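This pooled-sample classification can be sketched as follows. The class limit values below are placeholders, as the real Norwegian class I–V limits are not given in the text, and the log-normal sampling mirrors the population assumption described above:

```python
import math
import random
import statistics

# Placeholder upper limits of classes I-IV (hypothetical values, ng TE/kg wet weight);
# concentrations above the last limit fall in class V
CLASS_LIMITS = [15.0, 40.0, 100.0, 300.0]

def classify(conc):
    """Return the pollution class (1-5) for a given concentration."""
    for cls, limit in enumerate(CLASS_LIMITS, start=1):
        if conc <= limit:
            return cls
    return 5

def class_probabilities(sim_log_means, var_b, n_fish=20, seed=3):
    """For each uncertainty-analysis simulation, draw a pooled sample of n_fish
    individuals from the log-normal population, classify its mean concentration,
    and tabulate the class frequencies over all simulations."""
    rng = random.Random(seed)
    sd = math.sqrt(var_b)
    counts = [0] * 5
    for mu in sim_log_means:
        pooled = statistics.fmean(math.exp(rng.gauss(mu, sd)) for _ in range(n_fish))
        counts[classify(pooled) - 1] += 1
    return [c / len(sim_log_means) for c in counts]
```

Applied to the 2000 simulated log-scale means for a given year, the returned frequencies correspond to the class probabilities plotted in Figure 6.5b.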
Figure 6.5b shows that, 10 years after application of a full cap (in 2020), there is a 73% probability (i.e., 1460 out of the 2000 simulations shown) that Ccod liver in a sample of 20 cod from the Frierfjorden would correspond to class I (i.e., no significant pollution by dioxins). However, if no remediation measures were implemented, the probability of class I would be only slightly different from zero. In the above example, uncertainties related to key model parameter values were quantified. Figures 6.4 and 6.5 show how this uncertainty is propagated to the simulation results for dioxin concentrations in cod liver. However, it is important to bear in mind that no uncertainty analysis is able to cover all uncertainties: consequently, other types of uncertainty, such as those related to model assumptions or limits of current scientific knowledge (methodological and epistemological uncertainties; see Section 6.2.1), should be considered and openly discussed when using the output of simulations in connection with making decisions related to the management of the fjord. Such uncertainties include the coarse spatial resolution of the model, which may overlook possible unknown contaminant hotspot areas in the fjord, or the assumption
of time-invariant solid–water partitioning parameters related to the bioavailability of the dioxins. The model may require adjustments in order to account for these uncertainties when further studies are conducted in the future.
6.5 SUMMARY
In this chapter, we have discussed some of the challenges of and methods for measuring and modelling POPs in complex aquatic systems. As POPs tend to bind very strongly to sediment particles and organic matter when released into aquatic systems, the highest POP concentrations are usually found in bottom sediments. Consequently, contaminated sediments pose a significant environmental problem in many countries. To reduce the risk that POPs in the sediments of lakes, rivers, fjords and other water bodies pose to ecosystems and to human health (through dietary exposure), different remediation techniques have been developed and applied, as discussed in Section 6.1.2. Such measures are often costly, and the success of remediation is by no means guaranteed; it is associated with many uncertainties, spanning the chemical, ecological, social and political responses to the remediation measures as well as the practical engineering challenges of the remediation operation. Models have proved to be very useful tools for handling, organising and synthesising the different processes affecting the transport, fate and bioaccumulation of POPs in aquatic systems. However, a state-of-the-art modelling tool alone is not enough to guarantee successful modelling results. The availability of good-quality data and the skills of the modeller play at least as important a role as the model code itself. Models are often applied simply as tools to produce the predictions needed, without acknowledging that model behaviour and performance can be significantly enhanced by the many model analysis and optimisation techniques available (e.g., sensitivity and uncertainty analysis and automatic calibration). One very useful model analysis technique presented in this chapter is Bayesian MCMC simulation, which combines parameter calibration and uncertainty analysis.
This technique is able to effectively synthesise the current scientific theory (in the form of the modelled processes) with available measurements and expert knowledge, as exemplified in Section 6.4.2. Moreover, numerical (technical) uncertainties in this knowledge are taken into account, estimated, and carried all the way to the final model simulation results, as exemplified by the simulations of dioxin levels in cod in Section 6.4.2. Improvements in physico-chemical process descriptions and in measurement techniques are essential to the further refinement of multimedia models. Moreover, better utilisation of model uncertainty and sensitivity analysis and calibration techniques, and linking of multimedia models with models from other relevant fields of science (e.g., carbon mass balance or capping efficiency models), are key elements in producing more accurate, robust and useful scientific knowledge on the fate of POPs in aquatic systems.
ACKNOWLEDGEMENTS

We warmly thank the Norwegian Institute for Water Research (NIVA) for supporting the production of this chapter, as well as the Research Council of Norway, Norsk Hydro, and the County Governor of Telemark for funding the Grenland fjord modelling studies.
REFERENCES

Allan, I.J., Semple, K.T., Hare, R. and Reid, B.J. (2006). Prediction of mono- and polycyclic aromatic hydrocarbon degradation in spiked soils using cyclodextrin extraction. Environmental Pollution. 144: 562–571.
Apitz, S.E., Davis, J.W., Finkelstein, K., Hohreiter, D.W., Hoke, R., Jensen, R.H., Jersak, J., Kirtay, V.J., Mack, E.E., Magar, V.S., Moore, D., Reible, D. and Stahl, R.G. (2005). Assessing and managing contaminated sediments. Part I: Developing an effective investigation and risk evaluation strategy. Integrated Environmental Assessment and Management. 1(1): 2–8.
Arhonditsis, G.B., Qian, S.S., Stow, C.A., Lamon, E.C. and Reckhow, K.H. (2007). Eutrophication risk assessment using Bayesian calibration of process-based models: application to a mesotrophic lake. Ecological Modelling. 208: 215–229.
Armitage, J.M. and Gobas, F.A.P.C. (2007). A terrestrial food-chain bioaccumulation model for POPs. Environmental Science and Technology. 41: 4019–4025.
Armitage, J.M., Cousins, I.T., Persson, N.J., Gustafsson, Ö., Cornelissen, G., Saloranta, T., Broman, D. and Næs, K. (2008). Black carbon-inclusive modeling approaches for estimating the aquatic fate of dibenzo-p-dioxins and dibenzofurans. Environmental Science and Technology. 42: 3697–3703.
Beven, K. (2006). A manifesto for the equifinality thesis. Journal of Hydrology. 320: 18–36.
Beyer, A., Wania, F., Gouin, T., Mackay, D. and Matthies, M. (2002). Selecting internally consistent physicochemical properties of organic compounds. Environmental Toxicology and Chemistry. 21: 941–953.
Booij, K., Sleiderink, H.M. and Smedes, F. (1998). Calibrating the uptake kinetics of semi-permeable membrane devices using exposure standards. Environmental Toxicology and Chemistry. 17: 1236–1245.
Bortone, G. and Palumbo, R. (2007). Sustainable Management of Sediment Resources. Vol. 2. Sediment and Dredged Material Treatment. Elsevier, London.
Brenner, R.C., Magar, V.S., Ickes, J.A., Abbott, J.E., Stout, S.A., Crecelius, E.A. and Bingler, L.S. (2002). Characterization and fate of PAH-contaminated sediments at the Wyckoff/Eagle Harbor superfund site. Environmental Science and Technology. 36: 2605–2613.
Brevik, K., Bjerkeng, B., Wania, F., Helland, A. and Magnusson, J. (2004). Modeling the fate of polychlorinated biphenyls in the inner Oslofjord, Norway. Environmental Toxicology and Chemistry. 23: 2386–2395.
Burkhard, L.P. (1998). Comparison of two models for predicting bioaccumulation of hydrophobic organic chemicals in Great Lakes food web. Environmental Toxicology and Chemistry. 17: 383–393.
Burkhard, L.P. (2000). Estimating dissolved organic carbon partition coefficients for nonionic organic chemicals. Environmental Science and Technology. 34: 4663–4668.
Cornelissen, G., Van Noort, P.C.M. and Govers, H.A.J. (1998). Mechanism of slow desorption of organic compounds from sediments: a study using model sorbents. Environmental Science and Technology. 32: 3124–3131.
Cornelissen, G., Gustafsson, Ö., Bucheli, T.D., Jonker, M.T.O., Koelmans, A.A. and van Noort, P.C.M. (2005). Extensive sorption of organic compounds to black carbon, coal, and kerogen in
sediments and soils: mechanisms and consequences for distribution, bioaccumulation, and biodegradation. Environmental Science and Technology. 39: 6881–6895.
Cowan, C.E., Mackay, D., Feijtel, T.C.J., van de Meent, D., di Guardo, A., Davies, J. and Mackay, N. (1994). The Multi-Media Fate Model: A Vital Tool for Predicting the Fate of Chemicals. SETAC Press, Pensacola, FL.
Cullen, A.C. and Frey, H.C. (1999). Probabilistic Techniques in Exposure Assessment. Plenum Press, New York.
Dale, V.H. (Ed.) (2003). Ecological Modeling for Resource Management. Springer Verlag, New York.
Durjava, M.K., ter Laak, T.L., Hermens, J.L.M. and Struijs, J. (2007). Distribution of PAHs and PCBs to dissolved organic matter: high distribution coefficients with consequences for environmental fate modelling. Chemosphere. 67: 990–997.
Funtowicz, S.O. and Ravetz, J. (1990). Uncertainty and Quality in Science for Policy. Kluwer, Dordrecht.
Gamerman, D. (1999). Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference. Chapman & Hall, London.
Gelman, A., Carlin, B.C., Stern, H.S. and Rubin, D.B. (2004). Bayesian Data Analysis, 2nd edn. Chapman & Hall, London.
Hawthorne, S.B., Grabanski, C.B., Miller, D.G. and Kreitinger, J.P. (2005). Solid-phase microextraction measurement of parent and alkyl polycyclic aromatic hydrocarbons in milliliter sediment pore water samples and determination of K-DOC values. Environmental Science and Technology. 39: 2795–2803.
Huckins, J., Petty, J.D. and Booij, K. (2006). Monitors of Organic Chemicals in the Environment. Springer, New York.
Jørgensen, T., Jensen, L.K. and Halvorsen, P.Ø. (2008). The Oslo harbour remediation project. Proceedings of the 5th Sednet Conference. Oslo, Norway. Available at http://www.sednet.org/download/conference2008/4%20-%20abstract%20Jorgensen.pdf
Karickhoff, S.W. (1981). Semiempirical estimation of sorption of hydrophobic pollutants on natural sediments and soils. Chemosphere. 10: 833–849.
Karickhoff, S.W., Brown, O.S.
and Scott, T.H. (1979). Sorption of hydrophobic pollutants on natural sediments. Water Research. 13: 241–248.
Kuczera, G. and Parent, E. (1998). Monte Carlo assessment of parameter uncertainty in conceptual catchment models: the Metropolis algorithm. Journal of Hydrology. 211: 69–85.
Larssen, T., Huseby, R.B., Cosby, B.J., Høst, G., Høgåsen, T. and Aldrin, M. (2006). Forecasting acidification effects using a Bayesian calibration and uncertainty propagation approach. Environmental Science and Technology. 40: 7841–7847.
Lepom, P., Brown, B., Hanke, G., Loos, R., Quevauviller, P. and Wollgast, J. (2009). Needs for reliable analytical methods for monitoring chemical pollutants in surface water under the European Water Framework Directive. Journal of Chromatography A. 1216: 302–315.
Lin, H.-I., Berzins, D.W., Myers, L., George, W.J., Abdelghani, A. and Watanabe, K.H. (2004). A Bayesian approach to parameter estimation for a crayfish (Procambarus spp.) bioaccumulation model. Environmental Toxicology and Chemistry. 23: 2259–2266.
Lowry, G.V. and Johnson, K.M. (2004). Congener-specific dechlorination of dissolved PCBs by microscale and nanoscale zerovalent iron in a water/methanol solution. Environmental Science and Technology. 38: 5208–5216.
Mackay, D. (2001). Multimedia Environmental Models: The Fugacity Approach. CRC Press, Boca Raton, FL.
Magar, V.S., Ickes, J.E., Abbott, J.E., Brenner, R.C., Durrell, G.S., Peven-McCarthy, C., Johnson, G.W., Crecelius, E.A. and Bingler, L.S. (2003). Natural recovery of PCB-contaminated sediments at the Sangamo-Weston/Lake Hartwell Superfund site. Proceedings of the Second International Conference on Remediation of Contaminated Sediments, Venice, Italy. Battelle Memorial Institute, Columbus, OH.
Malve, O., Laine, M., Haario, H., Kirkkala, T. and Sarvala, J. (2007). Bayesian modelling of algal occurrences: using adaptive MCMC methods with a lake water quality model. Environmental Modelling & Software. 22: 966–977.
MODELLING THE FATE OF PERSISTENT ORGANIC POLLUTANTS IN AQUATIC SYSTEMS
Meyer, T. and Wania, F. (2007). What environmental fate processes have the strongest influence on a completely persistent organic chemical’s accumulation in the Arctic? Atmospheric Environment. 41: 2757–2767. Mohan, R.K., Brown, M.P. and Barnes, C.R. (2000). Design criteria and theoretical basis for capping contaminated marine sediments. Applied Ocean Research. 22: 85–93. Næs, K., Axelman, J., Näf, C. and Broman, D. (1998). Role of soot carbon and other carbon matrices in the distribution of PAHs among particles, DOC, and the dissolved phase in the effluent and recipient waters of an aluminium reduction plant. Environmental Science and Technology. 32: 1786–1792. Persson, N.J., Gustafsson, Ö., Bucheli, T.D., Ishaq, R., Næs, K. and Broman, D. (2002). Soot-carbon influenced distribution of PCDD/Fs in the marine environment of Grenlandsfjords, Norway. Environmental Science and Technology. 36: 4968–4974. Persson, N.J., Cousins, I.T., Molvær, J., Broman, D. and Næs, K. (2006). Modeling the long-term fate of polychlorinated dibenzo-p-dioxins and furans (PCDD/Fs) in the Grenland fjords, Norway. Science of the Total Environment. 369: 188–202. Reichenberg, F. and Mayer, P. (2006). Two complementary sides of bioavailability: accessibility and chemical activity of organic contaminants in sediments and soils. Environmental Toxicology and Chemistry. 25: 23–29. Reid, B.J., Jones, K.C. and Semple, K.T. (2000). Bioavailability of persistent organic pollutants in soils and sediments: a perspective on mechanisms, consequences and assessment. Environmental Pollution. 108: 103–112. Ruus, A., Green, N.W., Maage, A. and Skei, J. (2006). PCB-containing paint and plaster caused extreme PCB-concentrations in biota from the Sørfjord (Western Norway): a case study. Marine Pollution Bulletin. 52: 100–103. Saloranta, T.M., Kämäri, J., Rekolainen, S. and Malve, O. (2003). Benchmark criteria: a tool for selecting appropriate models in the field of water management. Environmental Management.
32: 322–333. Saloranta, T.M., Andersen, T. and Næs, K. (2006a). Flows of dioxins and furans in coastal food webs: inverse modelling, sensitivity analysis, and applications of linear system theory. Environmental Toxicology and Chemistry. 25: 253–264. Saloranta, T.M., Armitage, J., Næs, K., Cousins, I. and Barton, D.N. (2006b). SF-tool multimedia model package: model code description and application examples from the Grenland fjords. NIVA-Report 5216. Norwegian Institute for Water Research, Oslo. Saloranta, T.M., Armitage, J., Haario, H., Næs, K., Cousins, I.T. and Barton, D.N. (2008). Modelling the effects and uncertainties of contaminated sediment remediation scenarios in a Norwegian fjord by Markov chain Monte Carlo simulation. Environmental Science and Technology. 42: 200–206. Saltelli, A., Tarantola, S., Chan, K.P.-S. and Scott, E.M. (1999). A quantitative model-independent method for global sensitivity analysis of model output. Technometrics. 41: 39–56. Saltelli, A., Chan, K. and Scott, E.M. (2000). Sensitivity Analysis. Wiley, New York. Sormunen, A.J., Koistinen, J., Leppänen, M.T. and Kukkonen, J.V.K. (2008). Desorption of sediment-associated polychlorinated dibenzo-p-dioxins, dibenzofurans, diphenyl ethers and hydroxydiphenyl ethers from contaminated sediment. Chemosphere. 72: 1–7. Streets, S.S., Henderson, S.A., Stoner, A.D., Carlson, D.L., Simcik, M.F. and Swackhamer, D.L. (2006). Partitioning and bioaccumulation of PBDEs and PCBs in Lake Michigan. Environmental Science and Technology. 40: 7263–7269. Tomaszewski, J.E., Werner, D. and Luthy, R.G. (2007). Activated carbon amendment as a treatment for residual DDT in sediment from a superfund site in San Francisco Bay, Richmond, California, USA. Environmental Toxicology and Chemistry. 26: 2143–2150. van Noort, P.C.M., Cornelissen, G., ten Hulscher, T.E.M., Vrind, B.A., Rigterink, H. and Belfroid, A. (2003). Slow and very slow desorption of organic compounds from sediment: influence of sorbate planarity.
Water Research. 37: 2317–2322. Vrana, B., Mills, G.A., Allan, I.J., Dominiak, E., Svensson, K., Knutsson, J., Morrison, G. and
Greenwood, R. (2005). Passive sampling techniques for monitoring pollutants in water. TrAC: Trends in Analytical Chemistry. 24: 845–868. Wang, X.O., Thibodeaux, L.J., Valsaraj, K.T. and Reible, D.D. (1991). Efficiency of capping contaminated bed sediments in situ. 1. Laboratory-scale experiments on diffusion-adsorption in the capping layer. Environmental Science and Technology. 25: 1578–1584. Wania, F. and Mackay, D. (1999). The evolution of mass balance models of persistent organic pollutant fate in the environment. Environmental Pollution. 100: 223–240. Warren, N., Allan, I.J., Carter, J.E., House, W.A. and Parker, A. (2003). Pesticides and other microorganic contaminants in freshwater sedimentary environments: a review. Applied Geochemistry. 18: 159–194. Witt, G., Liehr, G.A., Borck, D. and Mayer, P. (2008). Matrix solid-phase microextraction for measuring freely dissolved concentrations and chemical activities of PAHs in sediment cores from the western Baltic Sea. Chemosphere. 74: 522–529. Yogui, G.T. and Sericano, J.L. (2009). Polybrominated diphenyl ether flame retardants in the US marine environment: a review. Environment International. 35: 655–666. Zarnadze, A. and Rodenburg, L.A. (2008). Water-column concentrations and partitioning of polybrominated diphenyl ethers in the New York/New Jersey harbor, USA. Environmental Toxicology and Chemistry. 27: 1636–1642.
CHAPTER 7
A Bayesian-Based Inverse Method (BBIM) for Parameter and Load Estimation in River Water Quality Modelling under Uncertainty

Yong Liu
7.1
INTRODUCTION
Over the past decades, deterioration of natural surface water bodies has become one of the major environmental concerns worldwide. Therefore, comprehensive water quality management is desired for prompt response to environmental stresses, effective management of water environments, and better protection of aquatic ecosystems. Water quality modelling (WQM) is considered to be a useful tool to support decision-making for water quality management worldwide (Vieira and Lijklema, 1989; Zou et al., 2007). Many water quality models have been developed to estimate pollutant loads and their resulting effects on water bodies (Chapra, 1997; Palmer et al., 2006). Most of the previous studies have been performed using forward models consisting of two main steps: estimating the pollutant loads from a nutrient model; and integrating water quality models and monitoring data to calculate the change in water quality variables. In recent decades, there have been numerous efforts to develop pollutant loading models, from simple export coefficient models and regression models to complex mechanistic models (Ghenu et al., 2008; Shrestha et al., 2008). However, load estimation remains a major challenge for modellers, especially in terms of data accuracy. At the same time, recent years have seen rapid development in accurate water quality monitoring technologies and improved data sharing. Thus the question here is: can we find a robust method for load estimation, using the plentiful monitoring data and some popular, verified models? Recent developments of the inverse method have provided a powerful tool to realise such an idea (Shen et al., 2006; Wan and Vallino, 2005; Zou et al., 2007). The inverse method has been widely applied in practical environmental

Modelling of Pollutants in Complex Environmental Systems, Volume II, edited by Grady Hanrahan. © 2010 ILM Publications, a trading division of International Labmate Limited.
cases, such as model parameter estimation (Shen and Kuo, 1998; Zou et al., 2007), non-point source estimation (Shen et al., 2006), and groundwater (Michalak et al., 2004; Snehalatha et al., 2006; Vermeulen et al., 2005) and wastewater (Bumgarner and McCray, 2007) systems. Shen et al. (2006) formulated the estimation of non-point sources of faecal coliform as an inverse parameter estimation problem using the observed data. The method was successfully applied to the Wye River in the USA to estimate the allowable load for the river to attain water quality standards. Zou et al. (2007) proposed an adaptive neural network embedded genetic algorithm approach for inverse WQM. The method was applied to a full-scale, numerical example, showing the application of the inverse method in solving complicated water quality model problems. Uncertainty is another important question that needs to be rigorously addressed in model development and application. Such occurrences include the dynamic interactions that exist between pollutant loadings and the receiving waters, and uncertainties in parameter estimation and model outputs (Qin et al., 2007). Additionally, the importance of investigating the uncertainty pertaining to model structures and parameters has been extensively highlighted in the modelling literature (Arhonditsis et al., 2007; Malve and Qian, 2006). A practical water quality management strategy must be based on the effective involvement of:
• the uncertainties in data, model structure, and parameters;
• the uncertainties in model output; and
• the risk of water quality violating the targeted standards and the adaptive management under uncertainty (Liu et al., 2007).
Previous studies have focused largely on the uncertainties in model development, but model output that fails to account for uncertainty can provide misleading information for decision-makers (Arhonditsis et al., 2007). Thus uncertainties in WQM must be reported in a straightforward and direct way in order for decision-makers to achieve better adaptive management. The probability level is a popular method for addressing such problems (Malve and Qian, 2006). Previously, water quality models have usually been validated by a trial-and-error calibration process, the drawbacks of which include inefficiency, subjectivity and unreliability (Zou et al., 2007). Recently, Bayesian models have been applied in environmental modelling efforts, owing to their ability to handle uncertainty and incorporate prior information. Such an approach utilises previous WQM experiences, and provides probabilities for practical decision-making (Huang and McBean, 2007; Malve and Qian, 2006). When using Bayesian models, the entire posterior distribution of parameter values is used for parameter estimation. In addition, Bayesian inference techniques have increasingly been applied in environmental and ecological modelling, because they provide a convenient way to combine existing information and past experience (prior) with current observations (likelihood) for projecting future ecosystem responses (posterior). Thus, Bayesian methods are more informative than conventional model calibration practices (i.e., mere adjustment of model parameters until the model misfit is minimised), and can be used to refine our knowledge of model input parameters, and to obtain predictions along with uncertainty bounds for
model outputs (Qian et al., 2003). The Bayesian approach has several advantages, including the expression of model outputs as probability distributions, rigorous assessment of the expected consequences of different management actions, optimisation of the sampling design of monitoring programmes, and alignment with the policy practice of adaptive management. All of these are particularly useful for stakeholders and policymakers when decisions for sustainable environmental management need to be taken (Arhonditsis et al., 2007; Reckhow, 1994). The inverse method can deal with the bottleneck of traditional methods of load estimation. Adding the Bayesian approach allows uncertainty and parameter estimation to be incorporated into the modelling process (Shen et al., 2006). Thus, integration of the two approaches should provide an effective way to answer the questions raised above concerning WQM. Such integration has been applied in the modelling of groundwater (Woodbury and Ulrych, 2000), contaminant source identification (Michalak and Kitanidis, 2003), and shellfish aquaculture ecosystems (Dowd and Meyer, 2003). However, it is seldom used in water quality management activities. Thus, the objective of this chapter is to develop a Bayesian-based inverse method (BBIM) for river WQM to support practical adaptive water quality management under uncertainty. The heavily polluted Hun–Taizi river system in north-eastern China was chosen as the case study. The model results will help local decision-makers to prevent pollutant discharge in an effective and efficient manner.
7.2
STUDY AREA
The Hun–Taizi river system, consisting of three rivers (Figure 7.1), is in central Liaoning Province, north-eastern China. The Hun River and the Taizi River meet at the Daliao River and then flow out to Bohai Bay, which is under threat of serious pollution and heavy eutrophication from upland nutrient loading, comprising mainly inorganic nitrogen and organic matter (SEPA, 2007). The Hun River is 415 km long, with a basin area of 11 500 km². The Taizi River is 413 km long, with a basin area of 13 900 km², and is located at 1228489 to 1248889E and 418029 to 418659N. The Daliao River is 94 km long, with a basin area of 1926 km². The Hun–Taizi river system is heavily polluted, suffering from both point and non-point sources. Most of the river segments cannot meet the targeted water quality goal set by the local environmental protection bureau (EPB). The Hun–Taizi river system is also among the three river systems that have been targeted by the Chinese central government for effective water quality management and improvement in the years up to 2020. Thus, in order to protect the regional environment and aquatic ecosystem, and control eutrophication in Bohai Bay, it is essential to control the pollutant loads in an effective way. First, a practical, relatively simple and robust water quality model should be developed. Specifically, in the Hun–Taizi river system, load estimation is important for each river segment, as the segments are under the administration of different counties. The model results and the embedded uncertainties and risks would then be provided to the decision-makers directly. The EPB of Liaoning Province is responsible for water quality monitoring and management. The data were collected from the 18 regular monitoring sites (Figure
Figure 7.1: The Hun–Taizi river system. The 16 river segments, each one between two adjacent sampling sites, are: S1 between H1 and H2; S2 between H2 and H3; S3 between H3 and H4; S4 between H4 and H5; S5 between H5 and H6; S6 between H6 and H7; S7 between T1 and T2; S8 between T2 and T3; S9 between T3 and T4; S10 between T4 and T5; S11 between T5 and T6; S12 between T6 and T7; S13 between T7 and T8; S14 between T8, H7 and DL1; S15 between DL1 and DL2; S16 between DL2 and DL3.
E S
H1 H2 H3 H4
Hun T1
H5 T3
T6 H6 River H7
River
T2
Taizi
T5 T4
T7
T8
Legend Sampling site
DL1
River River system boundary
Daliao River DL2 DL3 Bohai Bay
Tributary streams 0
10
20 30 km
7.1), covering 10 years from 1995 to 2004 and including water velocity and the 24 variables required by the Environmental Quality Standards for Surface Water (No. GB3838-2002), issued by the Ministry of Environmental Protection of China. The water samples were collected on a weekly basis. The temperature, pH and electrical conductivity (EC) of each water sample were determined at the sampling points by a digital pH-EC meter. All the other water quality parameters were sampled, preserved, delivered and analysed using the standard methods of APHA (1998) and the standards of GB3838-2002. All data were analysed; ammonia (NH4+) and biological oxygen demand (BOD) were preferred as management factors by the local government, as they are viewed as the primary drivers for the eutrophication of Bohai
Bay (SEPA, 2007). Thus, two separate water quality models for NH4+ and BOD were established in this study, based on BBIM.
7.3 BAYESIAN-BASED INVERSE METHOD

7.3.1 Distributed-source model
Water quality models have commonly been used to support environmental management in river basins, and numerous models have been developed. The distributed-source model (DSM) is used here to support effective water quality management in the Hun–Taizi river system, considering the main loading contribution from non-point sources. The basic assumption for the DSM is distributed sources, which means that the source enters a river in a diffuse and uniform manner (Chapra, 1997). The pollutant loading in each river segment is assumed to be uniform for each year. Sixteen river segments were chosen for the Hun–Taizi river system, each one between two adjacent sampling sites (Figure 7.1). The DSM can be written as (Chapra, 1997)

    v dC/dx = −kC + L    (7.1)

where v is the average water velocity in the river segment (m d⁻¹), C is the concentration (mg l⁻¹), x is the distance (m), k is the first-order decay rate (d⁻¹), and L is the diffuse source loading (g m⁻³ d⁻¹). Then the concentration C at any distance x can be expressed as

    C = f(k, v, x, C0, L)    (7.2)

where C0 is the inflowing pollutant concentration at x = 0. Specifically, for the ith river segment Si, the outflowing concentration Ci should be

    Ci = f(k, vi, xi, Ci−1, Li)    (7.3)

The model in Equations 7.2 and 7.3 can be viewed as a forward model, from Li to Ci. The reverse model for Equation 7.3 is

    k, Li = f⁻¹(vi, xi, Ci−1)    (7.4)

Thus the problem of load and decay rate estimation can be transformed into an inverse model, which aims to find a set of k and L jointly so that a defined objective function can be met. The inverse parameter estimation is formulated as a non-linear optimisation problem (Pan and Wu, 1998). The objective function is usually measured by the degree of misfit (DOM) between model results and observed data (Zou et al., 2007). Least squares or weighted average squared error functions are widely used to formulate the objective function, which can be expressed as (Shen et al., 2006)

    J(C′; θ*) = min{J(C′; θ)}    (7.5)

where C′ denotes the simulated concentration at given parameter θ, and θ* is the optimal parameter that minimises the cost function J(C′; θ). For the model in Equation 7.2, least squares is used to formulate the objective function as

    J(C′; θ) = Σ_{t=t1}^{tN} [f(x, U, C0, θ) − Ca]²    (7.6)

where N is the number of data points, the term f(x, U, C0, θ) represents the predicted concentration for a given set of parameters θ, and Ca is the monitored data (Palmer et al., 2006; Pan and Wu, 1998).
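To make the inversion concrete, the analytical solution of Equation 7.1 and a least-squares fit of (k, L) in the sense of Equations 7.5 and 7.6 can be sketched as follows. The segment geometry, velocity and "observations" here are hypothetical, not the Hun–Taizi data; the sketch only illustrates the forward/inverse pairing.

```python
import numpy as np
from scipy.optimize import least_squares

def dsm_concentration(x, k, L, v, c0):
    """Analytical solution of v dC/dx = -kC + L (Equation 7.1) for
    constant k, L and v, with inflow concentration c0 at x = 0."""
    decay = np.exp(-k * x / v)
    return c0 * decay + (L / k) * (1.0 - decay)

# Hypothetical profile along one segment: x in m, C in mg/l, v in m/d
x_obs = np.array([0.0, 5e3, 1e4, 2e4, 4e4])
v, c0 = 3000.0, 2.0
c_obs = dsm_concentration(x_obs, 0.2, 1.0, v, c0)  # "truth": k = 0.2, L = 1.0

def residuals(theta):
    # Misfit between model and data: the summand of Equation 7.6
    k, L = theta
    return dsm_concentration(x_obs, k, L, v, c0) - c_obs

fit = least_squares(residuals, x0=[0.05, 0.3], bounds=([1e-6, 0.0], [5.0, 50.0]))
k_hat, L_hat = fit.x  # jointly estimated decay rate and diffuse load
```

With noiseless synthetic data the optimiser recovers the generating k and L; with real monitoring data the residuals would not vanish, which is precisely the motivation for the Bayesian treatment of uncertainty in the next section.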
7.3.2
Bayesian model
Bayesian statistics have been used increasingly since the 1990s (Qian et al., 2003). In this case, all the unknown parameters are treated as random variables, and their distributions are derived from the known information (Borsuk et al., 2001). Thus, the approach provides a rigorous method for uncertainty analysis, and presents the key information for management decision-making (Reckhow, 1994). Bayesian inference is based on Bayes' theorem (Gill, 2002):

    p(θ|y) = p(θ)p(y|θ)/p(y) = p(θ)p(y|θ) / ∫ p(θ)p(y|θ) dθ ∝ p(θ)p(y|θ)    (7.7)

where p(θ|y) is the posterior probability of θ, which is the conditional distribution of the parameters given the observed data; θ is the parameter; p(θ) is the prior probability of θ; and p(y|θ) is the likelihood function, which incorporates both the statistical and the mechanistic relationships among the predictor and response variables. Usually, it is impossible to summarise posterior distributions analytically, which has limited the practical implementation of Bayesian methods. Over recent decades, the Monte Carlo method has been applied to obtain numerical summaries, in particular the Markov chain Monte Carlo (MCMC) algorithm (Qian et al., 2003). MCMC is a method used to draw samples from multidimensional distributions for numerical integration. The principle underlying the MCMC implementation in Bayesian inference is to construct a Markov process whose stationary distribution is the model posterior distribution, and then run the process for long enough to produce an accurate approximation of this distribution (Malve and Qian, 2006). We used WinBUGS (version 1.4.3; Lunn et al., 2000), called from R (version 2.6.0) using the package R2WinBUGS (version 2.1-8; Gelman and Hill, 2007). To incorporate the DSM into Equation 7.7, a normally distributed error term ε was added to the model, with zero mean and variance σ²:

    C = C0 exp(−(k/v)x) + (L/k)[1 − exp(−(k/v)x)] + ε    (7.8)
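The chapter's actual sampling was done in WinBUGS via R2WinBUGS; purely as an illustration of the mechanics, a minimal random-walk Metropolis sampler for the posterior of (k, L, σ) under Equation 7.8 with flat priors and synthetic data might look as follows. All values are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(k, L, x, v, c0):
    decay = np.exp(-k * x / v)          # Equation 7.8 without the error term
    return c0 * decay + (L / k) * (1.0 - decay)

def log_post(theta, x, y, v, c0):
    k, L, sigma = theta
    if k <= 0 or L < 0 or sigma <= 0:   # flat priors on the admissible range
        return -np.inf
    r = y - model(k, L, x, v, c0)
    return -x.size * np.log(sigma) - 0.5 * np.sum(r * r) / sigma**2

# Synthetic segment data generated with k = 0.2, L = 1.0
x = np.array([0.0, 5e3, 1e4, 2e4, 4e4])
v, c0 = 3000.0, 2.0
y = model(0.2, 1.0, x, v, c0) + rng.normal(0.0, 0.05, x.size)

theta = np.array([0.1, 0.5, 0.2])
lp = log_post(theta, x, y, v, c0)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.02, 0.1, 0.02])
    lp_prop = log_post(prop, x, y, v, c0)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance step
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
post = np.asarray(chain[5000:])              # discard burn-in
k_mean = post[:, 0].mean()
```

The stationary distribution of this chain is the posterior p(k, L, σ | y), which is exactly the quantity WinBUGS approximates for the full 16-segment model.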
Then the likelihood function for all 10 years (j = 1, …, 10, where 1995 is j = 1) and 16 river segments (i = 1, …, 16) of the Hun–Taizi river system in Figure 7.1 is

    ∏_{i=1}^{16} ∏_{j=1}^{10} (1/√(2πσ²)) exp{−[Ci,j − C0 exp(−(k/v)x) − (L/k)(1 − exp(−(k/v)x))]²/(2σ²)}    (7.9)

where Ci,j is the observed value. Estimation of the parameter k, the first-order decay rate in Equation 7.1, is a key step for WQM. There are two assumptions about k for the Hun–Taizi river system:
• ki, assuming k is spatially variable for each segment, but similar for each year, from a common prior distribution; and
• kj, assuming k is temporally variable for each period, but similar for each segment, from a common prior distribution.

Thus, there are two alternative models to estimate k, shown as Equations 7.10 and 7.11, in which subscripts i and j refer to river segment and time, respectively:

    Ci,j = Ci−1,j exp(−(ki/vi,j)xi) + (Li,j/ki)[1 − exp(−(ki/vi,j)xi)];  ∀i = 1, 2, …, 16    (7.10)

or

    Ci,j = Ci−1,j exp(−(kj/vi,j)xi) + (Li,j/kj)[1 − exp(−(kj/vi,j)xi)];  ∀j = 1, 2, …, 10    (7.11)
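Equation 7.10 chains the segments: the outflow concentration of segment i is the inflow of segment i + 1. A small sketch of this propagation (all parameter values hypothetical):

```python
import numpy as np

def chain_segments(c_in, k, L, v, x):
    """Propagate a boundary concentration through consecutive river
    segments using Equation 7.10 (segment-specific k_i, L_i, v_i, x_i)."""
    conc = []
    c = c_in
    for ki, Li, vi, xi in zip(k, L, v, x):
        decay = np.exp(-ki * xi / vi)
        c = c * decay + (Li / ki) * (1.0 - decay)  # outflow of this segment
        conc.append(c)
    return np.array(conc)

# Three hypothetical segments (units: 1/d, g/(m^3 d), m/d, m)
out = chain_segments(c_in=0.2,
                     k=[0.34, 0.20, 0.24],
                     L=[0.5, 1.2, 0.8],
                     v=[3000.0, 2500.0, 2800.0],
                     x=[2e4, 3e4, 2.5e4])
```

For a long segment (k·x/v large) the outflow approaches the equilibrium value L/k, which is why heavily loaded middle segments dominate the downstream concentrations.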
Two derived parameters were applied in this study to justify the above assumptions in Equations 7.10 and 7.11:
• the deviance information criterion (DIC), a Bayesian measure of how well the model fits the data; the larger it is, the worse the fit (Malve and Qian, 2006); and
• the coefficient of determination R² between the original data and the predicted values.

For a Bayesian model with data y, unknown parameters θ and likelihood function p(y|θ), the deviance is defined as (Gelman et al., 2004)

    D(θ) = −2 log[p(y|θ)] + c    (7.12)

where c is a constant. The expectation of D(θ) is

    D̄(θ) = E_θ[D(θ)]    (7.13)

The effective number of parameters of the model is

    pD = D̄(θ) − D(θ̄)    (7.14)

where θ̄ is the expectation of θ. DIC is calculated as

    DIC = pD + D̄(θ)    (7.15)

R² is defined as

    R² = 1 − SSE/SST = 1 − [Σ_{i=1}^{n} (yi − y′i)²] / [Σ_{i=1}^{n} (yi − ȳ)²]    (7.16)

where SSE and SST are the sum of squared errors and the total sum of squares, respectively; yi and y′i are the original data and predicted mean values, respectively; ȳ is the mean of the observations yi; and n is the number of observations. It should be noted, though, that the Bayesian approach generates a predictive distribution and not a single value for each variable (y′i), and thus the use of R² is essentially a non-Bayesian (‘‘point’’) assessment of the model performance.
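Given MCMC output, Equations 7.12–7.16 reduce to a few lines. The deviance draws and predictions in this sketch are made-up numbers, purely illustrative:

```python
import numpy as np

def dic(deviance_samples, deviance_at_posterior_mean):
    """DIC from Equations 7.13-7.15: d_bar is the posterior mean deviance
    (Equation 7.13), p_d the effective number of parameters (7.14)."""
    d_bar = np.mean(deviance_samples)
    p_d = d_bar - deviance_at_posterior_mean
    return p_d + d_bar                           # Equation 7.15

def r_squared(y_obs, y_pred):
    """Coefficient of determination, Equation 7.16."""
    y_obs, y_pred = np.asarray(y_obs), np.asarray(y_pred)
    sse = np.sum((y_obs - y_pred) ** 2)
    sst = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1.0 - sse / sst

# Toy numbers: D(theta) draws and the deviance at the posterior mean
print(dic([890.0, 885.0, 895.0], 880.0))  # pD = 10, DIC = 900
print(r_squared([1.0, 2.0, 3.0], [1.1, 1.9, 3.0]))
```

In a real run the deviance samples come directly from the sampler (WinBUGS monitors the deviance node automatically), and y_pred would be the posterior mean prediction for each observation.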
7.4 RESULTS AND DISCUSSION

7.4.1 Correlation analysis and ANOVA
Some basic statistical analysis was conducted on the raw data before the application of BBIM, including statistical description (Table 7.1), hierarchical cluster analysis (CA) (Figure 7.2), correlation analysis, and one-way analysis of variance (ANOVA). The analysis was aimed at the primary verification of the model structure in Equation 7.1, and the two assumptions for k in Equations 7.10 and 7.11. The correlation analysis for BOD versus water flow and NH4+ versus water flow (Figure 7.3) indicated that the concentration will increase with increasing water flow in the case study. This result fits the model in Equation 7.1, which supports the basic assumption of distributed sources in the study area. ANOVA was conducted to determine the differences in the water velocity, BOD and NH4+ concentration among the various years or sampling sites (Bordalo et al., 2001). The results can be helpful in determining the two alternative models for k in Equations 7.10 and 7.11. Given the significance level of p = 0.001, the variance analysis in Table 7.2 indicated significant differences in the water velocity, BOD and NH4+ at various sites, whereas there was no significant difference between various years, which fits the CA results of Figure 7.2.
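A one-way ANOVA of the kind summarised in Table 7.2 can be sketched with scipy. The three site series below are synthetic draws (means loosely inspired by the Table 7.1 site means), not the monitoring record:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Ten hypothetical annual BOD values (mg/l) at three sampling sites
site_h1 = rng.normal(5.5, 1.1, 10)
site_h2 = rng.normal(16.5, 2.6, 10)
site_h6 = rng.normal(41.4, 10.5, 10)

f_stat, p_value = f_oneway(site_h1, site_h2, site_h6)
# A small p-value indicates significant between-site differences,
# analogous to rows (3) "BOD for different segments" in Table 7.2
```

Grouping the same observations by year instead of by site is what produces the non-significant F ratios reported for the "different years" rows, supporting the spatially variable ki formulation.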
7.4.2
Parameter estimation
First, the two estimation assumptions for k were compared based on BBIM, using DIC and R². The results in Table 7.3 indicate that ki is more precise than kj for fitting NH4+ and BOD, which means that the first-order decay rate (k) is significantly different among the various river segments because of the physical characteristics and
Table 7.1: Statistical description of water quality and water velocity of the Hun–Taizi river system.

(a) NH4+/mg L⁻¹

Sampling site   Mean       Minimum    Maximum    Standard deviation
H1              0.17180    0.017000   0.35500    0.13272
H2              1.71360    0.703000   3.20500    0.74322
H3              2.85060    1.420000   4.77700    0.99173
H4              2.52800    0.979000   5.22300    1.52217
H5              9.77090    5.050000   15.43400   2.87516
H6              18.56200   8.347000   41.32000   10.37542
H7              13.33790   6.583000   22.58300   5.89801
T1              0.20990    0.025000   0.38700    0.10129
T2              3.84590    1.807000   6.85700    1.50170
T3              0.87790    0.335000   1.51000    0.36340
T4              0.74530    0.100000   1.18700    0.36079
T5              1.51730    0.390000   2.84700    0.86093
T6              2.76580    1.360000   3.75000    0.84256
T7              3.79770    2.390000   5.42800    1.04826
T8              3.62470    2.403000   5.22600    0.97602
DL1             6.61220    1.186000   11.35900   3.57966
DL2             3.97580    1.767000   7.10600    1.62785
DL3             2.57150    0.934000   3.89100    0.89089

(b) BOD/mg L⁻¹

Sampling site   Mean       Minimum    Maximum    Standard deviation
H1              5.46420    3.62400    6.9360     1.07785
H2              16.45020   12.45600   20.4060    2.58312
H3              17.34420   13.89000   22.1820    2.56503
H4              11.39880   8.76000    14.3580    2.02247
H5              23.36280   14.76000   33.1140    5.19177
H6              41.40840   25.26000   62.2320    10.51178
H7              28.55100   17.62200   40.5480    7.25088
T1              4.54200    3.00000    8.7840     1.62562
T2              17.75820   10.54800   25.7400    5.43075
T3              5.88060    2.86200    10.8600    2.33090
T4              8.59800    3.36000    20.1840    4.82195
T5              13.53960   5.29800    19.0200    5.55019
T6              18.84060   13.50000   25.7580    3.61360
T7              21.63000   14.88600   27.5040    4.27972
T8              20.80500   15.16800   24.9300    3.39249
DL1             25.42380   10.53000   40.7760    10.03419
DL2             52.24140   20.40000   80.2020    26.65616
DL3             60.66480   26.36400   100.9980   31.42640

(c) Water velocity/m d⁻¹

Sampling site   Mean        Minimum     Maximum     Standard deviation
H1              33.91       12.960      68.04       17.41
H2              1 294.65    177.660     3 095.82    1 204.53
H3              1 466.80    277.020     3 407.94    1 296.64
H4              3 022.87    183.600     9 304.20    2 953.47
H5              1 418.04    18.360      11 772.00   3 668.22
H6              3 349.40    446.580     10 807.02   3 333.45
H7              3 876.17    547.560     10 038.60   3 411.07
T1              6 799.54    2 767.680   15 462.72   4 590.66
T2              4 574.06    2 027.520   9 573.12    1 956.42
T3              1 246.18    34.560      8 434.08    2 631.91
T4              142.99      59.040      306.72      68.05
T5              3 570.91    649.440     18 288.00   5 538.45
T6              11 051.57   4 226.400   26 998.56   8 339.77
T7              11 668.46   3 579.840   31 832.64   10 812.67
T8              14 117.33   5 761.440   29 780.64   8 528.72
DL1             2 484.19    1 065.616   4 205.62    1 151.12
DL2             2 255.70    994.423     3 826.29    1 053.22
DL3             2 970.47    2 258.331   3 838.01    548.77
Figure 7.2: Dendrogram showing spatial similarities of monitoring sites produced by CA.
Figure 7.3: The correlation analysis for BOD and NH4+ versus water flow.
NH4 BOD
log(BOD) 0.946 0.207 log(water flow) r2 0.2547; p = 0.0000
log (BOD), log (NH4)/mg L1
1.5 1.2 0.9 0.6 0.3 0.0 0.3 0.6 0.9 log(NH4) 0.048 0.297 log(water flow)
1.2 1.5
r2 0.1668; p = 0.0000 0.9
0.6
0.3
0
0.3
0.6
0.9
1.2
1.5
1.8
2.1
2.4
3 1
log (water flow)/m s
loading changes. This judgement fits well with the ANOVA results in Table 7.2. The model results below were based on ki in Equation 7.10. The fitting results between the observed data and the modelled values are shown in Figure 7.4 (see colour insert), with mean and median values and two confidence levels at 2.5% and 97.5%. The model results fit well with the observed data. Thus, the model can be used for practical water quality management. The Monte Carlo error (MC error) and sample standard deviation (SD), along with the confidence levels at 2.5% and 97.5%, for the posterior distributions of parameters in the NH4+ and BOD models are described in Table 7.4, Table 7.5 and Figure 7.5, respectively. The model results in Tables 7.4 and 7.5 and Figures 7.4 and 7.5 show that the parameters k and L have very small MC errors, which means that the model converged very well, with high accuracy (Huang and McBean, 2007).
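The per-parameter summaries reported in Tables 7.4 and 7.5 (mean, SD, MC error, 2.5%/median/97.5% quantiles) can be computed from any chain. The batch-means estimator of the Monte Carlo error used below is one common choice, and the chain itself is simulated, so this is only a sketch:

```python
import numpy as np

def posterior_summary(chain, n_batches=50):
    """Mean, SD, batch-means Monte Carlo error and central quantiles
    for a single parameter's MCMC chain."""
    chain = np.asarray(chain)
    means = np.array([b.mean() for b in np.array_split(chain, n_batches)])
    q025, med, q975 = np.percentile(chain, [2.5, 50.0, 97.5])
    return {"mean": chain.mean(),
            "sd": chain.std(ddof=1),
            "mc_error": means.std(ddof=1) / np.sqrt(n_batches),
            "2.5%": q025, "median": med, "97.5%": q975}

# Simulated, well-mixed chain for a decay rate around 0.34 (cf. k1 in Table 7.4)
rng = np.random.default_rng(7)
stats = posterior_summary(rng.normal(0.34, 0.075, 20000))
```

An MC error that is small relative to the posterior SD, as in Tables 7.4 and 7.5, indicates that the chain length is sufficient for the reported summaries.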
7.4.3
Load reduction based on BBIM model results
The main focus of water quality management is to control pollutant loading into the river segments to ensure that the water quality meets the set target. The water quality goals for each segment and sampling site are taken from the EPB of Liaoning Province, China. The maximum pollutant loadings (Lm) for NH4+ and BOD in each
Table 7.2: One-way ANOVA results for monitoring data of water velocity, BOD and NH4+.

                                      Sum of squares   Mean square   F       Signif.
(1) Water velocity for different segments
    Between groups                    307 083.90       18 063.76     8.45    < 0.001
    Within groups                     346 137.87       2 136.65
    Total                             653 221.77
(2) Water velocity for different years
    Between groups                    43 519.64        4 835.52      1.35    0.216
    Within groups                     609 702.13       3 586.48
    Total                             653 221.77
(3) BOD for different segments
    Between groups                    41 259.97        2 427.06      20.40   < 0.001
    Within groups                     19 277.14        119.00
    Total                             60 537.11
(4) BOD for different years
    Between groups                    3 307.08         367.45        1.09    0.372
    Within groups                     57 230.03        336.65
    Total                             60 537.11
(5) NH4+ for different segments
    Between groups                    4 042.78         237.81        24.20   < 0.001
    Within groups                     1 591.75         9.83
    Total                             5 634.52
(6) NH4+ for different years
    Between groups                    229.12           25.457        0.801   0.616
    Within groups                     5 405.41         31.797
    Total                             5 634.52
Table 7.3: Comparison of DIC and R² for the two models for the estimation of ki and kj.

                            Model   D̄(θ)       D(θ̄)       pD       DIC        R²
(1) NH4+ simulation model   ki      888.810    814.298    74.511   963.321    0.8495
                            kj      1222.730   1209.860   12.873   1235.600   0.4648
(2) BOD simulation model    ki      821.910    790.458    31.452   853.362    0.7195
                            kj      1064.030   1049.800   14.230   1078.260   0.0025
segment are shown in Table 7.6, assuming that the upper segment meets its set target. The aim is to inform decision-makers on how to control pollutant loading to meet water quality targets under various hydrological conditions. The historical loadings estimated from the inverse model, and the Lm values for three river velocity scenarios (mean, 2.5% and 97.5% confidence levels), were compared. The comparison shows that, for most of the river segments, the historical loading is beyond the Lm threshold. Thus, pollutant reduction for organic matter and nitrogen is necessary to meet the water quality targets.
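Dropping the error term and rearranging Equation 7.8 for L gives the maximum allowable load Lm for a segment once a target outflow concentration is fixed; the parameter values below are hypothetical, not the Table 7.6 figures.

```python
import numpy as np

def max_allowable_load(c_target, c0, k, v, x):
    """Largest diffuse load L such that the segment outflow equals
    c_target, from C = c0*exp(-kx/v) + (L/k)*(1 - exp(-kx/v))."""
    decay = np.exp(-k * x / v)
    return k * (c_target - c0 * decay) / (1.0 - decay)

# Hypothetical segment: target 2.0 mg/l at the outlet
k, v, x, c0, c_target = 0.2, 3000.0, 2e4, 1.0, 2.0
L_m = max_allowable_load(c_target, c0, k, v, x)

# Check: plugging L_m back into the forward model recovers the target
c_out = c0 * np.exp(-k * x / v) + (L_m / k) * (1.0 - np.exp(-k * x / v))
```

Because v enters Lm through the exponential, evaluating this expression at the mean, 2.5% and 97.5% velocity levels reproduces the scenario comparison described above: a historical load exceeding Lm signals a required reduction.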
Table 7.4: Summary statistics for posterior distributions of NH4+ model parameters.

Parameter   Mean    Standard deviation   MC error   2.50%   Median   97.50%
k1          0.337   0.075                0.003      0.212   0.330    0.499
k2          0.200   0.049                0.002      0.119   0.195    0.308
k3          0.242   0.055                0.002      0.150   0.236    0.368
k4          0.043   0.015                0.001      0.018   0.042    0.076
k5          0.033   0.008                0.000      0.020   0.032    0.051
k6          0.040   0.011                0.000      0.021   0.039    0.064
k7          0.061   0.040                0.001      0.012   0.052    0.159
k8          0.637   0.103                0.003      0.451   0.631    0.851
k9          0.645   0.119                0.004      0.422   0.643    0.870
k10         0.383   0.085                0.003      0.240   0.375    0.572
k11         0.204   0.051                0.002      0.118   0.199    0.318
k12         0.111   0.042                0.001      0.038   0.109    0.202
k13         0.146   0.040                0.001      0.077   0.143    0.235
k14         0.060   0.031                0.001      0.014   0.056    0.131
k15         0.176   0.036                0.001      0.113   0.173    0.253
k16         0.224   0.051                0.002      0.137   0.218    0.340
k.sigma     0.260   0.053                0.001      0.178   0.253    0.387

Note: ki is the estimated k value in river segment i; k.sigma is the variance.
Table 7.5: Summary statistics for posterior distributions of BOD model parameters.

Parameter   Mean    Standard deviation   MC error   2.50%   Median   97.50%
k1          0.182   0.038                0.002      0.120   0.179    0.262
k2          0.171   0.036                0.002      0.112   0.168    0.248
k3          0.262   0.053                0.003      0.174   0.257    0.375
k4          0.119   0.028                0.002      0.073   0.117    0.178
k5          0.074   0.016                0.001      0.048   0.072    0.108
k6          0.105   0.023                0.001      0.065   0.103    0.154
k7          0.091   0.044                0.002      0.018   0.086    0.186
k8          0.518   0.095                0.006      0.362   0.509    0.720
k9          0.374   0.076                0.004      0.248   0.367    0.537
k10         0.240   0.049                0.003      0.159   0.236    0.348
k11         0.160   0.035                0.002      0.101   0.157    0.235
k12         0.122   0.035                0.002      0.058   0.120    0.193
k13         0.136   0.032                0.002      0.081   0.134    0.204
k14         0.122   0.034                0.001      0.059   0.121    0.193
k15         0.052   0.018                0.001      0.021   0.051    0.090
k16         0.053   0.015                0.001      0.026   0.052    0.084
k.sigma     0.172   0.043                0.002      0.104   0.166    0.269
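The summary columns in Tables 7.4 and 7.5 are the usual posterior summaries of MCMC output. A sketch of how they can be computed from a vector of draws; note that the "naive" MC error sd/√n used here ignores autocorrelation, whereas WinBUGS reports a batch-means estimate, so these numbers would only approximate the tables' MC error column:

```python
import numpy as np

def posterior_summary(draws):
    """Posterior mean, sd, naive MC error and 2.5/50/97.5 percentiles
    for a 1-D array of MCMC draws of one parameter."""
    draws = np.asarray(draws, dtype=float)
    sd = draws.std(ddof=1)
    q025, med, q975 = np.percentile(draws, [2.5, 50.0, 97.5])
    return {
        "mean": draws.mean(),
        "sd": sd,
        "mc_error": sd / np.sqrt(draws.size),  # naive estimate
        "2.5%": q025,
        "median": med,
        "97.5%": q975,
    }

# Synthetic draws mimicking k1 of Table 7.4 (mean 0.337, sd 0.075):
rng = np.random.default_rng(1)
summary = posterior_summary(rng.normal(0.337, 0.075, size=10_000))
```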
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
Figure 7.5: The posterior densities for (a) NH4+ and (b) BOD models. Note: sigma denotes the variance for the model; L.sigma denotes the variance for L; and k.sigma denotes the variance for k.
Table 7.6: Comparison of historical L ranges with the maximum pollutant loading (Lm) of BOD under different river velocity scenarios.

Segment   Historical L range   Lm (mean)   Lm (2.5%)   Lm (97.5%)
S1        2.87–2.98            1.13        1.12        1.13
S2        2.89–2.96            1.08        1.07        1.13
S3        2.86–2.98            1.95        1.96        1.96
S4        2.83–2.99            1.18        1.12        1.51
S5        2.80–3.02            0.80        0.75        1.64
S6        2.83–3.01            1.12        1.00        1.56
S7        2.84–3.01            1.02        0.75        1.50
S8        2.73–3.06            2.29        2.29        2.30
S9        2.78–3.13            1.90        1.89        1.93
S10       2.71–3.02            1.40        1.41        1.40
S11       2.85–2.98            1.18        1.02        1.88
S12       2.89–2.93            1.70        1.29        2.18
S13       2.90–2.95            1.49        1.26        1.90
S14       2.75–3.00            2.27        1.91        2.53
S15       2.74–3.04            1.18        0.95        1.43
S16       2.80–3.00            1.18        0.96        1.43

7.5 CONCLUSION

A BBIM was developed in this study to aid water quality management in the Hun–Taizi river system, China. The model was fitted to water quality monitoring data from 1995 to 2004. The proposed model can integrate the three key issues in WQM: pollutant load estimation, parameter estimation, and uncertainties in model development and application. It should be a useful tool for decision-making in water quality management under uncertainty. The basic water quality model applied in this study is relatively simple, with a distributed-sources assumption. The model result is based on annual average data from a 10-year dataset. Thus, in spite of the model's capacity to handle uncertainties in WQM, it could be improved if a longer-period dataset were available for the better interpretation of water quality changes and seasonal variations. The pollutant loading could then be estimated on a seasonal basis, or even over a much shorter period. In addition, based on the load estimation from the BBIM presented in this study, future work will be devoted to a more accurate apportionment of sources.
ACKNOWLEDGEMENTS

This study was supported by the China National Water Pollution Control Programme (2008ZX07102-001).
PART III

Soil, Sediment and Subsurface Modelling and Pollutant Transport
CHAPTER 8

Colloid-Facilitated Contaminant Transport in Unsaturated Porous Media

Arash Massoudieh and Timothy R. Ginn

8.1 INTRODUCTION/EVIDENCE
Classical models of contaminant transport in porous media often consider contaminants in two major phases: dissolved, or bound to an immobile matrix. It has been shown over the past two decades that colloidal particles can also play a significant role in the transport of contaminants in the natural subsurface (McCarthy and Zachara, 1989), and therefore, in order to predict the transport of contaminants in porous media realistically, a third phase (the mobile colloidal phase) should also be considered. Colloids are defined here as particles for which Brownian forces matter: they generally range from 1 nm to 10 μm in characteristic diameter. Various types of colloid occur in groundwater, including colloids produced by mineral precipitation (Fe, Al, Ca and silica precipitates), colloids produced by rock or sediment fragmentation, organic colloids, and biocolloids such as bacterial cells and cell components, viruses and protozoa. Colloids have been shown to be ubiquitous in groundwater and in the vadose zone (McCarthy and Degueldre, 1993) and, because of their large surface area, they can act as agents for enhancing the migration of highly sorbing pollutants in porous media. In such media, colloids can be mobilised by chemical perturbations that decrease the net attractive force between the solid mineral phases and the colloidal particles, including a decrease in ionic strength or an increase in ions that can adsorb to the mineral surfaces and change their surface charge (Ryan and Elimelech, 1996). Hydrodynamic perturbations, such as rapid infiltration, episodic wetting and drying, and large increases in shear stress caused by changes in pore water velocity, have also been shown to cause detachment of colloidal particles from the solid minerals (Saiers and Lenhart, 2003a; Zhuang et al., 2007).
Modelling of Pollutants in Complex Environmental Systems, Volume II, edited by Grady Hanrahan. # 2010 ILM Publications, a trading division of International Labmate Limited.
Ryan and Elimelech (1996) suggested that three criteria should be met in order for the effect of colloidal particles on the transport of contaminants to be important:

•  colloidal particles should be present in large enough concentrations;
•  the contaminant should bind to the colloidal particles; and
•  the colloidal particles should be able to move in the porous medium faster than the contaminant would move in the absence of colloids.
There is strong evidence for the potential role of colloids in accelerating the movement of metal and transuranic contaminants in the natural subsurface. Nyhan et al. (1985) observed the migration of Pu and Am to distances several orders of magnitude greater than those expected from their equilibrium partitioning to the immobile solid phase, as quantified by the sorption distribution coefficient; however, they did not attribute this phenomenon explicitly to the role of colloidal particles. Short et al. (1988) observed a significant association of U and Th with colloidal particles consisting mainly of Fe and Si species, with sizes ranging from 18 nm to 1 μm. Buddemeier and Hunt (1988) found almost 100% of transition elements (Mn, Co) and lanthanide radionuclides (Ce, Eu) associated with colloidal particles at a site in Nevada. Ryan and Elimelech (1996) pointed out the inadequacy of two-phase models for highly sorbing contaminants in the presence of colloids. Penrose et al. (1990) also observed Am and Pu moving to unexpected distances, given the apparent partitioning coefficients of these metals, and suggested that this was due to irreversible association of the metals with mobile colloids. However, Marty et al. (1997) did not support the idea that colloid-facilitated transport played a role in the transport of radionuclides at this site, and argued that the 239Pu/238Pu ratios, and the fact that the radionuclides had been transported faster than the average groundwater flow, indicated that they had been transported through surface water. Colloid-facilitated transport is most frequently identified as a vehicle for heavier metals and radionuclides. Most of the Pu occurring in groundwater samples from the Nevada test site was associated with colloidal particles (Reimus et al., 2005), and the concentration of Pu in these samples was highly correlated with the number density of colloids. McCarthy et al.
(1998) identified organic matter colloidal fractions as significant facilitators of transuranic radionuclide transport in the karstic aquifer system at Oak Ridge National Laboratory. Novikov et al. (2006) noted for the Russian Mayak site that Pu transport rates of <1 km per 14 years were facilitated by iron hydroxide colloids, which probably arose from the activity of iron-reducing bacteria. Pang and Simunek (2006) highlighted the role of bacterial cells themselves in the conveyance of Cd, and radionuclides can also occur as biogenic mineral phases associated with cells (e.g., Suzuki et al., 2002). Acid mine drainage provides another context for the formation of colloid-associated contaminant phases involving metals. These solutions, resulting from the oxidation and dissolution of minerals brought to the surface, undergo slow neutralisation as they mix with ambient waters, leading to the reprecipitation of Fe and other metal phases as colloids (e.g., Accornero et al., 2005; Hochella et al., 2005), which can impact on hydrologically linked subsurface waters through flood events (e.g., Kretzschmar and Schafer, 2005).
The focus of this chapter is to review the modelling approaches used to quantify colloid-facilitated transport in porous media. Section 8.2 focuses on modelling the processes affecting the movement of colloids, and Section 8.3 on approaches to modelling contaminant transport in the presence of colloids. The focus is mainly on unsaturated porous media, although models constructed solely for saturated groundwater are also discussed, and their evolution into models applicable to unsaturated media is described. Section 8.4 discusses methods for incorporating the effects of geochemical and physical heterogeneities into the models, and Section 8.5 discusses the unknowns and future directions necessary to build more realistic contaminant transport models in the presence of mobile colloids.
8.2 COLLOID TRANSPORT IN UNSATURATED MEDIA
The basis for any colloid-facilitated contaminant transport model is a colloid transport model, and so it is important to first review the different approaches used to model colloid transport in saturated and unsaturated media before moving on to the topic of modelling colloid-facilitated transport. In unsaturated media (Figure 8.1), owing to the high variability in hydraulic shear stress, ionic strength, pH and moisture content as a result of wetting and drying cycles, colloid transport is more complicated than in saturated media. Also, processes such as film straining and entrapment in the air/water interface (AWI) may influence the movement of particles in unsaturated media. These processes are non-existent in the case of colloid transport in saturated porous media. Although different models have been proposed for the deposition and release of colloids in porous media, there is no consensus on a single modelling framework applicable to all circumstances. For further details of modelling approaches to colloid transport, the interested reader is referred to the paper by Ryan and Elimelech (1996), who extensively reviewed the various processes involved in colloid transport in porous media, and to that by DeNovio et al. (2004) for unsaturated media.
Figure 8.1: Colloid retardation mechanisms in unsaturated porous media (mobile; film strained; attached; attached to the AWI).
Typically, the colloid phase is treated as a suspension, and an advection–dispersion equation is used to model colloid transport in porous media:

$$\frac{\partial(\theta G_m)}{\partial t} + \rho_B\frac{\partial G_s}{\partial t} + \Gamma\frac{\partial G_a}{\partial t} + \frac{\partial(\theta v_c G_m)}{\partial z} = \frac{\partial}{\partial z}\left(D_c\,\theta\,\frac{\partial G_m}{\partial z}\right) \tag{8.1}$$

where $G_m$ [M/L³] is the mass concentration of mobile colloidal particles; $G_s$ [M/M] is the immobilised (attached, filtered, trapped) colloid concentration; $G_a$ [M/L²] is the concentration of colloids trapped at the air/water interface; $\Gamma$ is the surface area of the air/water interface; $\rho_B$ [M/L³] is the dry bulk density of the soil matrix; $\theta$ is the water content, equal to the porosity in a saturated medium; $s$ [M$_c$/M] is the sorbed contaminant concentration; $v_c$ [L/T] is the advective velocity for colloidal particles; and $D_c$ [L²/T] is the dispersion coefficient for colloidal particles. In cases where film straining is considered as a mechanism separate from entrapment at the AWI, an additional term representing it is added to the equation:

$$\frac{\partial(\theta G_m)}{\partial t} + \rho_B\frac{\partial G_s}{\partial t} + \Gamma\frac{\partial G_a}{\partial t} + \frac{\partial G_{str}}{\partial t} + \frac{\partial(\theta v_c G_m)}{\partial z} = \frac{\partial}{\partial z}\left(D_c\,\theta\,\frac{\partial G_m}{\partial z}\right) \tag{8.2}$$

where $G_{str}$ is the concentration of colloids entrapped by film straining, expressed as the mass of colloids per volume of bulk medium. Most colloid transport and colloid-facilitated transport models have ignored the generation of colloids by the erosion of colloidal material from the immobile solid phase. Massoudieh and Ginn (2007), after Sen et al. (2004), assumed that colloids attach irreversibly to the collectors, and that the source of the colloids being released is distinct from the colloids filtered previously. They therefore divided the attached colloid population ($G_s$) into two portions, irreversibly attached colloids ($G_{si}$) and immobile colloids available for release ($G_{sf}$):

$$\frac{\partial(\theta G_m)}{\partial t} + \rho_B\frac{\partial(G_{si}+G_{sf})}{\partial t} + \Gamma\frac{\partial G_a}{\partial t} + \frac{\partial G_{str}}{\partial t} + \frac{\partial(\theta v_c G_m)}{\partial z} = \frac{\partial}{\partial z}\left(D_c\,\theta\,\frac{\partial G_m}{\partial z}\right) \tag{8.3}$$

Although this approach considers a separate source for the colloids being released to the medium, it still assumes a finite quantity of immobile colloids available for release in the overall system. Thus it does not explicitly model colloid generation; however, by assuming a large initial $G_{sf}$ relative to the detachment rate, the generation of colloids from the solid matrix can be approximated. To consider the rate of generation of immobile colloids explicitly, an additional term representing the generation process (e.g., erosion) can be used:

$$\frac{\partial(\theta G_m)}{\partial t} + \rho_B\frac{\partial G_s}{\partial t} + \Gamma\frac{\partial G_a}{\partial t} + \frac{\partial G_{str}}{\partial t} + \frac{\partial(\theta v_c G_m)}{\partial z} = \frac{\partial}{\partial z}\left(D_c\,\theta\,\frac{\partial G_m}{\partial z}\right) + p_G\,\rho_B \tag{8.4}$$

where $p_G$ represents the specific rate of colloid generation from the soil matrix. As the source of colloid generation is the solid matrix, mass conservation also requires

$$\frac{\partial \rho_B}{\partial t} = -p_G\,\rho_B \tag{8.5}$$
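A reduced form of Equation 8.1 (mobile and attached phases only, constant water content and velocity, no AWI entrapment, film straining or generation) can be integrated with a simple explicit upwind finite-difference scheme. The sketch below is illustrative only: the function name, grid and all parameter values are our assumptions, and a first-order attachment/detachment law is used for the exchange term.

```python
import numpy as np

# Explicit upwind sketch of Eq. 8.1 reduced to mobile (Gm) and attached (Gs)
# phases with constant theta and v:
#   dGm/dt + v dGm/dz = D d2Gm/dz2 - k_att*Gm + k_det*rho_b*Gs/theta
#   rho_b dGs/dt = k_att*theta*Gm - k_det*rho_b*Gs
# All parameter values below are illustrative, not taken from the chapter.

def colloid_transport(nz=100, nt=2000, L=1.0, theta=0.35, v=1e-4,
                      D=1e-7, rho_b=1500.0, k_att=1e-4, k_det=1e-6,
                      Gm_in=1.0):
    dz = L / nz
    dt = 0.4 * min(dz / v, dz**2 / (2.0 * D))       # advective/diffusive limits
    Gm = np.zeros(nz)   # mobile colloid concentration [M/L^3]
    Gs = np.zeros(nz)   # attached colloid concentration [M/M]
    for _ in range(nt):
        Gm[0] = Gm_in                                  # fixed inlet concentration
        adv = -v * np.diff(Gm, prepend=Gm[0]) / dz     # upwind advection
        disp = D * np.diff(Gm, 2, prepend=Gm[0], append=Gm[-1]) / dz**2
        exch = k_att * Gm - k_det * rho_b * Gs / theta # net loss from Gm
        Gm = Gm + dt * (adv + disp - exch)
        Gs = Gs + dt * (k_att * theta * Gm - k_det * rho_b * Gs) / rho_b
    return Gm, Gs

Gm, Gs = colloid_transport()
```

The time step is chosen well inside both the advective (CFL) and diffusive stability limits, so the upwind scheme keeps concentrations non-negative and bounded by the inlet value.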
8.2.1 Attachment mechanisms
Quantifying the filtration rate of colloidal particles in a natural porous medium is a challenging scientific problem involving the physical and chemical characteristics of the mineral surfaces, the colloids and the pore water. Colloid filtration theory (CFT) has traditionally been used to model the attachment rate of colloidal particles. The basic assumption underlying CFT is that the rate of deposition of colloidal particles onto a porous medium is proportional to the frequency at which the mobile colloids come into contact with the grain surface, known as the collision efficiency ($\eta$), and the fraction of collisions that leads to attachment, known as the sticking efficiency ($\alpha$) (Rajagopalan and Tien, 1976), as in

$$\rho_b\frac{\partial G_s}{\partial t} = k_{att}\,\theta\,G_m \tag{8.6}$$

where

$$k_{att} = \frac{3(1-\varepsilon)\,\eta\,\alpha\,v_c}{4 a_c} \tag{8.7}$$

where $\varepsilon$ is the porosity of the medium, and the collision efficiency is defined by considering an idealised isolated spherical collector covered by a spherical liquid shell, referred to as the Happel sphere-in-cell model. The collision efficiency is defined as the fraction of particles entering the sphere-in-cell model that collide with the sphere surface, and is given by the deposition rate normalised by the total particle influx rate:

$$\eta = \frac{I}{\pi a_c^2\, v_c\, G_m} \tag{8.8}$$

where $a_c$ is the radius of the collector and $I$ is the deposition rate on the collector. Yao et al. (1971) suggested a relationship for the collision efficiency by adding the collision rates due to diffusion, sedimentation and interception onto an isolated sphere; the deposition rate due to each process was calculated analytically while ignoring the others. Expanding on this approach, Rajagopalan and Tien (1976) performed a deterministic numerical trajectory analysis over a wide variety of conditions, and then fitted a regression to their results to obtain a closed-form approximation for $\eta$ as a function of a set of non-dimensional variables defining the geometrical and physical properties of the colloids and the grain. Tufenkji and Elimelech (2004) included the effects of hydrodynamic interactions and van der Waals forces, and solved the convection–diffusion equation directly to obtain a revised relationship for the collision efficiency. Both Rajagopalan and Tien (1976) and Tufenkji and Elimelech (2004) assumed additivity of the various processes causing the deposition of particles, including gravity, Brownian diffusion, and hydrodynamic and van der Waals forces. Nelson and Ginn (2005) offered a new equation using stochastic direct particle tracking, and concluded that the effects of Brownian diffusion and advection cannot simply be added together. Of course, because of the idealisation of the porous medium by the Happel sphere-in-cell model, these equations should not be expected to have a high predictive capability, especially for porous media with non-uniform particle size (e.g., Ryan and Elimelech, 1996). In a heterogeneous medium where the matrix grains are not monodisperse, Equation 8.7 can be written in terms of the specific surface area $f$, defined as the surface area over the bulk volume of the soil:

$$k_{att} = \tfrac{1}{4}\, f\,\eta\,\alpha\, v_c \tag{8.9}$$
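Equations 8.7 and 8.9 are two forms of the same expression: for uniform spheres the specific surface area is $f = 3(1-\varepsilon)/a_c$, so $\tfrac14 f\eta\alpha v_c$ reduces to $3(1-\varepsilon)\eta\alpha v_c/(4a_c)$. A quick numerical check (all values illustrative; in practice $\eta$ would come from one of the collision-efficiency correlations discussed above):

```python
def k_att_sphere(eps, eta, alpha, v_c, a_c):
    """Attachment rate coefficient, Eq. 8.7 (uniform spherical collectors)."""
    return 3.0 * (1.0 - eps) * eta * alpha * v_c / (4.0 * a_c)

def k_att_specific_area(f, eta, alpha, v_c):
    """Attachment rate coefficient, Eq. 8.9 (specific surface area form)."""
    return 0.25 * f * eta * alpha * v_c

# Illustrative values: porosity 0.35, collector radius 0.25 mm,
# eta = 1e-3, alpha = 0.1, pore velocity 1e-4 m/s.
eps, a_c = 0.35, 2.5e-4
f = 3.0 * (1.0 - eps) / a_c    # specific surface area of uniform spheres
k1 = k_att_sphere(eps, 1e-3, 0.1, 1e-4, a_c)
k2 = k_att_specific_area(f, 1e-3, 0.1, 1e-4)
```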
Deposition of colloids on the grains can either increase (ripening) or decrease (blocking: e.g., Redman et al., 2001; Tufenkji et al., 2003) the deposition of incoming particles, depending on whether the forces between the particles are attractive or repulsive. Ripening was reported by Kretzschmar et al. (1995), who observed an increase in filter efficiency for colloid transport in intact cores of saprolite as increasing amounts of colloids were deposited. More recently, Tong et al. (2008) found ripening triggered by relatively subtle changes in solution chemistry and fluid velocity, and pointed out that the propensity to trigger ripening increased with the number and length of grain-to-grain contacts. To incorporate the blocking effect, a correction factor, referred to as the blocking function, is multiplied by the attachment rate coefficient (e.g., Dabros and van de Ven, 1982):

$$k_{att} = \tfrac{1}{4}\, f\,\eta\,\alpha\, v_c\, B(G_s) \tag{8.10}$$

The Langmuir isotherm has been used to model the effect of blocking (Kallay et al., 1987). The mass-based blocking function can be expressed as

$$B(G_s) = 1 - \frac{G_s}{G_{s,max}} \tag{8.11}$$

where $G_{s,max}$ is the maximum mass concentration of colloids that can be attached to a unit bulk mass of the medium. This approach does not take into account the areal exclusion effect of particles of finite size on the grain surfaces. Schaaf and Talbot (1989) analytically solved the random sequential adsorption (RSA) model using a statistical mechanics approach and found a third-order equation for the blocking function, expressed in terms of the mass concentration of attached colloidal particles:

$$B(G_s) = 1 - 4\frac{G_s}{G_{s,ch}} + 3.31\left(\frac{G_s}{G_{s,ch}}\right)^2 + 1.407\left(\frac{G_s}{G_{s,ch}}\right)^3 \tag{8.12}$$

where $G_{s,ch}$ is the characteristic attached concentration, defined as

$$G_{s,ch} = 0.728\, f\, a_p\, \rho_p \tag{8.13}$$

where $a_p$ is the radius of the colloids and $\rho_p$ is the mass density of the colloids. Johnson and Elimelech (1995) compared the Langmuir- and RSA-based dynamic blocking functions, and concluded that the RSA model does a better job of fitting observed breakthrough curves. Song and Elimelech (1993) offered a double-layer particle deposition model, and also considered the non-uniform distribution of deposition over a spherical collector. Song and Elimelech (1994) used a mono-layer model, but included the effect of the heterogeneity of the forces between colloidal particles and different patches of the porous medium by expressing the distribution of the surface potentials of surface sites as Gaussian.

CFT is applicable only under conditions favourable for the attachment of colloids, when the surface charges of the colloidal particles and the grains are opposite. Under unfavourable conditions, in theory, CFT predicts no attachment. Therefore, other mechanisms have been suggested to cause filtration under unfavourable conditions. These mechanisms include straining, wedging, and entrapment of colloids in flow-stagnation regions. Bradford et al. (2003) considered both attachment controlled by CFT and straining, assuming that attachment controlled by CFT is reversible, whereas straining is irreversible. They suggested a straining rate coefficient that decays with distance from the inlet of the porous medium:

$$\frac{\partial G_{str}}{\partial t} = \theta\, k_{str} \left(\frac{d_{50} + z}{d_{50}}\right)^{-\beta} G_m \tag{8.14}$$

where $k_{str}$ [T⁻¹] is the straining rate coefficient, $d_{50}$ is the median grain size of the medium, and $\beta$ is an empirical exponent. The problem with this formulation is that, in field-scale modelling, it is not always clear what the basis for $z$ should be. Bradford and Bettahar (2006) also took into account the saturation of sites by strained colloids through a Langmuirian-type blocking function:

$$\frac{\partial G_{str}}{\partial t} = \theta\, k_{str}\left(1 - \frac{G_{str}}{G_{str,max}}\right)\left(\frac{d_{50} + z}{d_{50}}\right)^{-\beta} G_m \tag{8.15}$$

where $G_{str,max}$ is the maximum solid-phase concentration of strained colloids. The dependence of the straining rate on travel distance can also be attributed to the filtration of larger colloids in regions closer to the inlet, and the reduced susceptibility of the remaining colloids to being trapped in small pores; to model this mechanism, however, a multidisperse colloid transport model should be used. Xu et al. (2006) suggested the following equation for the straining rate, in which the rate decays with the strained concentration:

$$\frac{\partial G_{str}}{\partial t} = k_0\, e^{-G_{str}/\lambda}\, G_m \tag{8.16}$$
where $k_0$ and $\lambda$ are empirical coefficients. This relationship does not predict a declining attachment rate with distance from the inlet, in contrast to the relationship of Bradford et al. (2003). In the representative elemental volume (REV) approach, the velocity distribution in pores of various diameters is replaced by one average velocity. In the case of straining, in addition to the attachment and detachment rates, which depend on the flow velocity, the straining is also controlled by the pore diameter, which is homogenised in an REV approach. Various researchers have suggested using a dual-porosity model to capture the effect of the pore-size distribution on colloid transport, including:

•  Corapcioglu and Wang (1999) and Bradford et al. (2009) for the general modelling context;
•  Woessner et al. (2001) for virus transport;
•  Kim and Corapcioglu (2002) for organic and bacterial colloids;
•  Schelde et al. (2002) for colloid release;
•  Robinson et al. (2003) for radionuclide transport at the Yucca Mountain site;
•  Dusek et al. (2006) for Cd transport in soils; and
•  Harter et al. (2008) for Cryptosporidium oocyst transport.
In spite of the progress made, there are still many challenges in understanding the processes involved in colloid attachment in real porous media, mainly because of the difficulty of capturing and upscaling the effects of the irregular geometries of the pores, the chemical and physical heterogeneities of the matrix and colloids, and the pore water chemistry on the attachment behaviour of colloids (Figure 8.2). Relationships developed for saturated porous media have been used in colloid transport models for unsaturated media without considering the effect of film straining (Simunek et al., 2006). Wan and Tokunaga (1997) used a probabilistic approach and, by assuming a power-law relationship between the film-straining rate, the average linear velocity, and the ratio of particle diameter to film thickness, suggested a relationship for the rate of straining as a function of matric potential.
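The Langmuirian (Equation 8.11) and RSA (Equation 8.12) blocking functions discussed above can be compared numerically; a minimal sketch (the function names are ours):

```python
def blocking_langmuir(gs, gs_max):
    """Langmuirian blocking function, Eq. 8.11."""
    return 1.0 - gs / gs_max

def blocking_rsa(gs, gs_ch):
    """Third-order RSA blocking function of Schaaf and Talbot, Eq. 8.12."""
    x = gs / gs_ch
    return 1.0 - 4.0 * x + 3.31 * x**2 + 1.407 * x**3

def gs_characteristic(f, a_p, rho_p):
    """Characteristic attached concentration, Eq. 8.13."""
    return 0.728 * f * a_p * rho_p

# Both functions equal 1 at zero coverage; the RSA form declines faster
# initially (slope -4 rather than -1 in normalised concentration):
b_lang = blocking_langmuir(0.1, 1.0)   # 0.9
b_rsa = blocking_rsa(0.1, 1.0)
```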
8.2.2 Detachment mechanisms

Detachment of colloids from surfaces has not been studied as extensively as attachment; CFT determines only the rate of attachment of colloids to surfaces. Early colloid-facilitated transport models ignored detachment altogether (Song and Elimelech, 1993, 1994; Yao et al., 1971); where detachment was included, it was typically modelled as a rate-limited process with a constant rate coefficient (Corapcioglu and Jiang, 1993; Jiang and Corapcioglu, 1993; followed by Bradford et al., 2003; Saiers and Hornberger, 1996; van de Weerd and Leijnse, 1997; van de Weerd et al., 1998):

$$\rho_b\frac{\partial G_s}{\partial t} = k_{att}\,\theta\,G_m - k_{det}\,\rho_b\,G_s \tag{8.17}$$

Figure 8.2: Various contaminant phases in unsaturated porous media in the presence of colloids (mobile/available; sorbed to the solid matrix; sorbed to colloids).
f H ð w c Þ rs
(8:18)
where Æh is the release coefficient, H is the Heaviside function, w is the shear stress exerted by the flow, and c is the critical shear stress. This relationship assumes that all the colloidal particles have the same critical shear stress, and also that the shear stress exerted by the flow is a uniform function of the velocity. In reality, it is expected that both the critical and the actual shear stress will be spatially variable. In that case, Equation 8.18 should be modified to pG ¼ Æh
f ðw Þ rs
(8:19)
where is a function that depends on the joint distribution of flow shear stress and the critical shear stress, and w is a hypothetical quantity representing the average shear stress. Schelde et al. (2002) hypothesised that the release of colloidal particles in unsaturated media does not depend on the flow shear stress, but is mainly a diffusionlimited process: they based this on the results of the column studies performed by Jacobsen et al. (1997). Based on this hypothesis they developed a dual-domain mobile/immobile diffusive transport model for colloid transport with constant colloid exchange rates between the domains. Jarvis et al. (1999) considered the effect of raindrop impact on the mobilisation of colloids close to the surface. They considered a limited source of colloids available that can replenish to reach a maximum level as a
result of processes during the wet and dry periods, including freezing/thawing and wetting/drying cycles. Majdalani et al. (2007) proposed a detachment model in which the detachment rate is a function of the cumulative detached mass at any location. They proposed the following relationship for the detachment rate:

k_{det} = k^{*}_{det}\, E(z, t)\,\theta v \qquad (8.20)

where k^{*}_{det} is the detachment rate constant, and E(z, t) is a function that expresses the effect of the past history of detachment at location z, related to the cumulative detached concentration through

E(z, t) = \left[1 + \lambda_1 X(z, t)\right] e^{-\lambda_2 X(z,t)} \qquad (8.21)

where λ_1 [L³/M] and λ_2 [L³/M] are model parameters, and X(z, t) is the cumulative detached concentration of particles from the surface at location z:

X(z, t) = \int_0^t p_G(\tau)\,\rho_b\, d\tau \qquad (8.22)
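The memory function of Equation 8.21 (as reconstructed here, with a decaying exponential) rises and then falls with cumulative detached mass; the short sketch below, with hypothetical λ values, makes that non-monotonic behaviour explicit.

```python
import math

# Detachment "memory" function of Majdalani et al. (2007), Equation 8.21:
#   E(X) = (1 + lam1*X) * exp(-lam2*X)
# with X the cumulative detached concentration. lam1 and lam2 are hypothetical.

def memory_E(X, lam1=10.0, lam2=2.0):
    return (1.0 + lam1 * X) * math.exp(-lam2 * X)

# For lam1 > lam2 the function peaks at X_star = (lam1 - lam2)/(lam1*lam2):
# detachment initially enhances further detachment, then the erodible
# source is exhausted and the rate decays.
X_star = (10.0 - 2.0) / (10.0 * 2.0)
```

This is only an illustration of the functional form; fitted values of λ_1 and λ_2 would come from column experiments such as those cited above.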
This model is based loosely on the consideration that detachment may modify the surface structure (pore space geometry) in ways that make the frequency of detachment depend on the cumulative mass removed from a particular location in the porous medium. In subsequent work, Majdalani et al. (2008) showed that wetting/drying in unsaturated porous media generates an erosive type of colloid release that increases with "time dried": they hypothesised that this is related to weakening of the soil matrix tension during longer drying periods, and that this effect may be greater than those caused by ionic strength or rainfall intensity (e.g., Tong et al., 2008). Saiers and Lenhart (2003a, 2003b) found that the source of colloids available to be detached depends on the moisture content and the flow velocity, so that, at higher flow rates, more colloids become available for release. Based on this observation they considered attached colloids to reside in multiple compartments, with the release of colloids from each compartment dictated by the relationship between the moisture content and a critical moisture content assigned to that compartment:

G_s = \sum_{i=1}^{NC} G_{s,i} \qquad (8.23)

and

\frac{\partial G_{s,i}}{\partial t} = k_{att}\,\frac{\theta}{\rho_b}\,\frac{1}{NC}\, G_m - k_{det,i}\, G_{s,i} \qquad (8.24)
where NC is the number of compartments, and the release rate from each compartment is controlled by the moisture content and the average linear pore water velocity according to the following relationship:
k_{det,i} = \begin{cases} 0 & \text{for } \theta < \theta_{cr,i} \\ k^{*}_{det}\, v^{k} & \text{for } \theta > \theta_{cr,i} \end{cases} \qquad (8.25)
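The threshold logic of Equations 8.23–8.25 can be sketched in a few lines; the critical moisture contents, rate constant and exponent below are hypothetical.

```python
# Multi-compartment colloid release in the spirit of Equations 8.23-8.25:
# compartment i releases colloids only once the moisture content exceeds its
# critical value theta_cr[i]. All numbers are hypothetical.

def release_rates(theta, v, theta_cr, k_det0=0.2, kappa=1.0):
    """Per-compartment detachment rate coefficients k_det_i (Equation 8.25)."""
    return [k_det0 * v**kappa if theta > tc else 0.0 for tc in theta_cr]

theta_cr = [0.10, 0.20, 0.30, 0.40]   # critical moisture contents

# At theta = 0.25 only the first two compartments are hydraulically active.
rates = release_rates(theta=0.25, v=0.5, theta_cr=theta_cr)
```

Raising the moisture content activates additional compartments, reproducing the observation that higher flow rates make more colloids available for release.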
The Derjaguin–Landau–Verwey–Overbeek (DLVO) force plays an important role in holding colloidal particles on surfaces: any change in this force, caused by factors such as a change in the chemical composition of the pore water, can therefore release colloidal particles. This role has been observed in several studies (Roy and Dzombak, 1996, 1997; Ryan and Gschwend, 1994); however, owing to the complicated dependence of colloid mobilisation on the chemical properties, it is less thoroughly quantified. Saiers and Lenhart (2003b) addressed this within their multiple-compartment model by specifying that each compartment has a different threshold for release of colloids as a function of ionic strength (Equation 8.23), with

\frac{\partial G_{s,i}}{\partial t} = k_{att}\,\frac{\theta}{\rho_b}\,\frac{1}{NC}\left(1 - \frac{G_{s,i}}{G_{s\,max}}\right) G_m - k_{det,i}\, G_{s,i} \qquad (8.26)

where k_att and G_s max depend on the ionic strength, and the detachment rate coefficient from compartment i, k_det,i, is controlled by the relationship between the ionic strength IS and the critical ionic strength IS_cr assigned to compartment i:

k_{det,i} = \begin{cases} 0 & \text{for } IS > IS_{cr,i} \\ k^{*}_{det}\, e^{\left(NC\,\gamma\, G_{s,i}/G_{s\,max}\right)} & \text{for } IS < IS_{cr,i} \end{cases} \qquad (8.27)

More general conceptual models build on the way in which DLVO theory specifies the interaction forces between colloids and surfaces under variably favourable conditions. Smets et al. (1999) found that the DLVO secondary energy minimum plays a role in the detachment frequency of Pseudomonas fluorescens cells. In particular, they found that DLVO-predicted energy barriers underestimated cell collision efficiencies, suggesting that secondary energy minimum interactions governed the initial attachment of cells. The partial reversibility of adhesion on reduction of the ionic strength supported the secondary-minimum interaction hypothesis. Redman et al.
(2004) furthered this hypothesis, and reported a sizeable increase in attachment with fluid ionic strength despite DLVO-calculated repulsive electrostatic interactions, and hypothesised that deposition is likely to occur in the secondary energy minimum, which DLVO calculations indicate increases in depth with ionic strength. The general application of DLVO theory to colloid and/or microbial transport in porous media has been critically reviewed in Grasso et al. (2002), in Bostrom et al. (2001), in Hermansson (1999) with particular reference to microbial colloids and their adhesion to surfaces, and in Strevett and Chen (2003), who introduced extended DLVO theory to account for hydration forces. More general models of the way in which detachment frequency depends on residence time while attached have a basis in the works of van de Ven and co-workers (e.g., Adamczyk et al., 1983; Dabros and van de Ven, 1982). Dabros and van de Ven (1982) specify a convolution form for the detachment rate as follows:
J_{net}(t) = J_{att}(t) - J_{det}(t) \qquad (8.28)

where J_net(t) is the net rate of transfer from the aqueous to the attached phase, J_att(t) is the rate of attachment, and J_det(t) is the rate of detachment, which involves a dependence on the residence time while attached:

J_{det}(t) = \int_0^t \beta(t - \tau)\, J_{net}(\tau)\, d\tau \qquad (8.29)
where β(t) is the desorption rate coefficient as a function of the residence time sorbed, t. Dabros and van de Ven (1982) suggested that β(t) is a decreasing function of t, so that it represents bond strengthening, and wrote a model for it that drops exponentially from an initial value to a lower asymptotic value. Meinders et al. (1992) recognised this generality and put it to use in the analysis of data from a microflow (parallel-plate) chamber, hypothesising the same form for β(t). This approach was subsequently employed by Johnson et al. (1995), who used a step-function form for β(t). Ginn (2000) generalised the approach further by making attachment/detachment a function of residence time in any phase, with different memories of residence-time effects, and Talbot (1996) provided a general mathematical framework for residence time-dependent desorption.
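The convolution in Equation 8.29 discretises straightforwardly. The sketch below uses the exponentially decaying β(t) suggested by Dabros and van de Ven; all numerical values are hypothetical.

```python
import math

# Discretised residence-time convolution of Dabros and van de Ven
# (Equation 8.29): the detachment flux at time t weights the past net
# attachment flux by a desorption coefficient beta that decays with time
# attached (bond strengthening). All parameter values are hypothetical.

def beta(age, b0=1.0, b_inf=0.1, decay=2.0):
    """Desorption rate coefficient as a function of time since attachment."""
    return b_inf + (b0 - b_inf) * math.exp(-decay * age)

def detachment_flux(J_net_history, dt):
    """J_det at the final time, given samples J_net_history[0..n-1]."""
    n = len(J_net_history)
    t = (n - 1) * dt
    return sum(beta(t - i * dt) * J_net_history[i] for i in range(n)) * dt

# Recently attached colloids (small age) contribute more to J_det than
# colloids attached long ago, because beta decreases with age.
J_d = detachment_flux([1.0] * 100, dt=0.01)
```

A step-function β(t), as in Johnson et al. (1995), would simply replace the exponential in `beta` with a conditional.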
8.2.3 Air/water interface
Unsaturated conditions can affect the movement of colloids in several ways, including the formation of discontinuous capillary fringes that prevent the movement of colloids, attachment to the AWI, forced deposition onto surfaces owing to the presence of thin films of water (referred to as film straining), and the effect of wetting and drying on colloid mobilisation (Chu et al., 2001; McCarthy and McKay, 2004; Powelson and Gerba, 1994). Several researchers have found higher rates of colloid removal under unsaturated than under saturated conditions (Gargiulo et al., 2008; Jewett et al., 1999; Keller and Sirivithayapakorn, 2004; Powelson and Gerba, 1994; Sirivithayapakorn and Keller, 2003). Sirivithayapakorn and Keller (2003) observed that the attachment of colloids to the AWI is irreversible as long as the AWI exists; however, during imbibition and the disappearance of the AWI, colloids can be remobilised into the aqueous phase. Similar behaviour was observed by Torkzaban et al. (2006) in their experiments on the transport of viruses under variably saturated conditions. Wan and Wilson (1994) suggested that the presence of the air phase can decrease the mobility of colloids owing to irreversible attachment of the colloids to the AWIs of stationary air phases. However, they also pointed out that attachment to the AWI can increase mobilisation in the case of interfaces moving during imbibition or bubbling processes. Chu et al. (2001) attributed the larger retardation in unsaturated conditions mainly to higher attachment rates to the solid/water interface as a result of the presence of the air phase (film straining) rather than to attachment to the AWI. On the other hand, Lazouskaya and Jin (2008) observed a higher concentration of colloids near the AWI relative to the bulk solution for relatively hydrophilic colloids, suggesting that the AWI is favourable for colloid attachment. Corapcioglu and Choi (1996) and
Choi and Corapcioglu (1997) modelled the entrapment of colloids at the AWI as a reversible process with linear kinetics, using the volumetric concentration of colloids per air volume. Sim and Chrysikopoulos (2000) expressed the concentration of colloids entrapped at the AWI as the mass of colloids per area of the AWI, calculated using the relation suggested by Cary (1994):

\frac{\partial G_a}{\partial t} = \theta\, k_{wa}\, G_m - \theta\, k_{aw}\,\Gamma\, G_a \qquad (8.30)

and
\Gamma = \frac{2\theta_s\, b}{r_0}\,\frac{\theta_s^{1-b} - \theta^{1-b}}{1-b} + \beta\,\frac{\theta_s - \theta}{\theta_r} \qquad (8.31)
where θ_s and θ_r are the saturated and residual water contents, respectively; r_0 is the effective pore radius; and β and b are empirical parameters. Massoudieh and Ginn (2007) used the equation suggested by Cary (1994); however, they assumed that the colloid exchange between the AWI and the bulk aqueous phase is fast enough for an equilibrium partitioning to be assumed:

G_a = K_{aw}\, G_m \qquad (8.32)

where K_aw is the distribution coefficient of colloids between the water and air phases. Simunek et al. (2006) used a modified version of the kinetic mass exchange model that considers the availability of space on the AWI for colloid attachment:

\frac{\partial (\Gamma G_a)}{\partial t} = \theta\, k_{wa}\,\psi_{aca}\, f_a\, G_m - k_{aw}\,\Gamma\, G_a \qquad (8.33)

where ψ_aca is a dimensionless colloid retention function and f_a (dimensionless) is the fraction of the air/water area available for attachment. The AWI area relationship suggested by Bradford and Leij (1997) is

\Gamma = \frac{\rho_w g}{\sigma_{aw}} \int_{\theta_w}^{n} \psi(\theta)\, d\theta \qquad (8.34)

where ψ(θ) is the matric potential, n is the porosity, and σ_aw is the surface tension at the AWI.
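Taking the reconstructed form of Equation 8.34 at face value, the AWI area can be evaluated by simple quadrature. The ψ(θ) curve (treated as a positive capillary head) and all parameter values below are purely illustrative.

```python
# Trapezoidal-rule evaluation of the Bradford and Leij (1997) AWI-area
# relation as given in Equation 8.34:
#   Gamma = (rho_w * g / sigma_aw) * integral from theta_w to n of psi d(theta)
# The psi(theta) curve and all numbers are hypothetical.

def awi_area(psi, theta_w, n_por, rho_w=1000.0, g=9.81, sigma_aw=0.072, steps=1000):
    h = (n_por - theta_w) / steps
    thetas = [theta_w + i * h for i in range(steps + 1)]
    total = sum(psi(t) for t in thetas) - 0.5 * (psi(thetas[0]) + psi(thetas[-1]))
    return rho_w * g * h * total / sigma_aw

# Hypothetical capillary-head curve (in metres), steepening as theta drops.
psi = lambda theta: 1e-4 * (0.4 / theta) ** 2

area = awi_area(psi, theta_w=0.1, n_por=0.4)   # AWI area per unit volume, 1/m
```

As expected, the predicted interfacial area grows as the water content θ_w decreases, since a larger span of the capillary curve contributes to the integral.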
8.2.4 Plugging
Plugging is the reduction in porosity and hydraulic conductivity of the porous medium due to entrapment of the colloidal particles. Analysis of plugging effects has particular implications in quantifying the performance of sand filters in water treatment facilities. In contaminant fate and transport models, plugging can become important when large quantities of colloidal particles are present. This condition can occur in stormwater infiltration basins, for example, when large quantities of particles are entrapped in the surface layers of the sediments, causing the permeability to decrease. A few researchers have studied plugging using quantitative approaches. Sen et al.
(2002) suggested calculating the porosity changes via a mass balance on the entrapped colloid concentration, and then used the equation suggested by Khilar et al. (1985) to obtain the hydraulic conductivity as a function of porosity:

\frac{\partial \phi}{\partial t} = -\frac{\rho_b}{\rho_f}\,\frac{\partial G_s}{\partial t} \qquad (8.35)

and

K = K_0 \left(\frac{\phi}{\phi_0}\right)^{2} \qquad (8.36)

where K is the hydraulic conductivity, ρ_f is the average density of the colloidal particles, and K_0 and φ_0 are the reference hydraulic conductivity and porosity, respectively. Alternative approaches exist, all of which link the change in hydraulic conductivity to colloid deposition through the reduction of the pore space. Because the change in porosity resulting from the deposition of a specified number of colloids with known geometry is a relatively reliable calculation, the robustness of these approaches depends primarily on the robustness of the link between porosity and conductivity. Bedrikovetsky et al. (2001) used the following empirical hyperbolic relationship to predict the decrease in hydraulic conductivity as a result of colloid filtration:

K = \frac{K_0}{1 + \beta\, G_s} \qquad (8.37)

where β is an empirical parameter. Shapiro et al. (2007) employed a stochastic approach, considering a distribution of pore sizes and colloid sizes and using a population balance model that takes into account the dynamics of the distributions as a result of colloid entrapment. Using several approximations, they showed that this method leads to the following relationship for the permeability, which is a generalisation of Equation 8.36:

K = K_0 \left(\frac{\phi}{\phi_0}\right)^{\chi} \qquad (8.38)

where χ is a parameter that depends on the statistical properties of the pore and colloid sizes.
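The closures in Equations 8.36–8.38 are simple algebraic maps from porosity or retained colloid mass to conductivity; the sketch below uses hypothetical values for K_0, φ_0, β and the exponent.

```python
# The permeability-reduction closures of Equations 8.36-8.38 with
# hypothetical parameters: a power law in porosity (Khilar et al., 1985;
# generalised by Shapiro et al., 2007) and the hyperbolic dependence on
# retained colloid mass of Bedrikovetsky et al. (2001).

def k_power(phi, K0=1e-4, phi0=0.35, expo=2.0):
    """K = K0 * (phi/phi0)**expo  (Equations 8.36 and 8.38)."""
    return K0 * (phi / phi0) ** expo

def k_hyperbolic(G_s, K0=1e-4, beta=50.0):
    """K = K0 / (1 + beta*G_s)  (Equation 8.37)."""
    return K0 / (1.0 + beta * G_s)
```

Both closures return the clean-bed conductivity K_0 when no colloids are retained and decrease monotonically as the pores plug.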
8.3 COLLOID-FACILITATED TRANSPORT
Colloid-facilitated transport models have mostly been developed from mass balances on the species dissolved in the aqueous phase, sorbed to mobile and immobile colloids, and sorbed to the solid matrix. Different models arise from: (1) the different levels of complexity considered in the colloid transport model; and (2) the different levels of complexity considered in the treatment of mass exchange between the aqueous phase
and the mobile and immobile solid phases (e.g., equilibrium/kinetic, single-site/multiple-site).
8.3.1 Fully equilibrium models
Early approaches to the modelling of colloid-facilitated transport involved adjusting the retardation factor for the transport of contaminants to reflect their additional mobility due to association with mobile colloids (e.g., Magee et al., 1991), or adjusting the effective advective velocity and dispersion coefficients (e.g., Enfield and Bengtsson, 1988). More recent models have been placed on an improved mechanistic basis by considering dissolved, sorbed (immobile) and sorbed-to-colloid (mobile) phases. The associated three-phase equation for colloid-facilitated transport in the presence of homogeneous colloids can be written as

\frac{\partial C}{\partial t} + \frac{\partial (G_m \sigma_c)}{\partial t} + \frac{\rho_B}{\theta}\frac{\partial \sigma_s}{\partial t} + \frac{\rho_B}{\theta}\frac{\partial (G_s \sigma_{cs})}{\partial t} + v\frac{\partial C}{\partial z} + v_c\frac{\partial (G_m \sigma_c)}{\partial z} = \frac{\partial}{\partial z}\left(D\frac{\partial C}{\partial z} + D_c\frac{\partial (G_m \sigma_c)}{\partial z}\right) \qquad (8.39)

where C [M_c/L³] is the mass concentration of dissolved contaminants in the pore water (M_c refers to the dimension for the mass of contaminants); σ_s [M_c/M] is the chemical concentration sorbed to immobile surfaces, σ_c [M_c/M] is the chemical concentration sorbed to mobile colloids, and σ_cs [M_c/M] is the chemical concentration sorbed to immobile colloids; v [L/T] is the advective velocity for dissolved species; and D [L²/T] is the dispersion coefficient for dissolved contaminants. Other terms are as previously defined. Assuming equilibrium between attached and mobile colloids, as well as between the aqueous and all sorbed contaminant concentrations,

G_s = K_s\, G_m \qquad (8.40)

\sigma_c = K_{Dc}\, C \qquad (8.41)

\sigma_s = K_{Ds}\, C \qquad (8.42)

and

\sigma_{cs} = K_{Dcs}\, C \qquad (8.43)

where K_s is the equilibrium colloid distribution coefficient between the attached and mobile colloid phases, and K_Dc, K_Ds and K_Dcs are the contaminant equilibrium distribution coefficients between the bulk aqueous phase and the mobile colloidal particles, the solid matrix, and the immobile colloidal particles, respectively. Incorporating Equations 8.40–8.43 into Equation 8.39, and also assuming that the advective velocities and dispersion coefficients are equal for colloidal particles and aqueous species, yields
\frac{\partial C}{\partial t} + K_{Dc}\frac{\partial (G_m C)}{\partial t} + \frac{\rho_B}{\theta}K_{Ds}\frac{\partial C}{\partial t} + \frac{\rho_B}{\theta}K_s K_{Dcs}\frac{\partial (G_m C)}{\partial t} + v\frac{\partial C}{\partial z} + vK_{Dc}\frac{\partial (G_m C)}{\partial z} = \frac{\partial}{\partial z}\left(D\frac{\partial C}{\partial z} + DK_{Dc}\frac{\partial (G_m C)}{\partial z}\right) \qquad (8.44)

Now, assuming no change in the total concentration of colloidal particles with time, one can write

\left(1 + K_{Dc} G_m + \frac{\rho_B}{\theta}K_{Ds} + \frac{\rho_B}{\theta}K_s K_{Dcs} G_m\right)\frac{\partial C}{\partial t} + v\left(1 + K_{Dc} G_m\right)\frac{\partial C}{\partial z} = \left(1 + K_{Dc} G_m\right)\frac{\partial}{\partial z}\left(D\frac{\partial C}{\partial z}\right) \qquad (8.45)

This equation can be expressed either as a simple advection–dispersion equation (ADE) with adjusted retardation factors (e.g., Magee et al., 1991), or with adjusted advective velocities and dispersion coefficients, as in Enfield and Bengtsson (1988). Mills et al. (1991) also used this approach, but considered multicomponent contaminant transport in variably saturated media. The problem with this approach is that it assumes equilibrium between mobile and immobile colloids, and also ignores the spatial and temporal variations of the colloid concentration in the medium.
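Dividing Equation 8.45 through by (1 + K_Dc G_m), with G_m uniform, recasts it as a standard ADE with an effective retardation factor. The sketch below evaluates that factor for hypothetical coefficient values.

```python
# Effective retardation factor implied by Equation 8.45:
#   R_eff = [1 + K_Dc*G_m + (rho_B/theta)*K_Ds + (rho_B/theta)*K_s*K_Dcs*G_m]
#           / (1 + K_Dc*G_m)
# All coefficient values are hypothetical.

def r_eff(G_m, K_Dc=100.0, K_Ds=10.0, K_Dcs=10.0, K_s=0.5, rho_B=1.5, theta=0.3):
    num = (1.0 + K_Dc * G_m + (rho_B / theta) * K_Ds
           + (rho_B / theta) * K_s * K_Dcs * G_m)
    return num / (1.0 + K_Dc * G_m)

no_colloids = r_eff(G_m=0.0)    # reduces to the classic R = 1 + (rho_B/theta)*K_Ds
with_colloids = r_eff(G_m=0.01)
```

For a strongly sorbing contaminant (large K_Dc), mobile colloids lower the effective retardation, which is precisely the enhanced mobility that motivates colloid-facilitated transport modelling.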
8.3.2 Dynamics of colloid transport and kinetic solid–water mass transfer
Additional complexity in colloid-facilitated transport modelling is achieved by using more sophisticated colloid transport models, as well as more sophisticated models of chemical mass exchange between the different phases. Corapcioglu and Jiang (1993) and Jiang and Corapcioglu (1993) developed a colloid-facilitated transport model in granular porous media in which contaminants reside in three phases: attached to the soil matrix; attached to mobile and immobile colloids; and dissolved in the aqueous phase. They considered colloid entrapment and release to be kinetically controlled, but instantaneous equilibrium to be attained between the contaminants and all solid phases. In this case, the relationship expressing the rate of change of attached colloidal particles with time replaces Equation 8.40:

\rho_b \frac{\partial G_s}{\partial t} = k_{att}\,\theta\, G_m - k_{det}\,\rho_b\, G_s \qquad (8.46)

In the case where all the chemical mass exchange processes between the solid and aqueous phases are considered to be kinetically controlled, the mass balance equations for the concentrations sorbed to the soil matrix and to the mobile and immobile colloidal phases can be written as

\frac{\partial \sigma_s}{\partial t} = -k_{sa}\left(\sigma_s - K_{Ds}\, C\right) \qquad (8.47)
\frac{\partial (G_m \sigma_c)}{\partial t} + v_c\frac{\partial (G_m \sigma_c)}{\partial z} - \frac{\partial}{\partial z}\left(D_c\frac{\partial (G_m \sigma_c)}{\partial z}\right) = -k_{ca}\, G_m\left(\sigma_c - K_{Dc}\, C\right) - \frac{1}{\theta}\left(k_{att}\,\theta\, G_m \sigma_c - k_{det}\,\rho_b\, G_s \sigma_{cs}\right) \qquad (8.48)

\frac{\partial (G_s \sigma_{cs})}{\partial t} = -k_{csa}\, G_s\left(\sigma_{cs} - K_{Dcs}\, C\right) + \frac{1}{\rho_b}\left(k_{att}\,\theta\, G_m \sigma_c - k_{det}\,\rho_b\, G_s \sigma_{cs}\right) \qquad (8.49)
where k_sa [1/T], k_ca [1/T] and k_csa [1/T] are the mass exchange rate coefficients between the bulk aqueous phase and the soil matrix, the mobile colloids and the immobile colloids, respectively. The first terms in Equations 8.48 and 8.49 represent the kinetic contaminant exchange between the mobile and immobile colloids and the bulk aqueous phase, and the second terms represent the effect of deposition and detachment of colloids to/from the surfaces. Saiers and Hornberger (1996) implemented a so-called "two-box" approach by categorising the sorption sites on the colloidal particles into two types, considering the aqueous–solid mass exchange for the first type of site to be at equilibrium, with Langmuir isotherms, and kinetically controlled with a linear isotherm for the second type of site. Therefore, the mass balance for the sorbed phases σ_c, σ_s and σ_cs was written as

\sigma_{c1} = K_{Dc}\, C \qquad (8.50)

\sigma_{s1} = K_{Ds}\, C \qquad (8.51)

\sigma_{cs1} = K_{Dcs}\, C \qquad (8.52)

\frac{\partial \sigma_{s2}}{\partial t} = -k_{sa}\left(\sigma_{s2} - K_{Ds}\, C\right) \qquad (8.53)

\frac{\partial (G_m \sigma_{c2})}{\partial t} + v_c\frac{\partial (G_m \sigma_{c2})}{\partial z} - \frac{\partial}{\partial z}\left(D_c\frac{\partial (G_m \sigma_{c2})}{\partial z}\right) = -k_{ca}\, G_m\left(\sigma_{c2} - K_{Dc}\, C\right) - \frac{1}{\theta}\left(k_{att}\,\theta\, G_m \sigma_{c2} - k_{det}\,\rho_b\, G_s \sigma_{cs2}\right) \qquad (8.54)

\frac{\partial (G_s \sigma_{cs2})}{\partial t} = -k_{csa}\, G_s\left(\sigma_{cs2} - K_{Dcs}\, C\right) + \frac{1}{\rho_b}\left(k_{att}\,\theta\, G_m \sigma_{c2} - k_{det}\,\rho_b\, G_s \sigma_{cs2}\right) \qquad (8.55)
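The two-box idea — instantaneous equilibrium on type-1 sites plus kinetic relaxation on type-2 sites — can be sketched numerically. The function below is a hypothetical illustration (linear isotherms, a single Euler step), not the Saiers–Hornberger implementation itself.

```python
# "Two-box" sorption sketch in the spirit of Equations 8.50-8.55: type-1
# sites equilibrate instantly (linear isotherm), type-2 sites relax
# kinetically towards K_Ds2*C. All parameter values are hypothetical.

def two_box_step(C, sigma_s2, K_Ds1=1.0, K_Ds2=2.0, k_sa=0.5, dt=0.01):
    sigma_s1 = K_Ds1 * C                                       # equilibrium sites
    sigma_s2 = sigma_s2 - k_sa * (sigma_s2 - K_Ds2 * C) * dt   # kinetic sites
    return sigma_s1 + sigma_s2, sigma_s2

s_tot, s2 = two_box_step(C=1.0, sigma_s2=0.0)
```

Splitting the sites this way lets fast and slow exchange coexist, which is what allows such models to reproduce the long desorption tails seen in breakthrough data.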
Roy and Dzombak (1998) applied this "two-box" approach to model the transport of hydrophobic organic compounds in the presence of colloids. Turner et al. (2006) used the two-box model of Saiers and Hornberger (1996) to investigate the effect of the desorption rate from the colloidal particles by observing the transport of Cs (slower desorption) and Sr (faster desorption) in the presence of colloids, and found that the desorption rate has a significant impact on the role of colloids in contaminant transport. Choi and Corapcioglu (1997) developed the first colloid-facilitated transport model in unsaturated porous media for non-volatile contaminants. They introduced a new
colloidal phase to represent colloids captured at the AWI, assumed the AWI area to be proportional to the volumetric air content, and used a linear kinetic mass exchange relationship to model colloidal entrapment and release at the AWI. Thus, for colloid-facilitated transport in the unsaturated zone there are three distinct phases in which the colloids themselves can occur: aqueous mobile colloids, colloids attached to the solid phase, and colloids attached to the AWI. Corapcioglu and Wang (1999) developed a dual-porosity (mobile–immobile) model for colloid and colloid-facilitated transport using an equilibrium sorption assumption, aiming to capture the effects of pore-scale heterogeneity and preferential flow. Later, Bekhit and Hassan (2005b) used this as the basis for their two-dimensional colloid-facilitated contaminant transport model. As noted above, several researchers have found that the process of colloid entrapment can be considered irreversible as long as the physical and chemical factors affecting the mobilisation of the colloids, including ionic strength, pH, pore water velocity and moisture content, are constant. At the same time, colloids might be produced in the porous medium through weathering, wet–dry cycles, or decementation (Ryan and Elimelech, 1996). Therefore, in cases in which none of these chemical or physical perturbations take place, the source of the colloids being mobilised is different from that of the colloids previously captured. Sen et al. (2004) developed an equilibrium three-phase model for colloid-facilitated transport in saturated media considering an irreversible filtration of colloids to the soil matrix.
They considered the source of the colloids being released to be different from the colloids captured by the medium by defining two immobile colloidal phases: irreversibly captured colloids G_si and immobile colloids available for release G_sf (Equation 8.3), controlled by the following mass balance equations:

\rho_b \frac{\partial G_{si}}{\partial t} = k_{att}\,\theta\, G_m \qquad (8.56)

and

\rho_b \frac{\partial G_{sf}}{\partial t} = -k_{det}\,\rho_b\, G_s \qquad (8.57)
They therefore had five contaminant phases: dissolved; sorbed to the solid matrix; sorbed to mobile colloids; sorbed to irreversibly attached colloids; and sorbed to immobile colloids available for release. Massoudieh and Ginn (2007) used the same approach for colloid-facilitated transport of multiple compounds in unsaturated media, but considered the mass exchange between the different phases to be kinetically controlled, while the colloidal exchange between the air phase and the aqueous phase was considered to be at equilibrium, leading to the following solute transport equation:

\frac{\partial C}{\partial t} + \frac{\partial (G_m \sigma_c)}{\partial t} + \frac{\rho_B}{\theta}\frac{\partial \sigma_s}{\partial t} + \frac{\rho_B}{\theta}\frac{\partial (G_{si}\sigma_{csi} + G_{sf}\sigma_{csf})}{\partial t} + \frac{\Gamma}{\theta}\frac{\partial (G_a \sigma_{ca})}{\partial t} + v\frac{\partial C}{\partial z} + v_c\frac{\partial (G_m \sigma_c)}{\partial z} = \frac{\partial}{\partial z}\left(D\frac{\partial C}{\partial z} + D_c\frac{\partial (G_m \sigma_c)}{\partial z}\right) \qquad (8.58)

where σ_ca is the concentration sorbed to the colloids entrapped at the AWI, assumed to
be equal to the concentration sorbed to the aqueous-phase colloids owing to the high rate of colloid exchange between the AWI and the pore water (i.e., σ_ca = σ_c); and σ_csi and σ_csf are the concentrations sorbed to the irreversibly captured colloids and to the available immobile colloids, respectively, as controlled by the following mass balance equations:

\frac{\partial (G_{sf}\sigma_{csf})}{\partial t} = -k_{csa}\, G_s\left(\sigma_{csf} - K_{Dcs}\, C\right) - \frac{1}{\rho_b}\, k_{det}\,\rho_b\, G_s \sigma_{csf} \qquad (8.59)

\frac{\partial (G_{si}\sigma_{csi})}{\partial t} = -k_{csa}\, G_s\left(\sigma_{csi} - K_{Dcs}\, C\right) + \frac{1}{\rho_b}\, k_{att}\,\theta\, G_m \sigma_c \qquad (8.60)
Simunek et al. (2006) incorporated their colloid-facilitated transport formulation into a colloid transport model that considers both attachment and straining. They used a two-site model for sorption to the solid matrix and a non-linear single-site model for adsorption to and desorption from the colloidal phases. The model is implemented in the HYDRUS-1D software package.
8.4 HETEROGENEITY
Physical and chemical heterogeneity has been shown to play a significant role in the transport of colloidal particles in porous media. The existence of preferential flow paths can facilitate the migration of colloidal particles in groundwater and the vadose zone (Corapcioglu and Wang, 1999). Pore-scale heterogeneity, such as variability in pore sizes, geometries and surface charges, can also influence the dynamics of colloid capture, as pointed out by several researchers (Song and Elimelech, 1994). Electrochemical heterogeneities in porous media can likewise create barriers as well as paths for colloidal transport, depending on their spatial distribution. In unsaturated media, rapid infiltration of water along distinct paths (known as "fingering"), seen particularly in relatively dry soils, can carry large amounts of colloidal particles into deep layers in a relatively short time (McCarthy and McKay, 2004). These heterogeneities are overlooked in models in which the chemical and physical properties are homogenised over an REV. Song and Elimelech (1994) incorporated the effect of surface charge heterogeneity by considering the deposition rate coefficient to be randomly distributed over the surfaces with a given frequency distribution:

k_{att} = \frac{1}{\Gamma}\int_{\Gamma} k(\xi)\,\psi_{aca}(\xi)\, d\xi \qquad (8.61)

where k(ξ) is the transfer rate associated with the surface area element ξ, and ψ_aca(ξ) is the surface availability for ξ. By considering a clean-bed initial condition and no detachment, they obtained an exponential function for ψ_aca(ξ), and were therefore able to express the integral in Equation 8.61 as a correction factor multiplied by the average deposition rate. The resulting governing equation therefore looks like the conventional CFT, with the difference being that the deposition rate
coefficient is multiplied by a correction factor. Johnson et al. (1996) categorised surface sites into favourable and unfavourable (patchwise geochemical heterogeneity), and essentially used a two-site model to incorporate the effect of charge heterogeneity. Sun et al. (2001) incorporated the patchwise geochemical heterogeneity of Johnson et al. (1996) into a two-dimensional advection–dispersion model while taking into account the detachment of colloids from the surfaces. They randomly generated the chemical heterogeneity factor λ, which represents the ratio of the surface area associated with favourable sites to the total area of the sites over the two-dimensional grid, to model geochemical heterogeneities at scales larger than the grid size. Saiers (2002) considered the effect of the chemical heterogeneity of the colloidal particles, as well as of the soil matrix, by considering a distributed aqueous phase–colloid mass exchange rate and partitioning coefficient, as well as a distributed colloid deposition rate on surface sites, implemented using a multiple-rate model (Equations 8.23 and 8.26):

\frac{\partial C}{\partial t} + \sum_j \frac{\partial \left[G_{m,j}\sum_i \sigma_{c,ij}\right]}{\partial t} + \frac{\rho_B}{\theta}\sum_i \frac{\partial \sigma_{s,i}}{\partial t} + \frac{\rho_B}{\theta}\sum_j \frac{\partial \left[G_{s,j}\sum_i \sigma_{cs,ij}\right]}{\partial t} + v\frac{\partial C}{\partial z} + v_c\frac{\partial}{\partial z}\left\{\sum_j \left[G_{m,j}\sum_i \sigma_{c,ij}\right]\right\} = \frac{\partial}{\partial z}\left(D\frac{\partial C}{\partial z} + D_c\frac{\partial}{\partial z}\left\{\sum_j \left[G_{m,j}\sum_i \sigma_{c,ij}\right]\right\}\right) \qquad (8.62)

with

\frac{\partial \sigma_{s,i}}{\partial t} = -k_{sa,i}\left(\sigma_{s,i} - K_{Ds,i}\, C\right) \qquad (8.63)

\frac{\partial (G_{m,j}\sigma_{c,ij})}{\partial t} = -k_{ca,ij}\, G_{m,j}\left(\sigma_{c,ij} - K_{Dc,i}\, C\right) - \frac{1}{\theta}\left(k_{att,j}\,\theta\, G_{m,j}\sigma_{c,ij} - k_{det,j}\,\rho_b\, G_{s,j}\sigma_{cs,ij}\right) \qquad (8.64)

\frac{\partial (G_{s,j}\sigma_{cs,ij})}{\partial t} = -k_{csa,ij}\, G_{s,j}\left(\sigma_{cs,ij} - K_{Dcs,i}\, C\right) + \frac{1}{\rho_b}\left(k_{att,j}\,\theta\, G_{m,j}\sigma_{c,ij} - k_{det,j}\,\rho_b\, G_{s,j}\sigma_{cs,ij}\right) \qquad (8.65)

where the subscript i refers to the sorption site and the subscript j refers to the colloid type category. A gamma distribution was assumed to govern the distribution of contaminant mass transfer rates to sorption sites, and the distribution coefficient was assumed to depend on the transfer rate through a power-law equation suggested by Pedit and Miller (1994). No method has been suggested so far to account for the effect of physical heterogeneity at scales smaller than the computational grid size. The two-domain model of Corapcioglu and Wang (1999) is designed to capture the effects of macro- and micropores or preferential flow paths using a dual-porosity model; however, it is not based on any relationship between the small-scale properties of the medium and the parameters used in the homogenised Darcy-scale representation. Some attempts have been made to incorporate the effect of heterogeneities at scales larger than the
computational grid by using Monte Carlo simulation and randomly generating the physical properties affecting the transport of colloids and contaminants (Bekhit and Hassan, 2005a; Sun et al., 2001). Sun et al. (2001) and Bekhit and Hassan (2005a) represented the heterogeneity of the porous medium by incorporating spatially variable hydraulic conductivities, colloid deposition coefficients and contaminant distribution coefficients, while considering the colloidal particles themselves to be uniform. With this approach, the effect of heterogeneity at scales larger than the grid scale can be captured. When a wide variety of surface characteristics as well as a range of colloid properties are present in the system, development of conventional ADEs based on the REV concept becomes cumbersome, non-unique and computationally burdensome. It can also become impossible to develop upscaling techniques that link the small-scale physical and chemical properties and their variations to homogenised representations at the grid scale using conventional stochastic methods. In such cases, particle tracking and microfluidic approaches are deemed useful, because they pose no limitation on the number of distinct categories of sites and colloids considered in the model. The drawbacks of such models include the extensive computational resources they require and, sometimes, difficulties in estimating the necessary parameters. Marseguerra et al. (2001a, 2001b) developed a discrete-particle colloid-facilitated transport model based on the Kolmogorov–Dmitriev theory of branching processes (Marseguerra and Zio, 1997). This method is based on tracking the location of groups of particles representing colloidal particles and radionuclide molecules in a grid structure using a Monte Carlo scheme.
Probabilities are assigned to each of the processes, such as attachment to or detachment from the solid matrix, sorption of contaminants to colloidal particles, or advective–diffusive transport of particles from one grid cell to another. The difference between this method and conventional particle-tracking approaches is that the exact positions of particles and molecules are not needed; only the grid cell in which they are located at each time step is recorded.
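In the spirit of this grid-based Monte Carlo description, the toy sketch below tracks only the cell index of each particle, with a single hypothetical attachment probability per cell transit; it illustrates the bookkeeping, not the Kolmogorov–Dmitriev formulation itself, and omits detachment, diffusion and contaminant sorption.

```python
import random

# Toy grid-based Monte Carlo colloid tracker: at each cell transit a mobile
# particle attaches with probability p_att, otherwise it advects to the next
# cell; only the cell index is recorded, never an exact position. All
# probabilities and sizes are hypothetical.

def track(n_particles=10_000, n_cells=50, p_att=0.05, seed=42):
    rng = random.Random(seed)
    exit_count = 0
    attached_per_cell = [0] * n_cells
    for _ in range(n_particles):
        cell = 0
        while cell < n_cells:
            if rng.random() < p_att:
                attached_per_cell[cell] += 1
                break
            cell += 1
        else:                      # particle traversed the whole column
            exit_count += 1
    return exit_count, attached_per_cell

exits, attached = track()
```

The expected breakthrough fraction is (1 − p_att)^n_cells, roughly 0.077 for these values, and the attached-mass profile decays approximately exponentially with depth, mirroring clean-bed filtration.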
8.5 UNKNOWNS AND FUTURE DIRECTIONS
Although an enormous amount of work has been conducted to quantify colloid-enhanced contaminant transport in porous media, our knowledge is still not sufficient to predict this phenomenon in natural settings, particularly at the scales of interest in risk assessment and remediation applications. In this section, various potential directions that may eventually lead to the construction of more useful colloid-facilitated contaminant transport models are presented.
8.5.1 Upscaling to field scale
The final goal of contaminant transport modelling is generally to perform risk assessment or make predictions at the scale of contaminated subsurface ecosystems. So far, most of the studies have been conducted in laboratories, at scales ranging from a few centimetres to a few metres. Because of the large timescales of the problem, it is
not possible to conduct field studies to quantify the long-term behaviour of contaminants at the field scale. No systematic techniques have been suggested to upscale the outcomes of these studies to larger scales considering the physical and chemical heterogeneities present in real-world applications. Current computational power does not make it possible to incorporate heterogeneities at the scales captured in laboratory experiments directly into field-scale models. The stochastic methods using Monte Carlo simulation techniques to address the large-scale transport of contaminants in the presence of colloids have typically used grid sizes much larger than the length scales at which the laboratory experiments have been performed (e.g., Bekhit and Hassan, 2005a). Models fitted to laboratory experiments are not necessarily appropriate for application to larger-scale problems without significant alterations of the model parameters and, in some cases, the model structure, to capture the effect of heterogeneities. Although inverse modelling techniques have been used to estimate the parameters of contaminant transport models at the field scale, there are a few problems with this approach in the case of contaminant transport in the presence of colloids. One such problem is the fact that in many cases it is impractical to characterise the transport of colloids and contaminants at relevant timescales; also, this approach does not provide reliable information on the actual physical processes at small scales, and therefore does not provide any relationship between the small-scale characteristics of the medium and the overall transport. The ADEs that are applicable to small-scale problems are not necessarily applicable to large-scale heterogeneous media (McCarthy and McKay, 2004). In addition, as the models will be calibrated using the data obtained for a limited period, there is no guarantee that this approach will perform well in making long-term predictions. 
For all the reasons mentioned above, one vital step in making colloid and colloid-enhanced transport models applicable to real-world settings is to build systematic approaches to upscaling the governing equations developed for laboratory-scale situations to larger scales, and eventually to scales useful for addressing field-scale contaminant transport problems.
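A toy illustration of why upscaling is non-trivial: the "effective" conductivity assigned to a coarse grid block depends on how the fine-scale values are averaged. The sketch below assumes a log-normal hydraulic conductivity distribution with arbitrary illustrative parameters:

```python
import math
import random

# Hypothetical fine-scale log-normal hydraulic conductivity values (m/s);
# the geometric mean and log-variance below are illustrative assumptions.
rng = random.Random(42)
LN_K_MEAN, LN_K_SD = math.log(1e-5), 1.0
fine_k = [math.exp(rng.gauss(LN_K_MEAN, LN_K_SD)) for _ in range(10_000)]

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def harmonic_mean(xs):
    return len(xs) / sum(1.0 / x for x in xs)

def geometric_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Flow parallel to layering is controlled by the arithmetic mean, flow
# perpendicular to layering by the harmonic mean; the two bracket the true
# block-effective conductivity, so the choice of averaging rule matters.
ka = arithmetic_mean(fine_k)
kh = harmonic_mean(fine_k)
kg = geometric_mean(fine_k)
assert kh <= kg <= ka  # classical bounds: harmonic <= geometric <= arithmetic
```

For a log-standard deviation of 1 the three averages differ by factors of e, which is exactly the kind of ambiguity a systematic upscaling theory has to resolve.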
8.5.2 Quantifying geochemical effects
Many studies have been conducted to identify geochemical effects (e.g., aqueous chemistry, organic matter including ligands, multicomponent speciation and surface complexation). As a result of these studies, we now have an insight into the various ways in which changes in the chemistry of the system can affect colloid mobilisation and attachment. For instance, it appears that changes in, as opposed to the absolute magnitude of, basic geochemical properties (especially ionic strength, concentration of organic matter, and pH) can lead to episodic colloid release or erosion events in natural conditions, on a par with the effects of hydrodynamic transients. However, our ability to couple multicomponent geochemical conditions and transients with colloid and colloid-facilitated transport to build predictive models for the long-term simulation of contaminant transport remains limited, in part because of the computational challenges in coupling models of multicomponent geochemical conditions with colloid transport and colloid–surface interactions, and in part because of the constitutive theory challenges in the same constructions: that is, we do not know exactly how different geochemical environments will result in colloid surface properties favourable or not to transport in natural (not contrived) conditions. Geochemical fluctuations as a result of episodic infiltrations, or the movement of contaminants themselves, can affect the pH, ionic strength and chemical composition of the pore water, each of these influencing the deposition and mobilisation of colloids and the decementation of the solid matrix leading to colloid release. Biological factors such as biofilm growth have also been shown to affect colloid transport by changing the chemistry of the system or the pore water hydrodynamics. The development of general models to incorporate geochemical variations in colloid-facilitated transport seems a difficult goal to reach, at least in the short term; targeting the construction of specific models to capture the integrated effects of geochemical variations in certain studies appears to offer a more pragmatic way to bridge the knowledge gaps.
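As a minimal toy model of the ionic-strength dependence discussed above — not a published formulation — colloid release can be treated as a first-order process whose rate switches on when ionic strength drops below an assumed critical salt concentration, mimicking episodic mobilisation during fresh-water infiltration:

```python
# Toy illustration (assumed functional form and parameter values, not a
# model from the cited literature): first-order colloid release rate that
# activates below a critical salt concentration (CSC).
CSC = 0.01       # critical salt concentration (mol/L) -- assumed value
K_REL_MAX = 0.5  # maximum first-order release rate (1/h) -- assumed value

def release_rate(ionic_strength):
    """First-order colloid release rate as a function of ionic strength (mol/L)."""
    if ionic_strength >= CSC:
        return 0.0
    # linear ramp from 0 at the CSC up to K_REL_MAX in pure water
    return K_REL_MAX * (1.0 - ionic_strength / CSC)

# A rain-driven dilution transient triggers a pulse of colloid release:
for ionic_strength in (0.1, 0.02, 0.005, 0.001):
    print(ionic_strength, release_rate(ionic_strength))
```

Coupling even this simple switch to a multicomponent speciation code is where the computational and constitutive-theory challenges noted above arise.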
8.5.3 Heterogeneous colloid characteristics
In any aquifer or vadose zone, colloids are present with various sources and a spectrum of properties, in terms of both chemical composition (e.g., surface charge, sorption capability) and transport behaviour (e.g., size, weight). Lumping all the different categories of colloids into one homogeneous group is an oversimplification that can lead to an inability to explain observed migration of contaminants in the presence of colloids. Other than the model by Saiers (2002), which suggested a way to incorporate polydisperse colloids, no other studies have been found that address this challenge. Two reasons can be given for the rarity of such efforts. The first is that the determination of the range of colloids and the estimation of the parameters that control their transport in porous media, such as the deposition and release rates, are challenging. The second reason is the large computational burden of such an effort, particularly in view of the wide range of contaminants bound to colloids. One approach for taking the variability in colloid properties (such as parameters controlling mobility and sorption) into account is to express these properties as joint distributions, but quantification of such joint frequency distributions is a difficult task. Particle-tracking approaches have been shown to be more effective in handling heterogeneous colloid properties (Sun et al., 2001), but they typically require large amounts of computer memory in order to be applied to real cases.
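One way to express the joint-distribution idea is to draw each colloid's properties from correlated random variates. The sketch below assumes a log-normal size and a normally distributed zeta potential with a negative correlation between them; every parameter value is hypothetical:

```python
import math
import random

# Sample a heterogeneous colloid population from an assumed joint
# distribution of size and surface charge. The log-normal size, the normal
# zeta potential and the negative size-charge correlation are illustrative
# assumptions, not measured values.
rng = random.Random(1)
RHO = -0.6  # assumed correlation between log-size and zeta potential

def sample_colloid():
    z1 = rng.gauss(0.0, 1.0)
    # build a second standard normal correlated with the first
    z2 = RHO * z1 + math.sqrt(1.0 - RHO ** 2) * rng.gauss(0.0, 1.0)
    diameter_nm = math.exp(5.0 + 0.5 * z1)  # log-normal size, ~150 nm median
    zeta_mV = -30.0 + 10.0 * z2             # roughly normal surface potential
    return diameter_nm, zeta_mV

population = [sample_colloid() for _ in range(1000)]
```

Such a synthetic population could then feed a particle-tracking scheme directly, side-stepping the need to discretise colloids into a small number of homogeneous classes.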
8.5.4 Pore-scale heterogeneity
Classical colloid transport models are constructed using idealisation of the pore-scale geometry, by assuming that the medium consists either of uniformly placed spherical grains (Happel sphere-in-cell model) or of uniform, mono-sized (e.g., cylindrical or spherical) pores. A natural porous medium is neither of these. It consists of non-uniformly distributed, non-spherical and polydisperse grains, or a range of variably sized, tortuous pores. The question is: how does idealisation of the natural porous medium affect the capability of models that result from it? Several ‘‘microfluidic’’ techniques are now available to model hydrodynamic and colloid transport in
complicated and irregular geometries, such as the lattice Boltzmann method (Basagaoglu et al., 2008) and the smoothed particle hydrodynamic approach (Tartakovsky et al., 2009; Yamamoto et al., 2007). Micro-tomography techniques (Auzerais et al., 1996; Spanne et al., 1994) and Monte Carlo spherical packing simulation methods (Yang et al., 2000) have also been used to construct more realistic representations of porous media. These techniques have provided tools to test various hypotheses regarding the mechanism of colloid release and entrapment, and can help in gaining insight into the important processes affecting the transport of colloids. However, mainly because of limitations in the available computational power, pore-scale models are applicable only to very small domains, with sizes of the order of centimetres, and therefore it is not yet possible to evaluate the effects of heterogeneities at larger scales by direct application of these models.
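The Monte Carlo packing idea can be illustrated with a simplified random-sequential-addition sketch (this is not the algorithm of Yang et al., 2000; the grain radius and attempt count are arbitrary choices) that places non-overlapping spheres in a unit cube and reports the resulting porosity:

```python
import math
import random

# Monte Carlo random sequential addition of non-overlapping spheres in a
# unit cube -- a simplified sketch of building a synthetic grain pack.
rng = random.Random(7)
RADIUS = 0.08
MAX_ATTEMPTS = 20_000

def overlaps(c, centres):
    """True if a sphere centred at c would intersect an existing grain."""
    return any((c[0] - x) ** 2 + (c[1] - y) ** 2 + (c[2] - z) ** 2
               < (2 * RADIUS) ** 2 for x, y, z in centres)

centres = []
for _ in range(MAX_ATTEMPTS):
    # propose a centre keeping the grain fully inside the cube
    c = tuple(rng.uniform(RADIUS, 1.0 - RADIUS) for _ in range(3))
    if not overlaps(c, centres):
        centres.append(c)

grain_volume = len(centres) * (4.0 / 3.0) * math.pi * RADIUS ** 3
porosity = 1.0 - grain_volume  # void fraction of the unit cube
```

The resulting geometry could then serve as the solid phase for a pore-scale flow solver; the cost of the overlap checks already hints at why such models are limited to centimetre-scale domains.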
REFERENCES

Accornero, M., Marini, L., Ottonello, G. and Zuccolini, M.V. (2005). The fate of major constituents and chromium and other trace elements when acid waters from the derelict Libiola mine (Italy) are mixed with stream waters. Applied Geochemistry. 20: 1368–1390. Adamczyk, Z., Dabros, T., Czarnecki, J. and van de Ven, T.G.M. (1983). Particle transfer to solid surfaces. Advances in Colloid and Interface Science. 19: 183–252. Arulanandan, K., Longanathan, P. and Krone, R.B. (1975). Pore and eroding fluid influences on surface erosion on soil. Journal of Geotechnical Engineering, ASCE. 101: 51–57. Auzerais, F.M., Dunsmuir, J., Ferreol, B.B., Martys, N., Olson, J., Ramakrishnan, T.S., Rothman, D.H. and Schwartz, L.M. (1996). Transport in sandstone: a study based on three-dimensional microtomography. Geophysical Research Letters. 23: 705–708. Basagaoglu, H., Meakin, P., Succi, S., Redden, G.R. and Ginn, T.R. (2008). Two-dimensional lattice Boltzmann simulation of colloid migration in rough-walled narrow flow channels. Physical Review E. 77: 031405. Bedrikovetsky, P., Marchesin, D., Shecaira, F., Souza, A.L., Milanez, P.V. and Rezende, E. (2001). Characterisation of deep bed filtration system from laboratory pressure drop measurements. Journal of Petroleum Science and Engineering. 32: 167–177. Bekhit, H.M. and Hassan, A.E. (2005a). Stochastic modeling of colloid-contaminant transport in physically and geochemically heterogeneous porous media. Water Resources Research. 41, W02010.1–W02010.18. Bekhit, H.M. and Hassan, A.E. (2005b). Two-dimensional modeling of contaminant transport in porous media in the presence of colloids. Advances in Water Resources. 28: 1320–1335. Bostrom, M., Williams, D.R.M. and Ninham, B.W. (2001). Specific ion effects: why DLVO theory fails for biology and colloid systems. Physical Review Letters. 87, 168103/168101–168104. Bradford, S.A. and Bettahar, M. (2006). Concentration dependent transport of colloids in saturated porous media.
Journal of Contaminant Hydrology. 82: 99–117. Bradford, S.A. and Leij, F.J. (1997). Estimating interfacial areas for multi-fluid soil systems. Journal of Contaminant Hydrology. 27: 83–105. Bradford, S.A., Simunek, J., Bettahar, M., van Genuchten, M.T. and Yates, S.R. (2003). Modeling colloid attachment, straining, and exclusion in saturated porous media. Environmental Science & Technology. 37: 2242–2250. Bradford, S.A., Torkzaban, S., Leij, F., Šimunek, J. and van Genuchten, M.T. (2009). Modeling the coupled effects of pore space geometry and velocity on colloid transport and retention. Water Resources Research. 45, W02414, doi:10.1029/2008WR007096.
Buddemeier, R.W. and Hunt, J.R. (1988). Transport of colloidal contaminants in groundwater: radionuclide migration at the Nevada test site. Applied Geochemistry. 3: 535–548. Cary, J.W. (1994). Estimating the surface-area of fluid-phase interfaces in porous-media. Journal of Contaminant Hydrology. 15: 243–248. Choi, H.C. and Corapcioglu, M.Y. (1997). Transport of a non-volatile contaminant in unsaturated porous media in the presence of colloids. Journal of Contaminant Hydrology. 25: 299–324. Chu, Y., Jin, Y., Flury, M. and Yates, M.V. (2001). Mechanisms of virus removal during transport in unsaturated porous media. Water Resources Research. 37: 253–263. Corapcioglu, M.Y. and Choi, H. (1996). Modeling colloid transport in unsaturated porous media and validation with laboratory column data. Water Resources Research. 32: 3437–3449. Corapcioglu, M.Y. and Jiang, S.Y. (1993). Colloid-facilitated groundwater contaminant transport. Water Resources Research. 29: 2215–2226. Corapcioglu, M.Y. and Wang, S. (1999). Dual-porosity groundwater contaminant transport in the presence of colloids. Water Resources Research. 35: 3261–3273. Dabros, T. and van de Ven, T.G.M. (1982). Kinetics of coating by colloidal particles. Journal of Colloid and Interface Science. 89: 232–244. DeNovio, N.M., Saiers, J.E. and Ryan, J.N. (2004). Colloid movement in unsaturated porous media: recent advances and future directions. Vadose Zone Journal. 3: 338–351. Dusek, J., Vogel, T., Lichner, L., Cipakova, A. and Dohnal, M. (2006). Simulated cadmium transport in macroporous soil during heavy rainstorm using dual-permeability approach. Biologia. 61: S251-S254. Enfield, C.G. and Bengtsson, G. (1988). Macromolecular transport of hydrophobic contaminants in aqueous environments. Ground Water. 26: 64–70. Gargiulo, G., Bradford, S.A., Simunek, J., Ustohal, P., Vereecken, H. and Klumpp, E. (2008). 
Bacteria transport and deposition under unsaturated flow conditions: the role of water content and bacteria surface hydrophobicity. Vadose Zone Journal. 7: 406–419. Ginn, T.R. (2000). On the distribution of multicomponent mixtures over generalized exposure time in subsurface flow and reactive transport: theory and formulations for residence-time-dependent sorption/desorption with memory. Water Resources Research. 36: 2885–2893. Govindaraju, R.S., Reddi, L.N. and Kasavaraju, S.K. (1995). A physically-based model for mobilization of kaolinite particles under hydraulic gradients. Journal of Hydrology. 172: 331–350. Grasso, M., Catara, F. and Sambataro, M. (2002). Boson-mapping-based extension of the random-phase approximation in a three-level Lipkin model. Physical Review C. 66: 064303. Harter, T., Atwill, E.R., Hou, L., Karle, B.M. and Tate, K.W. (2008). Developing risk models of Cryptosporidium transport in soils from vegetated, tilted soilbox experiments. Journal of Environmental Quality. 37: 245–258. Hermansson, M. (1999). The DLVO theory in microbial adhesion. Colloids and Surfaces B: Biointerfaces. 14: 105–119. Hochella, M.F., Moore, J.N., Putnis, C.V., Putnis, A., Kasama, T. and Eberl, D.D. (2005). Direct observation of heavy metal-mineral association from the Clark Fork River superfund complex: implications for metal transport and bioavailability. Geochimica et Cosmochimica Acta. 69: 1651–1663. Jacobsen, O.H., Moldrup, P., Larsen, C., Konnerup, L. and Petersen, L.W. (1997). Particle transport in macropores of undisturbed soil columns. Journal of Hydrology. 196: 185–203. Jarvis, N.J., Villholth, K.G. and Ulen, B. (1999). Modelling particle mobilization and leaching in macroporous soil. European Journal of Soil Science. 50: 621–632. Jewett, D.G., Logan, B.E., Arnold, R.G. and Bales, R.C. (1999). Transport of Pseudomonas fluorescens strain P17 through quartz sand columns as a function of water content. Journal of Contaminant Hydrology. 36: 73–89. Jiang, S.Y.
and Corapcioglu, M.Y. (1993). A hybrid equilibrium-model of solute transport in porousmedia in the presence of colloids. Colloids and Surfaces A: Physicochemical and Engineering Aspects. 73: 275–286. Johnson, P.R. and Elimelech, M. (1995). Dynamics of colloid deposition in porous media: blocking based on random sequential adsorption. Langmuir. 11: 801–812.
Johnson, P.R., Sun, N. and Elimelech, M. (1996). Colloid transport in geochemically heterogeneous porous media: modeling and measurements. Environmental Science & Technology. 30: 3284– 3293. Johnson, W.P., Blue, K.A., Logan, B.E. and Arnold, R.G. (1995). Modeling bacteria detachment during transport through porous-media as a residence-time dependent process. Water Resources Research. 31: 2649–2658. Kallay, N., Tomic, M., Biskup, B., Kunjasic, I. and Matijevic, E. (1987). Particle adhesion and removal in model systems. XI. Kinetics of attachment and detachment for hematite–glass systems. Colloids and Surfaces. 28: 185–197. Keller, A.A. and Sirivithayapakorn, S. (2004). Transport of colloids in unsaturated porous media: explaining large-scale behavior based on pore-scale mechanisms. Water Resources Research. 40: W12403.1–W12403.8. Khilar, K.C., Fogler, H.S. and Gray, D.H. (1985). Model for piping-plugging in earthen structures. Journal of Geotechnical Engineering, ASCE. 111: 833–846. Kim, S.B. and Corapcioglu, M.Y. (2002). Contaminant transport in dual-porosity media with dissolved organic matter and bacteria present as mobile colloids. Journal of Contaminant Hydrology. 59: 267–289. Kretzschmar, R. and Schafer, T. (2005). Metal retention and transport on colloidal particles in the environment. Elements. 1: 205–210. Kretzschmar, R., Robarge, W.P. and Amoozegar, A. (1995). Influence of natural organic matter on colloid transport through saprolite. Water Resources Research. 31: 435–445. Lazouskaya, V. and Jin, Y. (2008). Colloid retention at air/water interface in a capillary channel. Colloids and Surfaces A: Physicochemical and Engineering Aspects. 325: 141–151. Lenhart, J.J. and Saiers, J.E. (2003). Colloid mobilization in water-saturated porous media under transient chemical conditions. Environmental Science & Technology, 37: 2780–2787. Magee, B.R., Lion, L.W. and Lemley, A.T. (1991). 
Transport of dissolved organic macromolecules and their effect on the transport of phenanthrene in porous media. Environmental Science & Technology. 25: 323–331. Majdalani, S., Michel, E., Di Pietro, L., Angulo-Jaramillo, R. and Rousseau, M. (2007). Mobilization and preferential transport of soil particles during infiltration: a core-scale modeling approach. Water Resources Research. 43(5): W05401.1–W05401.14. Majdalani, S., Michel, E., Di-Pietro, L. and Angulo-Jaramillo, R. (2008). Effects of wetting and drying cycles on in situ soil particle mobilization. European Journal of Soil Science. 59(2), 147–155. Marseguerra, M. and Zio, E. (1997). Modelling the transport of contaminants in groundwater as a branching stochastic process. Annals of Nuclear Energy. 24: 625–644. Marseguerra, M., Patelli, E. and Zio, E. (2001a). Groundwater contaminant transport in presence of colloids. I: A stochastic nonlinear model and parameter identification. Annals of Nuclear Energy. 28: 777–803. Marseguerra, M., Patelli, E. and Zio, E. (2001b). Groundwater contaminant transport in presence of colloids. II: Sensitivity and uncertainty analysis on literature case studies. Annals of Nuclear Energy. 28: 1799–1807. Marty, R.C., Bennett, D. and Thullen, P. (1997). Mechanism of plutonium transport in a shallow aquifer in Mortandad Canyon, Los Alamos National Laboratory, New Mexico. Environmental Science & Technology. 31: 2020–2027. Massoudieh, A. and Ginn, T.R. (2007). Modeling colloid-facilitated transport of multi-species contaminants in unsaturated porous media. Journal of Contaminant Hydrology. 92: 162–183. McCarthy, J.F. and Degueldre, C. (eds) (1993). Environmental Particles (Environmental Analytical and Physical Chemistry). Lewis Publishers, Boca Raton, FL. McCarthy, J.F. and McKay, L.D. (2004). Colloid transport in the subsurface: past, present, and future challenges. Vadose Zone Journal. 3: 326–337. McCarthy, J.F. and Zachara, J.M. (1989). 
Subsurface transport of contaminants: mobile colloids in the subsurface environment may alter the transport of contaminants. Environmental Science & Technology. 23: 496–502.
McCarthy, J.F., Czerwinski, K.R., Sanford, W.E., Jardine, P.M. and Marsh, J.D. (1998). Mobilization of transuranic radionuclides from disposal trenches by natural organic matter. Journal of Contaminant Hydrology. 30: 49–77. Meinders, J.M., Noordmans, J. and Busscher, H.J. (1992). Simultaneous monitoring of the adsorption and desorption of colloidal particles during deposition in a parallel plate flow chamber. Journal of Colloid and Interface Science. 152: 265–280. Mills, W.B., Liu, S. and Fong, F.K. (1991). Literature review and model (COMET) for colloid metals transport in porous media. Ground Water. 29: 199–208. Nelson, K.E. and Ginn, T.R. (2005). Colloid filtration theory and the Happel sphere-in-cell model revisited with direct numerical simulation of colloids. Langmuir. 21: 2173–2184. Novikov, A.P., Kalmykov, S.N., Utsunomiya, S., Ewing, R.C., Horreard, F., Merkulov, A., Clark, S.B., Tkachev, V.V. and Myasoedov, B.F. (2006). Colloid transport of plutonium in the far-field of the Mayak Production Association, Russia. Science. 314: 638–641. Nyhan, J.W., Drennon, B.J., Abeele, W.V., Wheeler, M.L., Purtymun, W.D., Trujillo, G., Herrera, W.J. and Booth, J.W. (1985). Distribution of plutonium and americium beneath a 33-yr-old liquid waste-disposal site. Journal of Environmental Quality. 14: 501–509. Pang, L.P. and Simunek, J. (2006). Evaluation of bacteria-facilitated cadmium transport in gravel columns using the HYDRUS colloid-facilitated solute transport model. Water Resources Research. 42, W12S10, doi:10.1029/2006WR004896. Pedit, J.A. and Miller, C.T. (1994). Heterogeneous sorption processes in subsurface systems. 1: Model formulations and applications. Environmental Science & Technology. 28: 2094–2104. Penrose, W.R., Polzer, W.L., Essington, E.H., Nelson, D.M. and Orlandini, K.A. (1990). Mobility of plutonium and americium through a shallow aquifer in a semiarid region. Environmental Science & Technology. 24: 228–234. Powelson, D.K. and Gerba, C.P. (1994). 
Virus removal from sewage effluents during saturated and unsaturated flow through soil columns. Water Research. 28: 2175–2181. Rajagopalan, R. and Tien, C. (1976). Trajectory analysis of deep-bed filtration with the sphere-in-cell porous media model. AIChE Journal. 22: 523–533. Redman, J.A., Estes, M.K. and Grant, S.B. (2001). Resolving macroscale and microscale heterogeneity in virus filtration. Colloids and Surfaces A: Physicochemical and Engineering Aspects. 191: 57–70. Redman, J.A., Walker, S.L. and Elimelech, M. (2004). Bacterial adhesion and transport in porous media: role of the secondary energy minimum. Environmental Science & Technology. 38: 1777–1785. Reimus, P., Murrell, M., Abdel-Fattah, A., Garcia, E., Norman, D., Goldstein, S., Nunn, A., Gritzo, R. and Martinez, B. (2005). Colloid Characteristics and Radionuclide Associations with Colloids in Near-Field Waters at the Nevada Test Site, FY 2005 Progress Report, Los Alamos National Laboratory, Los Alamos, NM. Robinson, B.A., Li, C.H. and Ho, C.K. (2003). Performance assessment model development and analysis of radionuclide transport in the unsaturated zone, Yucca Mountain, Nevada. Journal of Contaminant Hydrology. 62–63: 249–268. Roy, S.B. and Dzombak, D.A. (1996). Na+–Ca2+ exchange effects in the detachment of latex colloids deposited in glass bead porous media. Colloids and Surfaces A: Physicochemical and Engineering Aspects. 119: 133–139. Roy, S.B. and Dzombak, D.A. (1997). Chemical factors influencing colloid-facilitated transport of contaminants in porous media. Environmental Science & Technology. 31: 656–664. Roy, S.B. and Dzombak, D.A. (1998). Sorption nonequilibrium effects on colloid-enhanced transport of hydrophobic organic compounds in porous media. Journal of Contaminant Hydrology. 30: 179–200. Ryan, J.N. and Elimelech, M. (1996). Colloid mobilization and transport in groundwater. Colloids and Surfaces A: Physicochemical and Engineering Aspects. 107: 1–56. Ryan, J.N. and Gschwend, P.M.
(1994). Effects of ionic strength and flow rate on colloid release: relating kinetics to intersurface potential energy. Journal of Colloid and Interface Science. 164: 21–34.
Saiers, J.E. (2002). Laboratory observations and mathematical modeling of colloid-facilitated contaminant transport in chemically heterogeneous systems. Water Resources Research. 38: 3.1–3.4. Saiers, J.E. and Hornberger, G.M. (1996). The role of colloidal kaolinite in the transport of cesium through laboratory sand columns. Water Resources Research. 32: 33–41. Saiers, J.E. and Lenhart, J.J. (2003a). Colloid mobilization and transport within unsaturated porous media under transient flow conditions. Water Resources Research. 39: 33–42. Saiers, J.E. and Lenhart, J.J. (2003b). Ionic-strength effects on colloid transport and interfacial reactions in partially saturated porous media. Water Resources Research. 39(9): 10.1–10.11. Schaaf, P. and Talbot, J. (1989). Surface exclusion effects in adsorption processes. Journal of Chemical Physics. 91: 4401–4409. Schelde, K., Moldrup, P., Jacobsen, O.H., de Jonge, H., de Jonge, L.W. and Komatsu, T. (2002). Diffusion-limited mobilization and transport of natural colloids in macroporous soil. Vadose Zone Journal. 1: 125–136. Sen, T.K., Nalwaya, N. and Khilar, K.C. (2002). Colloid-associated contaminant transport in porous media: 2. Mathematical modeling. AIChE Journal. 48: 2375–2385. Sen, T.K., Shanbhag, S. and Khilar, K.C. (2004). Subsurface colloids in groundwater contamination: a mathematical model. Colloids and Surfaces A: Physicochemical and Engineering Aspects. 232: 29–38. Shang, J.Y., Flury, M., Chen, G. and Zhuang, J. (2008). Impact of flow rate, water content, and capillary forces on in situ colloid mobilization during infiltration in unsaturated sediments. Water Resources Research. 44, W06411, doi:10.1029/2007WR006516. Shapiro, A.A., Bedrikovetsky, P.G., Santos, A. and Medvedev, O.O. (2007). A stochastic model for filtration of particulate suspensions with incomplete pore plugging. Transport in Porous Media. 67: 135–164. Short, S.A., Lowson, R.T. and Ellis, J. (1988). 
U-234/U-238 and Th-230/U-234 activity ratios in the colloidal phases of aquifers in lateritic weathered zones. Geochimica et Cosmochimica Acta. 52: 2555–2563. Sim, Y. and Chrysikopoulos, C.V. (2000). Virus transport in unsaturated porous media. Water Resources Research. 36: 173–179. Simunek, J., He, C.M., Pang, L.P. and Bradford, S.A. (2006). Colloid-facilitated solute transport in variably saturated porous media: numerical model and experimental verification. Vadose Zone Journal. 5: 1035–1047. Sirivithayapakorn, S. and Keller, A. (2003). Transport of colloids in unsaturated porous media: a pore-scale observation of processes during the dissolution of air/water interface. Water Resources Research. 39: 6.1–6.10. Smets, B.F., Grasso, D., Engwall, M.A. and Machinist, B.J. (1999). Surface physicochemical properties of Pseudomonas fluorescens and impact on adhesion and transport through porous media. Colloids and Surfaces B: Biointerfaces. 14: 121–139. Song, L. and Elimelech, M. (1993). Dynamics of colloid deposition in porous media: modeling the role of retained particles. Colloids and Surfaces A: Physicochemical and Engineering Aspects. 73: 49–63. Song, L. and Elimelech, M. (1994). Transient deposition of colloidal particles in heterogeneous porous media. Journal of Colloid and Interface Science. 167: 301–313. Spanne, P., Thovert, J.F., Jacquin, C.J., Lindquist, W.B., Jones, K.W. and Adler, P.M. (1994). Synchrotron computed microtomography of porous media: topology and transports. Physical Review Letters. 73: 2001–2004. Strevett, K.A. and Chen, G. (2003). Microbial surface thermodynamics and applications. Research in Microbiology. 154: 329–335. Sun, N., Elimelech, M., Sun, N.Z. and Ryan, J.N. (2001). A novel two-dimensional model for colloid transport in physically and geochemically heterogeneous porous media. Journal of Contaminant Hydrology. 49: 173–199. Suzuki, Y., Kelly, S.D., Kemner, K.M. and Banfield, J.F. (2002).
Radionuclide contamination: nanometre-size products of uranium bioreduction. Nature. 419: 134.
Talbot, J. (1996). Time dependent desorption: A memory function approach. Adsorption: Journal of the International Adsorption Society. 2: 89–94. Tartakovsky, A.M., Meakin, P. and Ward, A.L. (2009). Smoothed particle hydrodynamics model of non-aqueous phase liquid flow and dissolution. Transport in Porous Media. 76: 11–34. Tong, M.P., Ma, H.L. and Johnson, W.P. (2008). Funneling of flow into grain-to-grain contacts drives colloid-colloid aggregation in the presence of an energy barrier. Environmental Science & Technology. 42: 2826–2832. Torkzaban, S., Hassanizadeh, S.M., Schijven, J.F. and van den Berg, H. (2006). Role of air/water interfaces on retention of viruses under unsaturated conditions. Water Resources Research. 42, W12S14, doi:10.1029/2006WR004904. Tufenkji, N. and Elimelech, M. (2004). Correlation equation for predicting single-collector efficiency in physicochemical filtration in saturated porous media. Environmental Science & Technology. 38: 529–536. Tufenkji, N., Redman, J.A. and Elimelech, M. (2003). Interpreting deposition patterns of microbial particles in laboratory-scale column experiments. Environmental Science & Technology. 37: 616–623. Turner, N.B., Ryan, J.N. and Saiers, J.E. (2006). Effect of desorption kinetics on colloid-facilitated transport of contaminants: cesium, strontium, and illite colloids. Water Resources Research. 42, W12S09.1–W12S09.17. van de Weerd, H. and Leijnse, A. (1997). Assessment of the effect of kinetics on colloid facilitated radionuclide transport in porous media. Journal of Contaminant Hydrology. 26: 245–256. van de Weerd, H., Leijnse, A. and Van Riemsdijk, W.H. (1998). Transport of reactive colloids and contaminants in groundwater: effect of nonlinear kinetic interactions. Journal of Contaminant Hydrology. 32: 313–331. Wan, J.M. and Tokunaga, T.K. (1997). Film straining of colloids in unsaturated porous media: conceptual model and experimental testing. Environmental Science & Technology. 31: 2413– 2420. Wan, J.M. 
and Wilson, J.L. (1994). Colloid transport in unsaturated porous-media. Water Resources Research. 30: 857–864. Woessner, W.W., Ball, P.N., DeBorde, D.C. and Troy, T.L. (2001). Viral transport in a sand and gravel aquifer under field pumping conditions. Ground Water. 39: 886–894. Xu, S., Gao, B. and Saiers, J.E. (2006). Straining of colloidal particles in saturated porous media. Water Resources Research. 42, W12S16, doi:10.1029/2006WR004948. Yamamoto, R., Kim, K. and Nakayama, Y. (2007). Strict simulations of non-equilibrium dynamics of colloids. Colloids and Surfaces A: Physicochemical and Engineering Aspects. 311: 42–47. Yang, R.Y., Zou, R.P. and Yu, A.B. (2000). Computer simulation of the packing of fine particles. Physical Review E. 62: 3900–3908. Yao, K.-M., Habibian, M.T. and O’Melia, C.R. (1971). Water and waste water filtration. Concepts and applications. Environmental Science & Technology. 5: 1105–1112. Zhuang, J., McCarthy, J.F., Tyner, J.S., Perfect, E. and Flury, M. (2007). In situ colloid mobilization in Hanford sediments under unsaturated transient flow conditions: effect of irrigation pattern. Environmental Science & Technology. 41: 3199–3204.
CHAPTER 9

Regional Modelling of Carbon, Nitrogen and Phosphorus Geospatial Patterns

Sabine Grunwald, Gustavo M. Vasques, Nicholas B. Comerford, Gregory L. Bruland and Christine M. Bliss
9.1 INTRODUCTION
Soil carbon (C), nitrogen (N) and phosphorus (P) cycling controls the biogeochemical processes that form different soils. Turnover rates, microbial activity, leaching and other processes depend on the lability/recalcitrance of C, N and P, which determine resource depletion, recycling and accumulation of C, N and P within soil profiles and across landscapes. Different pools of C, N and P may show distinct landscape-level patterns and levels of variability that are related to the processes that generated them. Global climate change and eutrophication of ecosystems are highly dependent on cycling of C, N and P in the atmosphere, biosphere and pedosphere. Redfield (1958) discovered an unexpected congruence in C : N : P in the world's oceans, and suggested that it can be used to draw inferences about the large-scale operation of the global biogeochemical system. He observed that, on average, plankton biomass contains C, N and P in an atomic ratio of 106 : 16 : 1, similar to the ratio in marine water, demonstrating that the abundance of C, N and P is regulated by reciprocal interactions between marine organisms and the ocean environment. Sterner and Elser (2002) pointed out the elegance and simplicity of this stoichiometric relationship, the Redfield ratio, and searched for similar patterns and relationships in terrestrial ecosystems. Recent studies have found Redfield-like ratios in plant communities and forest ecosystems (McGroddy et al., 2004; Sterner and Elser, 2002) that may provide insight into the nature of nutrient limitations in terrestrial ecosystems. McGroddy et al. (2004) observed C : N : P ratios in forests worldwide of 1212 : 28 : 1 in foliage and 3007 : 45 : 1 in litter, and Cleveland and Liptzin (2007) found remarkably consistent C : N : P ratios in both total soil (186 : 13 : 1) and soil microbial biomass pools (60 : 7 : 1) in a global dataset of non-agricultural, unfertilised fields. The total soil C : N ratios varied between 2 and 30, and total soil N : P ratios between 1 and 77. It has been suggested that these ratios are robust across large geographical distances. Tighter mean C : N ratios of soil organic matter at the global scale were estimated by Batjes (1996), ranging from 9.9 for arid Yermosols to 25.8 for Histosols. At first sight it appears that universal ratio models exist in different aquatic and terrestrial ecosystems, but there is substantial unexplained variability. For example, McGroddy et al. (2004) observed R² values from 0.87 (C : N) to 0.56 (N : P) in litter, and from 0.41 (C : N) to 0.64 (N : P) in foliage. According to Cleveland and Liptzin (2007), in a global dataset covering grassland, forest and other non-agricultural sites, the R² was 0.75 for soil C : N, 0.31 for soil C : P and 0.41 for soil N : P. This variability in soil systems may be due to the different mobility of elements within the soil medium and significant spatial differences in soil nutrient concentrations driven by state factor variation (Jenny, 1941). Other factors may include nutrient redistribution mechanisms that operate only at fine scales (e.g. litterfall), perpetuating soil nutrient heterogeneity. Although studies at the global scale have found average trends in C : N : P ratios (Cleveland and Liptzin, 2007; Hungate et al., 2003; McGroddy et al., 2004; Sterner and Elser, 2002), knowledge of the spatial variability of these ratios at regional scales is lacking. These ratios, and in particular C : N and N : bioavailable P, provide signatures of nutrient limitations in aquatic and terrestrial ecosystems, of primary productivity, which is dependent on the availability of N, and of nutrient enrichment, which poses risks to the water quality and ecological resilience of a system.
The spatial variability of soil nutrients and environmental properties across larger regions is complex, as documented in aquatic (Grunwald and Reddy, 2008; Rivero et al., 2007) and terrestrial (Bruland et al., 2008; Grunwald et al., 2006; Lamsal et al., 2006) ecosystems, and may affect geospatial patterns of C : N, N : P and other ratios. Spatially explicit models that predict soil properties using environmental landscape properties have been developed using statistical, geostatistical or mixed methods (Grunwald, 2006; McBratney et al., 2003). The objectives of this study were to:

• characterise geospatial patterns of C, N and P across a large subtropical region in the south-eastern USA;
• assess the spatial variability of C : N and N : P ratios; and
• explain C, N and P spatial patterns in soils using inferential, spatially explicit models that relate environmental soil-forming factors to soil properties.
9.2 MATERIALS AND METHODS

9.2.1 Study area
The Santa Fe River Watershed (SFRW) (size: 3585 km²), part of the Suwannee Basin, is located in north-central Florida. The soils of the SFRW are predominantly sandy in texture, with some loamy to clayey areas and organic soils. The dominant soil orders in the watershed are Ultisols (36.7%), Spodosols (25.8%) and Entisols (14.7%), with
smaller areas of Histosols (2.0%), Inceptisols (1.1%) and Alfisols (1.0%). The two main physiographic regions in the watershed are the Gulf Coastal Lowlands and the Northern Highlands, which are separated from one another by the Cody Scarp, an escarpment that runs SE–NW across north-central Florida. The geology is dominated by limestone and karst terrain, capped by Miocene, Pliocene and Pleistocene–Holocene sediments (Randazzo and Jones, 1997). Elevation in the SFRW ranges from 1.5 to 92 m above mean sea level, with extensive coverage of level (0–2%) slopes or gently sloping and undulating (2–5%) slopes; the major exception to this pattern is the moderately and strongly sloping land (5–12% slopes) along the Cody Scarp. The climate is subtropical, with a mean annual precipitation of 1224 mm and a mean annual temperature of 20.4°C (1972–2008) (National Climatic Data Center, National Oceanic and Atmospheric Administration, 2008). Predominant land use/land cover consists of pine plantations (30%), wetlands (14%), pasture (13%), rangeland (12%) and upland forest (11%). Urban areas occupy less than 7% of the watershed, and crops around 5% (Florida Fish and Wildlife Conservation Commission, 2003).
9.2.2 Soil and environmental data
Soil data

Soil samples were collected at four depth increments – layer 1 (L1), 0–30 cm; layer 2 (L2), 30–60 cm; layer 3 (L3), 60–120 cm; and layer 4 (L4), 120–180 cm – in composites proportional to the depth of sampling to ensure a constant sampling support for each increment. A stratified-random sampling design was used, with strata derived from land use–soil order factor combinations in ArcGIS (Environmental Systems Research Institute (ESRI), Redlands, CA, USA). Sample sites were randomly selected within strata in proportion to their areal extent. The sampling design and protocol have been described in detail by Grunwald et al. (2006), Lamsal et al. (2006) and Bruland et al. (2008). Each sample location was georeferenced using a differential global positioning system in the geographic coordinate system (latitude and longitude), and the coordinates were projected into an Albers Equal Area map projection using the ArcGIS software. Total C, total N and Mehlich-1 P were measured in all four layers. Each soil sample was analysed for total C and N by combustion using a Thermo-Finnigan Flash EA1112 elemental analyser (Thermo Electron Corp., Waltham, MA, USA), and Mehlich-1 P was analysed following the procedure developed by Nelson et al. (1953). Mehlich-1 P, rather than total P, is commonly used to assess the soil P status for plant growth; the method was developed for acidic, low cation exchange capacity soils, such as those in Florida.

Environmental landscape data

We used the SCORPAN approach suggested by McBratney et al. (2003) to assemble various environmental factors for the SFRW (Equation 9.1). This conceptual model assumes that spatially explicit relationships exist between a target property (e.g. soil C) and environmental soil-forming variables (Grunwald, 2006; Grunwald et al., 2007).
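The proportional allocation of sites to land use–soil order strata described above can be sketched as follows; the stratum names, areas and budget below are hypothetical stand-ins (the actual design was built in ArcGIS), and the largest-remainder rounding is one simple way to make the counts sum to the total.

```python
import numpy as np

def allocate_samples(strata_areas, n_total):
    """Split a fixed sampling budget across strata in proportion to their
    areal extent, using largest-remainder rounding so counts sum to n_total."""
    areas = np.asarray(list(strata_areas.values()), dtype=float)
    exact = n_total * areas / areas.sum()
    counts = np.floor(exact).astype(int)
    # hand the leftover samples to the strata with the largest remainders
    for i in np.argsort(exact - counts)[::-1][: n_total - counts.sum()]:
        counts[i] += 1
    return dict(zip(strata_areas, counts.tolist()))

# hypothetical land use x soil order strata with areas in km^2
strata = {"pine/Ultisols": 820.0, "pasture/Spodosols": 410.0,
          "wetland/Histosols": 95.0, "rangeland/Entisols": 300.0}
alloc = allocate_samples(strata, n_total=143)
```

With these illustrative areas, the largest stratum (pine/Ultisols) receives the most sites and the four counts sum exactly to the 143 observations reported per layer.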
Sp[x, y, t] = f(S[x, y, t], C[x, y, t], O[x, y, t], R[x, y, t], P[x, y, t], A[x, y, t], N)   (9.1)

where Sp is the predicted soil attribute or soil class (e.g. soil C, N or C : N); S is soil attributes at a geographic location (point); C is climate; O is organisms (land use, vegetation); R is relief (topography); P is parent material (geology); A is age, or time; N is space; x is the x-coordinate (easting); y is the y-coordinate (northing); and t is time. SCORPAN factors were delineated for the SFRW using various spatial data layers in ArcGIS. A summary of factors, data sources and short descriptions is provided in Table 9.1. Spatial data layers were either GIS shapefiles or rasters (grids) at 30 m resolution that covered the whole watershed. For each soil sampling location the corresponding SCORPAN values were derived using an extraction function (Value to Points) in ArcGIS. A matrix was built that contained site identification numbers for each soil observation, geographic coordinates, measured C, N and P values, and SCORPAN values. This matrix was used to process data using statistical and geostatistical methods.
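The point-extraction step that builds the SCORPAN matrix can be mimicked outside ArcGIS with a nearest-cell lookup; the sketch below assumes a north-up raster with square cells and a known upper-left corner, and the layer values and site coordinates are made up for illustration.

```python
import numpy as np

def extract_at_points(grid, x0, y0, cell, xs, ys):
    """Nearest-cell lookup, mimicking an ArcGIS 'Value to Points' extraction
    for a north-up raster with upper-left corner (x0, y0) and square cells."""
    cols = np.floor((np.asarray(xs) - x0) / cell).astype(int)
    rows = np.floor((y0 - np.asarray(ys)) / cell).astype(int)
    return grid[rows, cols]

# hypothetical 30 m environmental layer and three georeferenced sites
elev = np.arange(16, dtype=float).reshape(4, 4)   # ELEV-like raster
x0, y0, cell = 500_000.0, 3_300_000.0, 30.0       # upper-left corner, 30 m cells
xs = np.array([500_015.0, 500_075.0, 500_105.0])
ys = np.array([3_299_985.0, 3_299_925.0, 3_299_895.0])

# one row per site: easting, northing, extracted SCORPAN value
scorpan_matrix = np.column_stack([xs, ys, extract_at_points(elev, x0, y0, cell, xs, ys)])
```

In practice one column would be appended per raster in Table 9.1, alongside the site identifiers and the measured C, N and P values.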
9.2.3 Geospatial prediction models
Spatial dependence structures were derived from semivariogram analysis for each soil property (C, N and P) and for the ratios. Semivariograms plot the semivariance against the distance (lag h) between data pairs. Characteristic parameters that can be derived from semivariograms are the nugget, the sill and the spatial autocorrelation range. The nugget describes measurement error and fine-scale variability. Semivariograms of properties or processes that exhibit second-order stationarity reach an upper bound at which they remain after their initial increase; this upper bound, or maximum, is known as the sill. The range marks the maximum distance of spatial autocorrelation (Grunwald, 2006; Webster and Oliver, 2001). Kriging, a weighted interpolation method that takes into account the spatial dependence structure identified in semivariograms, was used to interpolate the soil properties (C, N and P). As the soil properties showed non-Gaussian distributions, lognormal Kriging (LNK) (Goovaerts, 1997) was used to interpolate properties within each layer across the watershed. The back-transformation into the original attribute space was performed according to the formula provided by Webster and Oliver (2001). The geostatistical analysis was performed in ISATIS 9.0 (Geovariances Inc., France). Cross-validation was performed to evaluate prediction performance, reporting robust mean prediction errors and robust mean standard errors (meaning that only predictions whose standardised errors lie between −2.5 and 2.5 are considered). The C : N and N : P ratio maps were derived by dividing the interpolated soil property grids within each soil layer.

To standardise C, N and P observed within different soil layers and across the watershed into a range from 0 to 100, a fractional rank method was used, where each rank was divided by the respective number of cases with valid values and multiplied by 100. Then, ordinary Kriging was used to interpolate the fractional ranks of each variable across the watershed. This standardisation of soil property values allowed comparison of soil data side-by-side along soil profiles (layers 1 to 4) and spatially across the watershed using a common scale (0 to 100).

Table 9.1: SCORPAN factors and their data sources within the Santa Fe River Watershed.

S (soil)
  AVWATERCAP (a)             Dominant soil available water capacity within the map unit (0–180 cm)
  CLAYPCT (a)                Dominant clay content within the map unit (0–180 cm)
  DRAINCLN (a)               Dominant soil drainage class within the map unit (0–180 cm)
  HYDRICRATN (a)             Dominant soil hydric rating category within the map unit (0–180 cm)
  HYDROLGRPN (a)             Dominant soil hydrologic group within the map unit (0–180 cm)
  KSAT (a)                   Dominant soil saturated hydraulic conductivity within the map unit (0–180 cm)
  SANDPCT (a)                Dominant sand content within the map unit (0–180 cm)
  SGREATGRPN (a)             Soil taxonomic great group of major component within the map unit
  SILTPCT (a)                Dominant silt content within the map unit (0–180 cm)
  SMSTSUBCLN (a)             Soil taxonomic moisture subclass within the map unit
  SORDERN (a)                Soil taxonomic order of major component within the map unit
  SPTSIZCLN (a)              Soil taxonomic particle size class within the map unit
  SSERIESN (a)               Soil taxonomic series of major components within the map unit
  SSUBGROUPN (a)             Soil taxonomic subgroup of major components within the map unit
  SSUBORDERN (a)             Soil taxonomic suborder of major components within the map unit
  STEMPCLN (a)               Soil taxonomic temperature class within the map unit

C (climate)
  PPTHIST (b)                Average annual precipitation (1993–2005)
  TEMPHIST (b)               Average annual temperature (1993–2005)

O (organisms)
  BD1, ..., BD7 (c)          Landsat ETM+ spectral bands (2004) – six bands
  BD1ME333, ..., BD7ME333 (c)  Mean spectral bands derived from Landsat ETM+ within a 3 × 3 window
  ECOREGION (f)              Eco-region
  IRREDDIFF (c)              Vegetation index derived from Landsat ETM+ spectral bands: infrared − red band
  IRDIFME333 (c)             Mean vegetation index within a 3 × 3 window
  IRREDRATIO (c)             Infrared/red ratio index derived from Landsat ETM+: infrared/red band
  IRRATME333 (c)             Mean infrared/red ratio within a 3 × 3 window
  LU1995N (d)                Land use 1995
  LU2003N (e)                Land use 2003
  NDVI (c)                   Normalised difference vegetation index derived from Landsat ETM+ spectral bands: (infrared − red)/(infrared + red)
  NDVIME333 (c)              Mean NDVI within a 3 × 3 window
  PC1, ..., PC6 (c)          Principal component scores derived from Landsat ETM+ spectral bands
  PC1ME333, ..., PC6ME333 (c)  Mean principal component scores derived from Landsat ETM+ spectral bands within a 3 × 3 window
  PHYSTYPE (f)               Physiographic type
  TC1, ..., TC6 (c)          Tasselled cap indices derived from Landsat ETM+ spectral bands
  TC1ME333, ..., TC6ME333 (c)  Mean of tasselled cap indices derived from Landsat ETM+ spectral bands within a 3 × 3 window
  TNDVI (c)                  Transformed NDVI (derived from Landsat ETM+: SQRT(NDVI + 0.5))
  TNDVIME333 (c)             Mean of transformed NDVI within a 3 × 3 window

R (relief)
  ASPECTN (g)                Aspect
  CTI (g)                    Compound topographic index
  CTIME333                   Mean compound topographic index within a 3 × 3 window
  ELEV (g)                   Elevation (30 m resolution)
  ELEVME333 (g)              Mean elevation within a 3 × 3 window
  SLOPEPCT (g)               Slope
  SLOPEPME333 (g)            Mean slope within a 3 × 3 window
  UPAREA (g)                 Upslope drainage area

P (parent material)
  DRASTICIND (f)             Drastic index of Floridian aquifer vulnerability to pollution
  EVGEOCATN (f)              Environmental geological category
  GEOLEPOCHN (f)             Geological epoch
  GEOLUNITN (f)              Geological unit
  HYDGEOCLN (f)              Hydrogeological unit

A, N (age, space)
  X_COORD                    x-coordinate
  Y_COORD                    y-coordinate
  XY                         x–y coordinates

(a) Soil Geographic Database (SSURGO) – Soil Data Mart (Natural Resources Conservation Service).
(b) National Climatic Data Center – National Oceanic and Atmospheric Administration.
(c) United States Geological Survey, Earth Resources Observation and Science (EROS) Data Center.
(d) Suwannee River Water Management District.
(e) Florida Fish and Wildlife Conservation Commission.
(f) Florida Department of Environmental Protection.
(g) Digital Elevation Model – National Elevation Model (United States Geological Survey).
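The interpolation machinery above can be sketched compactly. The code below is illustrative only (the study used ISATIS with lognormal Kriging and a back-transformation): a spherical semivariogram model, an ordinary-kriging prediction at one target point, and the fractional-rank standardisation. The nugget, sill and range used in the example are those fitted for log C in L1 (Table 9.4); the coordinates and values are made up.

```python
import numpy as np

def spherical(h, nugget, sill, a):
    """Spherical semivariogram model: gamma(0) = 0, rises through the nugget
    and levels off at the sill once the lag h reaches the range a."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h == 0.0, 0.0, np.where(h >= a, sill, g))

def ordinary_kriging(coords, values, target, nugget, sill, a):
    """Ordinary kriging at one target point: solve the kriging system built
    from the semivariogram, with a Lagrange multiplier enforcing that the
    weights sum to one (unbiasedness)."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d, nugget, sill, a)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(coords - target, axis=1), nugget, sill, a)
    w = np.linalg.solve(A, b)[:n]          # drop the Lagrange multiplier
    return float(w @ values)

def fractional_rank(v):
    """Standardise to 0-100: each rank divided by the number of valid cases,
    times 100 (ties broken by array order; the chapter does not specify)."""
    v = np.asarray(v, dtype=float)
    ranks = np.argsort(np.argsort(v)) + 1
    return ranks / len(v) * 100.0

# log C in L1 (Table 9.4): nugget 0.055, sill 0.078, range 9248 m
coords = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])
logc = np.array([4.0, 3.5, 4.2])
estimate = ordinary_kriging(coords, logc, np.array([500.0, 500.0]),
                            0.055, 0.078, 9248.0)
```

With a zero nugget the predictor honours the data exactly at an observation location, which is a convenient sanity check on the kriging system.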
Regression trees (Breiman et al., 1984) were used in committee tree mode to infer C : N and N : P ratios, respectively, using SCORPAN variables (Table 9.1). Tree-based models are non-parametric, and are well suited to the analysis of complex, non-linear relationships between predictor variables (i.e. SCORPAN factors) and the target variable (i.e. a soil property ratio). Committee trees are a special variant of regression trees, in which a set of trees is generated by resampling with replacement from the original training data; the trees are then combined by averaging their outputs. In ARCing (Adaptive Resampling and Combining), the way a new sample is drawn for the next tree depends on the performance of the prior tree (Breiman, 1998; CART, 2002): the probability with which a case is selected is increased by the frequency with which it has been misclassified in previous trees. Thus cases that are difficult to classify receive an increasing probability of selection, whereas cases that are classified correctly receive declining weights from resample to resample. Multiple versions of the predictor are formed by making bootstrap replicates of the learning set and using these as new learning sets; the predicted value generated by the committee is the average over these multiple versions of the predictor. We generated committee trees in ARCing mode using 250 trees. The methodology of regression and committee trees has been described in more detail by Breiman et al. (1984), Breiman (1996, 1998) and Grunwald et al. (2009). To investigate relationships between CNP classes and SCORPAN factors, classification trees (Breiman et al., 1984) were used. The CNP classes were derived using the medians of C, N and P as cutoffs to separate observed soil properties above or below the cutoff value, respectively. Tree-based modelling was performed in the CART software package (Salford Systems, San Diego, CA, USA).
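The adaptive-resampling idea can be illustrated with a toy committee. This is not CART's ARCing implementation: the base learner below is a plain regression stump, and the error-proportional selection probabilities are one simple regression analogue of the reweighting rule described above; all names and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(x, y):
    """Regression stump: the single split minimising squared error."""
    best = (np.inf, None, y.mean(), y.mean())
    for s in np.unique(x)[:-1]:
        left, right = y[x <= s], y[x > s]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    _, s, lo, hi = best
    return lambda q: np.where(q <= s, lo, hi)

def arcing_committee(x, y, n_trees=25):
    """Committee of stumps grown by adaptive resampling: cases with larger
    cumulative error get a higher chance of entering the next resample,
    and the committee prediction is the average over all trees."""
    n, err, trees = len(y), np.zeros(len(y)), []
    for _ in range(n_trees):
        p = (1.0 + err) / (1.0 + err).sum()        # selection probabilities
        idx = rng.choice(n, size=n, replace=True, p=p)
        trees.append(fit_stump(x[idx], y[idx]))
        err += np.abs(trees[-1](x) - y)            # accumulate case-wise error
    return lambda q: np.mean([t(q) for t in trees], axis=0)

x = np.linspace(0.0, 1.0, 50)
y = (x > 0.5).astype(float)                        # toy step-shaped target
predict = arcing_committee(x, y)
```

On this step-shaped target every resampled stump recovers a threshold near 0.5, so the averaged committee reproduces the two plateaus.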
The cross-validated relative error (CVRE) and resubstitution relative error (RRE) (CART, 2002; Grunwald et al., 2009) were used to evaluate model performance using tenfold cross-validation.
9.3 RESULTS AND DISCUSSION

9.3.1 Variation of C, N and P within the watershed
Total C, N and P showed pronounced variation throughout the watershed (Table 9.2). Total C medians were 10 529 mg kg⁻¹ (L1), 3705 mg kg⁻¹ (L2), 1808 mg kg⁻¹ (L3) and 1087 mg kg⁻¹ (L4), with maxima of 201 988 mg kg⁻¹ in L1 and 18 918 mg kg⁻¹ in L4. This indicates large variability along soil profiles and across the watershed. Total N showed less variation, with median values of 493 mg kg⁻¹ (L1) and 116 mg kg⁻¹ (L4), maximum values of 10 829 mg kg⁻¹ (L1) and 770 mg kg⁻¹ (L4), and ranges of 10 739 mg kg⁻¹ (L1) to 742 mg kg⁻¹ (L4). Whereas C and N values tended to decline along soil profiles (L1 to L4), P tended to be more variable. The mean of P was 27.8 mg kg⁻¹ in L1 and 36.9 mg kg⁻¹ in L4, and the median was 6 mg kg⁻¹ in L1 and 2 mg kg⁻¹ in L4. Interestingly, the range of P in L1, at 670 mg kg⁻¹, was less than half the range of 1576 mg kg⁻¹ in L4. The relatively high N and P values in the top soil layer can be attributed to fertilisation and land-use management. In sandy soils in the watershed most
of the P is easily leachable, owing to the lack of fixation of P into insoluble or fixed forms in the surface horizons (Stevenson and Cole, 1999). Mean values of the C : N ratio were 22.5 (L1), 25.4 (L2), 20.0 (L3) and 12.4 (L4), with similar patterns in the median C : N ratios of 21.3 (L1), 23.8 (L2), 16.1 (L3) and 9.4 (L4) (Table 9.3). The high C : N ratio in L2 was due to relatively high C (mean of 8105 mg kg⁻¹) compared with N (mean of 329 mg kg⁻¹). Spodic horizons relatively rich in C typically occur at depths of 30–60 cm (L2) within the watershed, which may explain the relatively high C values in this layer. In contrast, the low C : N ratio in L4 was due to relatively low C (mean of 1659 mg kg⁻¹) compared with N (mean of 133 mg kg⁻¹). In general, residues with C : N ratios greater than about 30, equivalent to N contents of about 1.5% or less, contain lower mineral N reserves owing to net immobilisation. On the other hand, residues with C : N ratios below 20, or N contents greater than about 2.5%, often lead to an increase in mineral N levels through net mineralisation (Stevenson and Cole, 1999). However, there are multiple confounding factors, such as climate, nutrient application rate, lignin content and the level of activity of the soil microflora, that may affect the C : N relationship (Stevenson and Cole, 1999). They provided typical C : N ranges of 6–12 for microbial biomass, 5–14 for sewage sludge, 10–12 for soil humus, 150–500 for forest (woody) wastes, and 15–20 for compost. The mean and median C : N ratios in our study were slightly higher than those in other studies, with mean ratios of 14.3 (soils) found by Cleveland and Liptzin (2007) and mean ratios between 15 and 27 found by Bengtsson et al. (2003) in forest soils. Corstanje et al. (2006) found C : N between 11.5 and 19.3 in a Florida wetland, and Reddy et al. (1998) identified a C : N of 15 in another wetland in Florida affected by nutrient influx.
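Summary statistics of the kind tabulated in Tables 9.2 and 9.3 can be reproduced for any layer with a few lines of numpy. The moment-based skewness and excess-kurtosis estimators below are one common choice; the chapter does not state which estimators were used, so computed values need not match the tables exactly.

```python
import numpy as np

def summarize(v):
    """Descriptive statistics in the style of Tables 9.2 and 9.3: mean,
    standard error of the mean (SEM), median, standard deviation (STD),
    skewness, excess kurtosis and range (maximum minus minimum)."""
    v = np.asarray(v, dtype=float)
    n, mean, std = v.size, v.mean(), v.std(ddof=1)
    z = (v - mean) / std
    return {
        "observations": n,
        "mean": mean,
        "sem": std / np.sqrt(n),            # standard error of the mean
        "median": np.median(v),
        "std": std,
        "skewness": (z ** 3).mean(),        # moment-based estimator
        "kurtosis": (z ** 4).mean() - 3.0,  # excess kurtosis
        "range": v.max() - v.min(),
    }
```

For a symmetric sample the skewness term vanishes, and the SEM is always the sample standard deviation shrunk by the square root of the sample size.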
The SFRW is a mixed land-use ecosystem that includes wetlands as well as upland sites under different land uses (urban, agricultural, forest and natural), thus contributing more variability in terms of C and N. Mean N : P ratios showed remarkable differences among layers, with 1186 (L1), 3258 (L2), 8789 (L3) and 7729 (L4), whereas median N : P ratios were 70.5 (L1), 52.0 (L2), 59.3 (L3) and 49.6 (L4). These vertical patterns were controlled by the very large P values in layers 3 and 4 relative to N. The maximum measured P was 686 mg kg⁻¹ in L3 and 1576 mg kg⁻¹ in L4; in contrast, N showed maxima of 1378 mg kg⁻¹ (L3) and 770 mg kg⁻¹ (L4). Stevenson and Cole (1999) pointed out that the P content of soils is more variable than C and N, because C and N occur as structural components of humic and fulvic acids, whereas P does not. The P content in soils varies considerably, depending on the nature of the parent material, the degree of weathering, and the extent to which P has been lost through leaching. In Florida, soils formed in P-rich parent material (e.g. high-P limestones) may reach up to 2000 mg kg⁻¹ (Stevenson and Cole, 1999). In the SFRW, the maximum observed P value, 1576 mg kg⁻¹, was found in the bottom layer (L4), contributing to the relatively high N : P ratio. In contrast, Ultisols, which are prominent in the SFRW, tend to have very low P contents (<200 mg kg⁻¹). Reddy et al. (1998) found an N : P of 19 in a nutrient-enriched zone and an N : P of 60 in a non-affected zone of a subtropical Florida wetland. They attributed these different N : P ratios to the fact that, unlike C and N, P influx into a wetland is usually retained within the system in organic and inorganic pools. Thus, P loading can dramatically increase the total P content of detritus and surface soil. Cleveland and Liptzin (2007) observed a mean N : P of 13.1 in soils on grassland and forest sites.

Table 9.2: Descriptive statistics of C (mg kg⁻¹), N (mg kg⁻¹) and P (mg kg⁻¹) in layers 1 (L1), 2 (L2), 3 (L3) and 4 (L4).

C:
                 L1          L2          L3          L4
Observations     143         143         141         135
Mean             14 871.7    8 105.0     3 928.7     1 658.9
SEM (a)          1 828.6     2 126.9     1 111.4     182.2
STD (b)          21 867.4    25 433.7    13 197.7    2 117.3
Median           10 528.6    3 704.9     1 808.2     1 087.3
Skewness         6.3         8.5         8.0         5.0
Kurtosis         47.1        81.3        64.4        34.2
Range (c)        199 318.1   268 062.4   112 724.8   18 748.5
Minimum          2 669.6     932.1       384.4       168.9
Maximum          201 987.7   268 994.5   113 109.2   18 917.5

N:
                 L1          L2          L3          L4
Observations     143         143         141         135
Mean             748.4       328.5       148.4       132.9
SEM (a)          103.9       112.3       11.8        7.8
STD (b)          1 243.6     1 342.9     139.6       90.5
Median           492.9       159.3       119.6       115.6
Skewness         6.2         11.4        5.5         3.3
Kurtosis         43.5        133.9       43.4        18.5
Range (c)        10 739.0    15 966.2    1 340.8     742.3
Minimum          90.1        53.2        36.9        27.3
Maximum          10 829.2    16 019.4    1 377.8     769.6

P:
                 L1          L2          L3          L4
Observations     143         143         140         135
Mean             27.8        18.4        24.6        36.9
SEM (a)          5.6         4.2         6.1         12.3
STD (b)          66.7        49.8        71.9        143.1
Median           6.1         3.1         1.7         1.9
Skewness         6.8         7.3         6.4         9.5
Kurtosis         61.1        67.2        52.5        101.4
Range (c)        670.2       509.7       686.3       1 576.3
Minimum          0.03        0.005       0.005       0.005
Maximum          670.2       509.7       686.3      1 576.3

(a) SEM: standard error of the mean.
(b) STD: standard deviation.
(c) Range = maximum − minimum values.

Table 9.3: Descriptive statistics of C : N and N : P ratios in layers 1 (L1), 2 (L2), 3 (L3) and 4 (L4).

C : N:
                 L1          L2          L3          L4
Observations     143         143         141         135
Mean             22.5        25.4        20.0        12.4
SEM (a)          0.6         1.4         1.9         0.8
STD (b)          6.6         16.5        22.3        9.1
Median           21.3        23.8        16.1        9.4
Skewness         0.6         7.4         8.5         2.3
Kurtosis         0.52        70.7        87.4        6.1
Range (c)        30.9        180.9       248.6       51.2
Minimum          12.4        8.9         5.2         0.5
Maximum          43.3        189.8       253.7       51.8

N : P:
                 L1          L2          L3          L4
Observations     143         143         141         135
Mean             1 186.2     3 258.1     8 789.3     7 728.8
SEM (a)          267.1       987.5       2 211.8     1 500.2
STD (b)          3 193.9     11 808.5    26 169.9    17 266.3
Median           70.5        52.0        59.3        49.6
Skewness         3.8         5.5         7.9         5.1
Kurtosis         14.9        36.7        78.3        37.7
Range (c)        19 691.1    101 313.8   275 552.7   153 929.4
Minimum          1.1         1.1         0.5         0.1
Maximum          19 692.2    101 314.8   275 553.3   153 929.5

(a) SEM: standard error of the mean.
(b) STD: standard deviation.
(c) Range = maximum − minimum values.
9.3.2 Geospatial patterns of C, N and P across the watershed
The variation of C, N and P in L1 across the watershed showed contrasting patterns (Figure 9.1 – see colour insert), although the spatial autocorrelation ranges of C, N and P were similar (Table 9.4). The spatial autocorrelation range for C in L1 (9248 m) was similar to that for N (9247 m) but smaller than that for P (15 010 m). Similar spatial autocorrelation ranges were found for C, N and P across all layers (L1 to L4), from 9247 m (shortest) to 17 774 m (longest). This suggests that regional patterns were pronounced, with relatively long spatial autocorrelation ranges. The nugget : sill ratio, which is inversely related to the strength of spatial dependence, was highest for C, with 0.71 (L1), 0.85 (L2), 0.79 (L3) and 0.49 (L4). Nugget : sill ratios were lower for N, with 0.54 (L1), 0.58 (L2), 0.63 (L3) and 0.43 (L4); for P they were 0.57 (L1), 0.65 (L2), 0.38 (L3) and 0.34 (L4), the lowest of all in the two deeper layers. Elevated C and N occurred in the southern part of the watershed, associated with mixed land uses (agriculture–upland forest–pineland complexes) on Ultisols and Spodosols, but were also due to coincidental sampling of wetland sites in this area. In contrast, relatively low C and N values were found in the south-western portion on Entisols, Ocala limestone and the Hawthorn Group (including the Coosawhatchie Formation) under mixed land use (Figure 9.1).

Table 9.4: Interpolation parameters and cross-validation results for soil properties log C (log-transformed total C in mg kg⁻¹), log N (log-transformed total N in mg kg⁻¹) and log P (log-transformed Mehlich-P in mg kg⁻¹) in layers 1 (L1), 2 (L2), 3 (L3) and 4 (L4).

Soil      Layer  Semivariogram  Nugget  Sill    Range/m (b)  Robust mean  Robust mean
property         model (a)                                   error (c)    std. error (c)
log C     L1     Spherical      0.055   0.078    9 248       0.02642      0.10158
log C     L2     Spherical      0.104   0.122   10 745       0.09212      0.07161
log C     L3     Spherical      0.096   0.122   11 264       0.02549      0.05260
log C     L4     Bessel-J       0.064   0.130    9 248       0.07153      0.00203
log N     L1     Spherical      0.053   0.098    9 247       0.00886      0.03977
log N     L2     Spherical      0.053   0.092    9 247       0.02700      0.04772
log N     L3     Spherical      0.042   0.067    9 250       0.00110      0.02006
log N     L4     Spherical      0.025   0.058   17 774       0.00349      0.02029
log P     L1     Spherical      0.499   0.873   15 010       0.03204      0.03225
log P     L2     Bessel-J       0.966   1.482    9 248       0.02150      0.02956
log P     L3     Bessel-J       1.000   2.649    9 250       0.01364      0.00750
log P     L4     Bessel-J       1.127   3.311    9 247       0.00095      0.00701

(a) Search neighbourhood: minimum samples, 4; maximum samples, 10.
(b) Range: spatial autocorrelation derived from fitted semivariogram.
(c) A data item is considered robust when its standardised error lies between −2.5 and 2.5.

Standardised (fractional ranks between 0 and 100) C, N and P patterns in the four different layers across the watershed are shown in Figure 9.2. In general, across all four layers, C and N showed similar spatial behaviour, with the largest values in the southern portion, the lowest values in the south-west, and hotspots in the northern and eastern sections. In contrast, the highest P values in L1 to L4 occurred along a diagonal line dissecting the watershed into lowlands (western portion) and uplands (eastern portion), where the Cody Scarp is located. This spatial behaviour of soil P was pronounced in all layers. Soils located on steeper slopes along the escarpment are eroded, with subsoil exposed more closely to the soil surface. The elevated soil P may have been caused by land-use management in the upper soil profile and by lithology/parent material in the bottom soil profile. The relationships between parent material and soil P are evident, with soils close to the drainage outlet (the most western portion of the watershed) formed in material derived from Ocala limestone, followed in the west–east direction by undifferentiated sediments, the Coosawhatchie Formation (and minor portions of the Statenville Formation), and undifferentiated Tertiary sediments. According to Randazzo and Jones (1997), the Coosawhatchie Formation is a complex mixture of phosphatic carbonates, sand and clay, and the Statenville Formation contains deposits of phosphatic sand and clay and fewer carbonate beds, which probably contributed to the elevated P content found in subsoils of the watershed.

Figure 9.2: Standardised spatial patterns of soil C, N and P across the Santa Fe River Watershed. [Colour maps of fractional ranks (0–100%) of C, N and P for L1 (0–30 cm), L2 (30–60 cm), L3 (60–120 cm) and L4 (120–180 cm), with 0–40 km scale bars.]

Point distributions and interpolated ratio maps of C : N in the four layers across the watershed are shown in Figures 9.3 and 9.4, respectively (see colour insert). They show distinct vertical and horizontal distribution patterns, with tight ratios in the subsurface and wider ones in the upper layers, especially L2. Interestingly, although standardised C and N patterns across all layers and the watershed mirrored each other (Figure 9.2), absolute C and N values showed contrasting spatial distribution patterns, resulting in C : N ratios <10 in L4 in the north-west and north-east, and >35 in L2 in the western portion of the watershed in proximity to the drainage outlet. The latter high C : N ratios of >35 can be attributed to the high water table, which tends to increase C accumulation and decrease N through drainage, volatilisation, uptake and denitrification. Wetland sites were associated with larger C and relatively low N. Net mineralisation of N, i.e. a net increase in NH₄⁺ and NO₃⁻, tends to occur in areas with C : N < 20, which were pronounced across the watershed, occurring on 39.5% (L1), 8.4% (L2), 57.0% (L3) and 96.6% (L4) of the area (Figure 9.4). This suggests that, in these specific areas, immobilised N is released through net mineralisation, posing a risk of N leaching into the aquifer, which is used for drinking water, while part of the C is liberated as carbon dioxide (CO₂) into the atmosphere. These processes operate in the upper soil layer in particular, where microbial populations are relatively dense. Net immobilisation – that is, the incorporation of inorganic N into organic forms – tends to occur in areas with C : N > 30, but was minor, covering 2.1% (L1), 20.1% (L2), 3.1% (L3) and 0.0% (L4). On the remaining areas within the watershed, with C : N ratios between 20 and 30, neither gains nor losses in C and N are expected; these covered 58.4% (L1), 71.6% (L2), 39.9% (L3) and 3.4% (L4).
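The three C : N process zones used above (net mineralisation below 20, no net change from 20 to 30, net immobilisation above 30) amount to a simple threshold classification of the interpolated ratio grids. A minimal sketch, with a hypothetical 4-cell ratio map:

```python
import numpy as np

def cn_zones(cn_ratio_grid):
    """Classify a C : N ratio map into the three process zones used in the
    chapter and report the areal fraction (%) of each: net mineralisation
    (C : N < 20), no net change (20-30) and net immobilisation (> 30)."""
    g = np.asarray(cn_ratio_grid, dtype=float)
    valid = np.isfinite(g)
    zones = {
        "net mineralisation (C:N < 20)": (g < 20) & valid,
        "no net change (20 <= C:N <= 30)": (g >= 20) & (g <= 30),
        "net immobilisation (C:N > 30)": (g > 30) & valid,
    }
    return {k: 100.0 * m.sum() / valid.sum() for k, m in zones.items()}

# hypothetical interpolated ratios on a 2 x 2 grid
shares = cn_zones([[12.0, 18.0], [25.0, 40.0]])
```

Run over the actual L1–L4 ratio grids, this kind of tally yields areal percentages such as the 39.5%/58.4%/2.1% split reported for L1.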
Although spatial coverage of low N : P (<8) predominated across the watershed, with 35.9% (L1), 65.6% (L2), 52.9% (L3) and 56.0% (L4), the extent of high N : P ratios (>16) was substantial: areas of 42.2% (L1), 21.5% (L2), 35.3% (L3) and 36.5% (L4) had N : P > 16. The spatial distribution patterns of N : P showed large ratios (>256 : 1) concentrated in the northern and south-eastern (L1) and north-eastern (L3 and L4) portions of the watershed (Figures 9.5 and 9.6 – see colour insert). N : P ratios in L2 reached a maximum of only 128 : 1. Along the Cody Scarp, N : P ratios were low (<8) owing to the very high P values relative to N; the high soil P there can be attributed to P-rich parent material exposed by erosional processes.
9.3.3 Models to infer soil C, N and P using environmental soil-forming factors
In the SFRW, the C : N : P ratios were 535 : 27 : 1 (L1), 441 : 18 : 1 (L2), 160 : 6 : 1 (L3) and 45 : 4 : 1 (L4) based on soil property means, and 1726 : 81 : 1 (L1), 1195 : 51 : 1 (L2), 1064 : 70 : 1 (L3) and 572 : 60 : 1 (L4) based on soil property medians. McGroddy et al. (2004) found average C : N : P ratios of 1212 : 28 : 1 for foliage and 3007 : 45 : 1 for litter in a global forest dataset, which reflected the increased proportion of C-rich structural material characteristic of terrestrial vegetation. In non-agricultural soils, the C : N : P ratio was 186 : 13 : 1 based on a meta-analysis of 186 observations around the globe (Cleveland and Liptzin, 2007). Stevenson and Cole (1999) argued that C : N : P ratios are very similar in different soil geographic regions across the world, with an average ratio of 108 : 8 : 1, which was only approximated in L3. The discrepancy may be explained by the diversity of soils and land uses in the SFRW, which generated complex and heterogeneous spatial patterns of C, N and P. In the case of L4, the tighter N : P ratio of 4 : 1 may be explained by the unusual P-rich parent material in close contact with subsurface soils in the watershed. For a better understanding of which environmental soil-forming factors impart major control on C : N and N : P, we utilised committee trees. Inference models to predict C : N ratios in the different soil layers are presented in Table 9.5. The predictor variables with the highest power to infer C : N ratios were:

• soil taxonomic attributes in L1;
• a mix of spectral (e.g. Landsat ETM+ Bands 5 and 7) and topographic data in L2;
• spectral data in L3; and
• soil taxonomic factors, soil texture and spectral data in L4.
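The C : N : P ratios quoted for the SFRW follow directly from the layer summaries in Table 9.2: each mass-based concentration is divided by the corresponding P value, so that P scales to 1. A small check, using the L1 mean and median values:

```python
def cnp_ratio(c, n, p):
    """Normalise mass-based C, N and P concentrations (mg kg^-1) to the
    C : N : P form used in the chapter, with P scaled to 1."""
    return round(c / p), round(n / p), 1

# layer 1 means and medians from Table 9.2
mean_ratio = cnp_ratio(14871.7, 748.4, 27.8)    # reported as 535 : 27 : 1
median_ratio = cnp_ratio(10528.6, 492.9, 6.1)   # reported as 1726 : 81 : 1
```

Note that these are mass ratios computed from mg kg⁻¹ concentrations, unlike Redfield's atomic 106 : 16 : 1, so the two are not directly comparable without converting by molar masses.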
Landsat Band 5 (short-wave infrared) is sensitive to the amount of water in plants, whereas Band 7 (mid-infrared) is sensitive to moisture, concentration of minerals, and various soil properties. Tasselled cap indices 1, 2 and 3 translate image brightness (or overall spectral reflectance), greenness (vegetation index) and wetness, respectively, which reflect the concentration of soil constituents (e.g., C, N and P) and moisture. The normalised difference vegetation index (NDVI) infers green biomass and chlorophyll content. Overall, committee trees performed well, with low CVRE (<0.40) and RRE (<0.04), except in L3, with an RRE of 0.48 and a CVRE of 1.17.
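The relative error measures reported here follow the convention of the CART literature (Breiman et al., 1984): the model's sum of squared residuals divided by that of a trivial mean predictor, computed on the training data (RRE) or under cross-validation (CVRE). A minimal sketch, with function name and example values ours:

```python
def relative_error(y_true, y_pred):
    """Relative error in the CART sense: sum of squared residuals of the
    model divided by the sum of squared residuals of the mean-only
    predictor. Near 0 means most variance is explained; near (or above)
    1 means the model does no better than predicting the mean."""
    mean_y = sum(y_true) / len(y_true)
    sse_model = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    sse_mean = sum((t - mean_y) ** 2 for t in y_true)
    return sse_model / sse_mean

# RRE uses predictions made on the training data itself; CVRE uses
# held-out predictions from tenfold cross-validation.
y = [10.0, 12.0, 14.0, 18.0]
fitted = [10.5, 11.5, 14.5, 17.5]   # resubstitution predictions
print(round(relative_error(y, fitted), 3))  # → 0.029
```

On this reading, the L3 values in Table 9.5 (CVRE of 1.17) indicate that the cross-validated tree predicted C : N in that layer worse than the layer mean would.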
Table 9.5: Committee tree models of C : N ratios (ARCing mode) in the four different layers (compare variable codes in Table 9.1) with tenfold cross-validation.

Layer 1 (0–30 cm): 23 terminal nodes; CVRE(a) 0.40; RRE(b) 0.03; variable importance/%(c): SSERIESN (100.0), SSUBGROUPN (66.6), SSUBORDERN (54.4), SORDERN (42.5), SPTSIZCLN (42.5).
Layer 2 (30–60 cm): 26 terminal nodes; CVRE 0.03; RRE 0.019; variable importance/%: BD5 (100.0), BD6 (99.9), PC1 (99.9), ELEV (89.6), ELEV 3 3 3 (89.6).
Layer 3 (60–120 cm): 4 terminal nodes; CVRE 1.17; RRE 0.48; variable importance/%: TC3 (100.0), IRREDDIFF (95.0), TC2 (95.0), IRREDRATIO (94.9), NDVI (94.9).
Layer 4 (120–180 cm): 16 terminal nodes; CVRE 0.25; RRE 0.04; variable importance/%: SSERIESN (100.0), SSUBGROUPN (62.1), SILTPCT (33.8), SANDPCT (24.4), TC1 (17.9).

(a) CVRE: cross-validated relative error. (b) RRE: resubstitution relative error. (c) Only the five most important predictor variables are shown.
Overall, spectral data that represent land use and vegetation (O factor), along with soil factors, had the most explanatory power to infer C : N ratios within the watershed. Predictions of N : P ratios based on committee trees were linked to spectral and topographic factors in the upper two layers (L1 and L2); a mix of factors (x–y coordinates, average precipitation, soil series, topographic and climatic variables) in L3; and soil taxonomic properties and saturated hydraulic conductivity in L4 (Table 9.6). In particular, spectral data that infer vegetation (O factor) were strongly related to N : P in L1. Tree predictions showed low to moderately large CVRE (0.19–0.43) and RRE (0.02–0.36). These findings suggest that SCORPAN environmental factors allow the inference of C : N and N : P ratios. Classification trees were used to predict CNP classes in layers 1, 2, 3 and 4 (Table 9.7; only L1 and L4 results are shown), with classification accuracies between 86.1% and 100.0% for L1 and between 51.4% and 100.0% for L4. C–N–P combinations with high C, N and P (class no. 8; 92 cases) dominated in L1, whereas low C, N and P (class no. 1; 64 cases) dominated in L4, followed by low C and N combined with high P (class no. 2; 35 cases) in L4. Tree-based models identified which environmental factors relate to CNP patterns within the soil profile. Interestingly, variable importances (in %) to predict CNP classes in L1 were highest in the following order: soil series (100.0%), tasselled cap index 3 (76.0%), tasselled cap index 4 (75.9%), Landsat ETM+ Band 4 (72.3%), principal component score 1 derived from Landsat ETM+ spectral data (72.1%), Landsat ETM+ Band 5 (72.0%), tasselled cap index 1 (72.0%),
CARBON, NITROGEN AND PHOSPHORUS GEOSPATIAL PATTERNS
Table 9.6: Committee tree models of N : P ratios (ARCing mode) in the four different layers (compare variable codes in Table 9.1) with tenfold cross-validation.

Layer 1 (0–30 cm): 24 terminal nodes; CVRE(a) 0.19; RRE(b) 0.02; variable importance/%(c): BD4 (100.0), PC2 (99.8), PC3 (92.5), NDVI (59.6), IRREDRATIO (59.6).
Layer 2 (30–60 cm): 17 terminal nodes; CVRE 0.28; RRE 0.03; variable importance/%: BD6 (100.0), PC1 (87.0), BD5 (85.7), ELEV (79.7), UPAREA (79.2).
Layer 3 (60–120 cm): 19 terminal nodes; CVRE 0.27; RRE 0.05; variable importance/%: XY (100.0), PPTHIST (88.0), SSERIESN (84.1), TEMPHIST (75.7), ELEV (58.2).
Layer 4 (120–180 cm): 12 terminal nodes; CVRE 0.43; RRE 0.36; variable importance/%: SSERIESN (100.0), SSUBGROUPN (61.6), SGREATGRPN (58.9), SSUBORDERN (51.9), KSAT (51.9).

(a) CVRE: cross-validated relative error. (b) RRE: resubstitution relative error. (c) Only the five most important predictor variables are shown.
Table 9.7: Predictions of C, N and P classes(a) using classification trees in layers 1 and 4, evaluated with tenfold cross-validation.

Class no. | Layer 1 (0–30 cm) counts, % correct | Layer 4 (120–180 cm) counts, % correct
1 | not found in layer 1 | 64, 79.7
2 | not found in layer 1 | 35, 51.4
3 | not found in layer 1 | 9, 100.0
4 | not modelled (only 1 case) | 3, 100.0
5 | not modelled (only 1 case) | 10, 100.0
6 | 2, 100.0 | not modelled (only 2 cases)
7 | 43, 86.1 | 4, 100.0
8 | 92, 88.1 | 8, 100.0

(a) Classes based on higher than median (grey box) or lower than median (white box) C, N and P, respectively.
soil subgroup (68.8%), soil great group (48.3%), suborder (47.6%), slope (39.3%), soil order (28.9%), and other SCORPAN factors with minor contributions. In contrast, variable importance to predict CNP classes in L4 followed a different sequence, with the highest contributions from soil series (100.0%), soil subgroup (68.2%), mean annual precipitation (50.6%), soil great group (37.0%), mean annual temperature (36.7%), x-coordinate (34.0%), soil suborder (33.6%), DRASTIC index of aquifer vulnerability (33.3%), Landsat ETM+ principal component score 3 (28.3%), principal component score 1 (20.9%), y-coordinate (20.5%), and minor contributions by other SCORPAN factors. These findings suggest that land use/land cover (O factor) and to some extent S factors control CNP in the topsoil, whereas S, C, P and O SCORPAN factors were closely linked to subsurface CNP patterns.
9.4
CONCLUSIONS
The Redfield ratio, C : N : P = 106 : 16 : 1, was initially developed for aquatic systems, has since been applied to various terrestrial systems, and was investigated within the SFRW as part of this study. We found distinct differences in soil C : N : P between layers and spatially across the watershed. The variability of C, N and P in attribute and geographic space modulated C : N, N : P and C : N : P ratios, which allowed us to characterise congruencies with, and imbalances from, globally derived average values. Spatial ratios of soil properties allowed us not only to infer the magnitude and extent of selected ecosystem processes (e.g. net mineralisation and immobilisation of N), but also to analyse their interrelationships. Regionalisation of biogeochemical ratios, taking the spatial variability and dependence structures of soil and environmental properties into account, can capture complex relationships among soil properties. This study demonstrated that soil C : N and N : P ratios and CNP classes in a terrestrial subtropical ecosystem are controlled by various environmental factor combinations that are depth dependent, with soil and anthropogenic factors (land use, vegetation) dominating the upper soil profile and a combination of SCORPAN factors (soil/geological ones in particular) dominating the lower soil profile. Imbalances in soil C : N and N : P ratios mapped across a large landscape may serve as indicators of the magnitude and extent of various ecosystem processes. Spatial ratios may also be useful for regional management, for example by providing information to enhance C sequestration or to reduce nutrient leaching. Anthropogenic factors, such as land-use management, and natural factors, such as P-rich geological material, need to be clearly distinguished.
ACKNOWLEDGEMENTS

Funding was provided by the United States Department of Agriculture (USDA) Nutrient Science for Improved Watershed Management Program for the project "GeoTemporal Estimation and Visualisation of Soil Properties in a Mixed-Use Watershed", and by the Natural Resources Conservation Service (NRCS) Cooperative Ecosystem
Studies Unit (CESU) for the project ‘‘Linking Experimental and Soil Spectral Sensing for Prediction of Soil Carbon Pools and Carbon Sequestration at the Landscape Scale’’. We would also like to thank Dr O. Sickman for his help with this research.
REFERENCES

Batjes, N.H. (1996). Total carbon and nitrogen in the soils of the world. European Journal of Soil Science. 47: 151–163.
Bengtsson, G., Bengtson, P. and Mansson, K.F. (2003). Gross nitrogen mineralization, immobilization, and nitrification rates as a function of soil C/N ratio and microbial activity. Soil Biology & Biochemistry. 35: 143–154.
Breiman, L. (1996). Bagging predictors. Machine Learning. 24: 123–140.
Breiman, L. (1998). Arcing classifiers. The Annals of Statistics. 26: 801–824.
Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C.J. (1984). Classification and Regression Trees. Chapman & Hall/CRC Press, Boca Raton, FL.
Bruland, G.L., Grunwald, S., Osborne, T.Z., Reddy, K.R. and Newman, S. (2006). Spatial distribution of soil properties in Water Conservation Area 3 of the Everglades. Soil Science Society of America Journal. 70: 1662–1676.
Bruland, G.L., Bliss, C.M., Grunwald, S., Comerford, N.B. and Graetz, D.G. (2008). Soil nitrate-nitrogen in forested versus non-forested land-uses in a mixed-use watershed. Geoderma. 148: 220–231.
CART (2002). Classification and Regression Trees: User's Guide. Salford Systems, San Diego, CA.
Cleveland, C.C. and Liptzin, D. (2007). C : N : P stoichiometry in soil: is there a "Redfield ratio" for microbial biomass? Biogeochemistry. 85: 235–252.
Corstanje, R., Grunwald, S., Reddy, K.R., Osborne, T.Z. and Newman, S. (2006). Assessment of the spatial distribution of soil properties in a northern Everglades marsh. Journal of Environmental Quality. 35: 938–949.
Florida Fish and Wildlife Conservation Commission (2003). Florida Vegetation and Land Cover Data Derived from Landsat ETM+ Imagery. FFWCC, Tallahassee, FL.
Goovaerts, P. (1997). Geostatistics for Natural Resources Evaluation. Oxford University Press, Oxford.
Grunwald, S. (ed.) (2006). Environmental Soil-Landscape Modeling: Geographic Information Technologies and Pedometrics. CRC Press, New York.
Grunwald, S. and Reddy, K.R. (2008). Spatial behavior of phosphorus and nitrogen in a subtropical wetland. Soil Science Society of America Journal. 72: 1174–1183.
Grunwald, S., Goovaerts, P., Bliss, C.M., Comerford, N.B. and Lamsal, S. (2006). Incorporation of auxiliary information in the geostatistical simulation of soil nitrate-nitrogen. Vadose Zone Journal. 5: 391–404.
Grunwald, S., Rivero, R.G. and Reddy, K.R. (2007). Understanding spatial variability and its application to biogeochemistry analysis. In: Sarkar, D., Datta, R. and Hannigan, R. (Eds). Environmental Biogeochemistry: Concepts and Case Studies. Elsevier, pp. 435–462.
Grunwald, S., Daroub, S.H., Lang, T.A. and Diaz, O.A. (2009). Tree-based modeling of complex interactions of phosphorus loadings and environmental factors. Science of the Total Environment. 407: 3772–3783.
Hungate, B.A., Dukes, J.S., Shaw, M.R., Luo, Y. and Field, C.B. (2003). Nitrogen and climate change. Science. 302: 1512–1513.
Jenny, H. (1941). Factors of Soil Formation. McGraw-Hill, New York.
Lamsal, S., Grunwald, S., Bruland, G.L., Bliss, C.M. and Comerford, N.B. (2006). Regional hybrid geospatial modeling of soil nitrate-nitrogen in the Santa Fe River Watershed. Geoderma. 135: 233–247.
McBratney, A.B., Mendonça Santos, M.L. and Minasny, B. (2003). On digital soil mapping. Geoderma. 117: 3–52.
McGroddy, M.E., Daufresne, T. and Hedin, L.O. (2004). Scaling of C : N : P stoichiometry in forests worldwide: implications of terrestrial Redfield-type ratios. Ecology. 85: 2390–2401.
National Climatic Data Center (NCDC), National Oceanic and Atmospheric Administration (2008). Monthly surface data. NCDC, Asheville, NC. Available at: http://www.ncdc.noaa.gov/oa/ncdc.html. Last verified 26 February 2008.
Nelson, W.L., Mehlich, A. and Winters, E. (1953). The development, evaluation and use of soil tests for phosphorus availability. Agronomy. 4: 153–188.
Randazzo, A.F. and Jones, D.S. (1997). The Geology of Florida. University Press of Florida, Gainesville, FL.
Reddy, K.R., White, J.R., Wright, A. and Chua, T. (1998). Influence of phosphorus loading on microbial processes in the soil and water column of wetlands. In: Reddy, K.R., O'Connor, G.A. and Schelske, C.L. (Eds). Phosphorus Biogeochemistry in Subtropical Ecosystems. Lewis Publishers, Boca Raton, FL, pp. 249–273.
Redfield, A. (1958). The biological control of chemical factors in the environment. American Scientist. 46: 205–221.
Rivero, R.G., Grunwald, S. and Bruland, G.L. (2007). Incorporation of spectral data into multivariate geostatistical models to map soil phosphorus variability in a Florida wetland. Geoderma. 140: 428–443.
Sterner, R.W. and Elser, J.J. (2002). Ecological Stoichiometry: The Biology of Elements from Molecules to the Biosphere. Princeton University Press, Princeton, NJ.
Stevenson, F.J. and Cole, M.A. (1999). Cycles of Soil: Carbon, Nitrogen, Phosphorus, Sulfur, Micronutrients. John Wiley & Sons, New York.
Webster, R. and Oliver, M.A. (2001). Geostatistics for Environmental Scientists. John Wiley & Sons, New York.
CHAPTER 10

Modelling of Solute Transport in Soils Considering the Effect of Biodegradation and Microbial Growth

Mohaddeseh M. Nezhad and Akbar A. Javadi
10.1
INTRODUCTION

In recent years, interest in understanding the mechanisms of and predicting contaminant transport through soils has increased dramatically because of growing evidence and public concern that the quality of the subsurface environment is being adversely affected by industrial, municipal and agricultural activities. Transport phenomena are encountered in almost every aspect of environmental engineering science. In assessing the environmental impacts of waste discharges, it is important to predict the impact of emissions on contaminant concentrations in nearby air and water (Nazaroff and Alvarez-Cohen, 2001). Contamination of groundwater is an issue of major concern in residential areas; it may occur as a result of spillages of hazardous chemicals, dumping of toxic waste, landfills, waste water, or industrial discharges (Depountis, 2000). Hazardous waste disposal is increasingly becoming one of the most serious problems affecting health and the environment. The movement of chemicals through soil to groundwater results in degradation of these resources. In many cases, serious human and stock health implications are associated with this form of pollution. One of the most challenging problems in modelling of solute transport in soils is how to effectively characterise and quantify the geometric, hydraulic and chemical properties of porous media. During the past two decades, attempts have been made to develop physical, analytical and numerical models for contaminant transport through soils considering the effects of different transport mechanisms. Abriola and Pinder (1985) developed a general model that addressed the multiphase flow problem and the transport of organic species. Celia and Bouloutas (1990) presented a numerical technique for the solution of the advection–diffusion type of transport equation. Li et al. (1999) developed a numerical model to simulate contaminant transport through
soils, taking into account the influence of various mechanisms of miscible contaminant transport, including advection, mechanical dispersion, molecular diffusion and adsorption. Karkuri and Molenkamp (1997) presented a formulation for groundwater flow and pollutant transport through multilayered saturated soils in one dimension. Sheng and Smith (2002) used a finite element method for solution of the advection–dispersion transport equation for multicomponent contaminants. Zhang and Brusseau (2004) developed a mathematical model incorporating the primary mass transfer processes that mediate the transport of immiscible organic liquid constituents in water-saturated, locally heterogeneous porous media. Yan and Vairavamoorthy (2004) developed a model to simulate water flow and contaminant transport through homogeneous, partially saturated media using the finite difference technique. Kacur et al. (2005) presented a numerical approximation scheme for the solution of contaminant transport with diffusion and adsorption by a finite volume method. Inclusion of chemical effects in solute transport simulation has continued to evolve, although numerous difficulties still limit development in this field. Rubin and James (1973) provided an early example of a mathematical transport model with equilibrium-controlled reactions. Rubin (1983) described the mathematical requirements for simulation of several classes of reaction, and noted the computational difficulties presented by various systems. Extensive research has been carried out on conceptual and mathematical modelling of the biodegradation of organic chemicals in the subsurface. Three different conceptual frameworks have been assumed for the shape, structure and type of growth of bacteria present in aquifers: biofilms, microcolonies, and Monod kinetics. The main aspects of and characteristics related to these assumptions have been presented by Baveye and Valocchi (1989). In Molz et al.
(1986), the microcolony assumption is discussed, and it is assumed that the bacteria grow in small discrete units of 10–100 bacteria per colony, attached to particle surfaces. Investigation of the structure of bacteria as a biofilm or microcolony can help in focusing on the mechanisms taking place at the pore scale. However, these assumptions are not useful or appropriate in modelling large-scale transport phenomena. A mathematical Monod kinetic model has been used by Brusseau et al. (1999). They investigated the effects of different parameters of the Monod kinetic model on the rate of biodegradation of contaminants by non-dimensionalisation of the transport governing equation coupled with the Monod kinetic model. Parkhurst et al. (1980) provided a mathematical basis for the incorporation of inorganic equilibrium reactions in solute transport analysis. A summary of the chemical processes in groundwater is provided by Gillham and Cherry (1982) and Cherry et al. (1984). Ahuja and Lehman (1983) and Snyder and Woolhiser (1985) presented experimental results that indicated that a more detailed description of chemical transport in soil and water was needed. A review of the coupling of transport simulation with both equilibrium-controlled and kinetic reactions is provided by Yeh and Tripathi (1989). In spite of the numerous mathematical models that have been developed to simulate the migration of pollutants in soils, most existing models still concentrate either on geochemical processes (Engesgaard and Kip, 1992; Walter et al., 1994) or on biological transformations (Clement et al., 1996; Kindred and Celia, 1989). Mironenko and Pachepsky (1998) simulated the accumulation of a chemical transported from soil to ponding water. They considered one-dimensional convective–diffusive solute transport in water and soil. Their results showed that the relative effect of diffusion on the accumulation of a solute in ponding water may be significant at infiltration rates that are not uncommon in agricultural practice. Wallach et al. (2001) introduced a mathematical model for transport of soil-dissolved chemicals by overland flow. The model can predict water flow and chemical transport in the soil profile prior to rainfall ponding and during the surface runoff event. The model was used to investigate the dependence of surface runoff pollution and its extent on the system hydrological parameters. Gao et al. (2001) presented a model for simulating the transport of chemically reactive components in groundwater systems. McGrail (2001) developed a numerical simulator to assist in the interpretation of complex laboratory experiments examining transport processes of chemical and biological contaminants subject to non-linear adsorption or source terms. The governing equations for the problem were solved by the method of finite differences. The sources for the surplus of soil nitrogen, such as fertilisation, farm waste disposal and improper agro-technical management, were studied in controlled field experiments (Stoicheva et al., 2001) and in real-life farming practice (Stoicheva et al., 2004). More recently, Stoicheva et al. (2006) reported results from an experimental investigation into the nitrogen distribution in geological materials underlying soils under natural and anthropogenic loading in different agro-climatic regions in Bulgaria. Osinubi and Nwaiwu (2006) evaluated the nature of sodium diffusion in compacted soils through laboratory adsorption and diffusion tests on three lateritic soils.
Recent studies have shown that current models and methods do not adequately describe the leaching of nutrients through soil, often underestimating the risk of groundwater contamination by surface-applied chemicals and overestimating the concentration of resident solutes (Stagnitti et al., 2001). This information is vital in the evaluation of existing theoretical models and the development of improved conceptual models of transport processes. High costs, large timescales and lack of control over the boundary conditions have hindered the development of field-scale experiments (Hellawell and Savvidou, 1994). Stagnitti et al. (2001) investigated the effects of chemical reactions on contaminant transport through a number of physical model tests on an undisturbed block of soil. Despite the development of conceptual and mathematical models for the effects of chemical reactions on contaminant transport, most existing numerical models do not include the effect of chemical reactions on the concentration and transport of contaminants in soil. Therefore the focus of this chapter is on the development, validation and application of a coupled transient finite element model for contaminant transport in unsaturated soils that incorporates the effects of chemical reactions and biodegradation on contaminant transport. In what follows, the main governing phenomena of miscible contaminant transport are considered, with particular emphasis on the effect of chemical reactions and biodegradation. The contaminant transport equation and the balance equations for flow of water and air are solved numerically using the finite element method, subject to prescribed initial and boundary conditions. The developed model is validated by comparing the model prediction results with those
reported in the literature. The model is also applied to a case study involving measurements of the effects of chemical reactions on the solute transport process. It is shown that the developed model is capable of predicting, with good accuracy, the variations of the contaminant concentration with time, considering the effect of chemical reactions.
10.2
GOVERNING EQUATIONS OF FLUID FLOW
The introduction outlined a broad range of issues that are of interest in relation to the transport of contaminants in soils. The problem becomes more complex when the soil is unsaturated. Unsaturated soil is a multiphase system, because at least two fluid phases are present: water and air. The governing equations that describe fluid flow and contaminant transport in the unsaturated zone will be presented in this section.
10.2.1
Modelling of water flow
The governing differential equation for water flow is based on the conservation of mass of the groundwater, leading to (Thomas and He, 1997)

$$c_{ww}\,\frac{\partial u_w}{\partial t} + c_{wa}\,\frac{\partial u_a}{\partial t} = \nabla\left(K_{ww}\,\nabla u_w\right) + \nabla\left(K_{wa}\,\nabla u_a\right) + J_w \qquad (10.1)$$

where

$$c_{ww} = c_{fw} + c_{vw}, \qquad c_{wa} = c_{fa} + c_{va}$$
$$c_{vw} = n S_a K_{fw}, \qquad c_{va} = n S_a K_{fa}$$
$$c_{fw} = n\,(\rho_w - \rho_v)\,\frac{\partial S_w}{\partial s}, \qquad c_{fa} = n\,(\rho_w - \rho_v)\,\frac{\partial S_w}{\partial s}$$
$$K_{ww} = \frac{\rho_w K_w}{\gamma_w} + \rho_w K_{vw}, \qquad K_{wa} = \rho_v K_a + \rho_w K_{va}$$
$$K_{fw} = \rho_0\,\frac{\partial h}{\partial \psi}\,\frac{\partial \psi}{\partial s}, \qquad K_{fa} = \rho_0\,\frac{\partial h}{\partial \psi}\,\frac{\partial \psi}{\partial s}$$
$$K_{vw} = \frac{D_{atms} V_v n}{\rho_w}\,K_{fw}, \qquad K_{va} = \frac{D_{atms} V_v n}{\rho_w}\,K_{fa}$$
$$J_w = \rho_w\,\nabla\!\left(K_w\,\nabla z\right)$$

in which $n$ is the porosity of the soil, $K_w$ is the conductivity of water [L T⁻¹], $K_a$ is the conductivity of air [L T⁻¹], $S_w$ is the degree of saturation of water, $S_a$ is the degree of saturation of air, $\rho_w$ is the density of water [M L⁻³], $\rho_v$ is the density of water vapour [M L⁻³], $\rho_0$ is the density of saturated soil vapour [M L⁻³], $s$ is the soil suction [M L⁻¹ T⁻²], $V_v$ is the mass flow factor, $u_w$ is the pore water pressure [M L⁻¹ T⁻²], $u_a$ is the pore air pressure [M L⁻¹ T⁻²], $D_{atms}$ is the molecular diffusivity of vapour through air, $\gamma_w$ is the unit weight of water [M L⁻² T⁻²], $\psi$ is the capillary potential, $h$ is the relative humidity, and $\nabla z$ is the unit vector oriented downwards in the direction of the force of gravity.
10.2.2
Modelling of air flow
The governing differential equation for air flow is based on the conservation of mass of the ground air, leading to (Thomas and He, 1997)

$$c_{aw}\,\frac{\partial u_w}{\partial t} + c_{aa}\,\frac{\partial u_a}{\partial t} = \nabla\left(K_{aw}\,\nabla u_w\right) + \nabla\left(K_{aa}\,\nabla u_a\right) + J_a \qquad (10.2)$$

where

$$c_{aw} = c_{aw1} + c_{aw2}, \qquad c_{aa} = c_{aa1} + c_{aa2}$$
$$c_{aw1} = n \rho_{da}\,(H_a - 1)\,\frac{\partial S_w}{\partial s}, \qquad c_{aa1} = n \rho_{da}\,(H_a - 1)\,\frac{\partial S_w}{\partial s}$$
$$c_{aw2} = n\,(S_a + H_a S_w)\,c_{daw}, \qquad c_{aa2} = n\,(S_a + H_a S_w)\,c_{daa}$$
$$c_{daw} = \frac{R_v}{R_{da}}\,K_{fw}, \qquad c_{daa} = \frac{1}{T R_{da}} - \frac{R_v}{R_{da}}\,K_{fa}$$
$$K_{aw} = \frac{H_a \rho_{da}}{\gamma_w}\,K_w, \qquad K_{aa} = K_a$$
$$J_a = H_a \rho_{da}\,\nabla\!\left(K_w\,\nabla z\right)$$

in which $H_a$ is Henry's volumetric coefficient of solubility, $\rho_{da}$ is the density of dry air [M L⁻³], $R_{da}$ is the specific gas constant for dry air, and $R_v$ is the specific gas constant for water vapour.
10.3
MECHANISMS OF CONTAMINANT TRANSPORT

10.3.1
Advection
Advection is the transport of material caused by the net flow of the fluid in which that material is suspended. Whenever a fluid is in motion, all contaminants in the flowing fluid, including both molecules and particles, are advected along with the fluid (Nazaroff and Alvarez-Cohen, 2001). The rate of contaminant transport that occurs by advection is given by the product of contaminant concentration c and the component of apparent groundwater velocity, such as v x , v y , etc. For two-dimensional cases with two components of contaminant transport in the x and y directions, the rate of contaminant transport due to advection will be (Javadi and Al-Najjar, 2007)
$$F_{x,\mathrm{advection}} = v_{wx}\,c \qquad (10.3\mathrm{a})$$
$$F_{y,\mathrm{advection}} = v_{wy}\,c \qquad (10.3\mathrm{b})$$

10.3.2
Diffusion
The process by which contaminants are transported by the random thermal motion of contaminant molecules is called diffusion (Yong et al., 1992). The rate of contaminant transport that occurs by diffusion is given by Fick's law. In terms of two components of the contaminant transported in the x and y directions, the rate of contaminant transport by diffusion will be (Javadi and Al-Najjar, 2007)

$$F_{x,\mathrm{diffusion}} = -D_m\,\frac{\partial c}{\partial x} \qquad (10.4\mathrm{a})$$
$$F_{y,\mathrm{diffusion}} = -D_m\,\frac{\partial c}{\partial y} \qquad (10.4\mathrm{b})$$

where $D_m$ is the molecular diffusion coefficient in the porous medium [L² T⁻¹].
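The advective and diffusive flux components can be evaluated directly from the expressions above. In the sketch below the concentration gradient is approximated with a central difference over two neighbouring points; the function names and numerical values are ours and purely illustrative.

```python
def advective_flux(v, c):
    """Advective mass flux: seepage velocity times concentration,
    per coordinate direction."""
    return v * c

def diffusive_flux(D_m, c_left, c_right, dx):
    """Diffusive mass flux by Fick's law, with the concentration
    gradient approximated by a central difference. The negative sign
    drives transport from high to low concentration."""
    grad_c = (c_right - c_left) / (2.0 * dx)
    return -D_m * grad_c

# Illustrative values only (velocity, concentrations, spacing are not from the text)
print(advective_flux(0.5, 20.0))               # → 10.0
print(diffusive_flux(1e-4, 18.0, 22.0, 0.5))   # → -0.0004
```

Note that the diffusive flux is negative here: the concentration increases in x, so diffusion carries mass in the negative x direction.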
10.3.3
Mechanical dispersion
Mechanical dispersion is a mixing or spreading process caused by small-scale fluctuations in groundwater velocity along the tortuous flow paths within individual pores. In terms of two components of contaminant transport in the x and y directions, the rate of contaminant transport by mechanical dispersion is given by

$$F_{x,\mathrm{dispersion}} = -D_{wxx}\,\frac{\partial (\theta_w c)}{\partial x} - D_{wxy}\,\frac{\partial (\theta_w c)}{\partial y} \qquad (10.5\mathrm{a})$$
$$F_{y,\mathrm{dispersion}} = -D_{wyx}\,\frac{\partial (\theta_w c)}{\partial x} - D_{wyy}\,\frac{\partial (\theta_w c)}{\partial y} \qquad (10.5\mathrm{b})$$

where $D_{wxx}$, $D_{wxy}$, $D_{wyx}$ and $D_{wyy}$ are the coefficients of the dispersion tensor [L² T⁻¹]. For the water phase, these coefficients can be computed from the following expressions (Javadi and Al-Najjar, 2007):

$$D_{wxx} = \alpha_{TW}\,|v_w|\,\delta_{xx} + (\alpha_{LW} - \alpha_{TW})\,\frac{v_{wx} v_{wx}}{|v|} + D_{mw,xx} \qquad (10.6\mathrm{a})$$
$$D_{wyy} = \alpha_{TW}\,|v_w|\,\delta_{yy} + (\alpha_{LW} - \alpha_{TW})\,\frac{v_{wy} v_{wy}}{|v|} + D_{mw,yy} \qquad (10.6\mathrm{b})$$
$$D_{wxy} = D_{wyx} = (\alpha_{LW} - \alpha_{TW})\,\frac{v_{wx} v_{wy}}{|v|} + D_{mw,xy} \qquad (10.6\mathrm{c})$$

where $\alpha_{TW}$ is the transverse dispersivity for the water phase, $\alpha_{LW}$ is the longitudinal dispersivity for the water phase, $\delta_{ij}$ is the Kronecker delta ($\delta_{ij} = 1$ when $i = j$, 0 otherwise), $|v|$ is the magnitude of the water velocity ($|v| = \sqrt{v_{wx}^2 + v_{wy}^2}$), $\theta_w$ is the volumetric water content, $D_{mw}$ is the coefficient of water molecular diffusion [L² T⁻¹], and $v_{wx}$ and $v_{wy}$ are the components of the water velocity, computed as $v_{wx} = v_x/\theta_w$ and $v_{wy} = v_y/\theta_w$. The amount of contaminant in each phase ($\alpha$) is denoted by the mass fraction, which is defined as

$$\omega_\alpha = \frac{\text{mass of contaminant in phase } \alpha}{\text{mass of phase } \alpha} \qquad (10.7)$$
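The dispersion-tensor expressions of Equations 10.6a–c can be evaluated directly. The sketch below assumes, for simplicity, a single isotropic molecular diffusion coefficient in place of the tensor components; all names and values are ours and illustrative.

```python
import math

def dispersion_tensor(v_x, v_y, alpha_L, alpha_T, D_m=0.0):
    """Coefficients of the hydrodynamic dispersion tensor: transverse
    dispersivity times speed on the diagonal, a (alpha_L - alpha_T)
    directional term scaled by velocity components, plus molecular
    diffusion (here a single isotropic D_m for simplicity)."""
    speed = math.hypot(v_x, v_y)
    if speed == 0.0:
        # No advection: only molecular diffusion remains
        return D_m, D_m, 0.0
    D_xx = alpha_T * speed + (alpha_L - alpha_T) * v_x * v_x / speed + D_m
    D_yy = alpha_T * speed + (alpha_L - alpha_T) * v_y * v_y / speed + D_m
    D_xy = (alpha_L - alpha_T) * v_x * v_y / speed
    return D_xx, D_yy, D_xy

# With flow aligned along x, D_xx reduces to alpha_L*|v| + D_m and
# D_yy to alpha_T*|v| + D_m, as expected for longitudinal/transverse mixing.
print(dispersion_tensor(1.0, 0.0, alpha_L=0.5, alpha_T=0.05))
```

The off-diagonal term vanishes when flow is aligned with a coordinate axis, which is the usual check that the tensor has been rotated correctly.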
The standard mass balance equation of contaminant transport can then be written as (Javadi et al., 2006)

$$\frac{\partial (\theta_w \rho_w \omega_w)}{\partial t} + F_{\mathrm{advection}} - F_{\mathrm{dispersion\text{–}diffusion}} + \lambda_w \theta_w \rho_w \omega_w = F_w \qquad (10.8)$$

where $\lambda_w$ is the reaction rate for water [T⁻¹], and $F_w$ is the source/sink term for water [M L⁻³ T⁻¹]. In the transport equation 10.8, the first term describes the change of contaminant mass in time; the second term represents the movement of contaminant due to advection; and the third term represents the effects of dispersion (which is assumed to be Fickian in form, with the dispersion tensor given by Bear (1979)). In the fourth term, the contaminants are assumed to be reactive with a decay rate of $\lambda$, and the right-hand-side term describes sources and sinks in the equation. Rather than use the contaminant mass fraction, it is often more convenient to use the volumetric concentration ($c_\alpha$), which is defined as

$$c_\alpha = \frac{\text{mass of contaminant in phase } \alpha}{\text{volume of phase } \alpha} = \rho_\alpha \omega_\alpha \qquad (10.9)$$

In most of the work presented in this chapter, the contaminant volumetric concentration in water will be used. The governing equation can be rewritten in terms of c by substituting the above relation into the governing equation:

$$\frac{\partial (\theta_w c_w)}{\partial t} + \left[\frac{\partial (v_{wx} c)}{\partial x} + \frac{\partial (v_{wy} c)}{\partial y}\right] - \frac{\partial}{\partial x}\left[\theta_w\left(D_{wxx}\,\frac{\partial c}{\partial x} + D_{wxy}\,\frac{\partial c}{\partial y}\right)\right] - \frac{\partial}{\partial y}\left[\theta_w\left(D_{wyy}\,\frac{\partial c}{\partial y} + D_{wyx}\,\frac{\partial c}{\partial x}\right)\right] + \lambda_w \theta_w c_w = F_w \qquad (10.10)$$

In the numerical model presented in this work, sorption of the contaminant onto the solid phase is considered in addition to advection, diffusion and dispersion.
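The chapter solves the full coupled system with the finite element method; purely as an illustration of the terms in the transport equation, a one-dimensional explicit finite-difference update with constant coefficients, linear retardation and first-order decay might look as follows. All names and parameter values are ours and illustrative, not the chapter's model.

```python
def step_1d_transport(c, v, D, lam, R, dx, dt):
    """One explicit finite-difference step of 1-D advection-dispersion
    with first-order decay and linear retardation:
        R dc/dt = -v dc/dx + D d2c/dx2 - lam*c
    Upwind differencing is used for advection (assumes v >= 0);
    both end values are held fixed (Dirichlet boundaries)."""
    new = c[:]
    for i in range(1, len(c) - 1):
        adv = -v * (c[i] - c[i - 1]) / dx               # upwind advection
        disp = D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx ** 2
        new[i] = c[i] + dt * (adv + disp - lam * c[i]) / R
    return new

# Constant-concentration inlet, initially clean column (illustrative values)
c = [1.0] + [0.0] * 20
for _ in range(200):
    c = step_1d_transport(c, v=0.01, D=1e-4, lam=1e-4, R=2.0, dx=0.05, dt=0.5)
```

The explicit update is only stable for sufficiently small time steps (Courant and diffusion-number limits), which is one reason implicit finite element formulations such as the one developed in this chapter are preferred for realistic problems.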
10.3.4
Chemical reaction
The transport of contaminant in soil may result in reactions occurring between the contaminant and the soil constituents. These include chemical, physical and biological processes (Yong et al., 1992). Most chemical reactions affecting solute transport can be divided broadly into two groups (Zheng and Bennett, 2002):

• those that are "sufficiently fast" and reversible, so that local equilibrium can be assumed (i.e., reactions that can be assumed to reach equilibrium in each locality within the residence times characterising the transport regime); and
• those that are "insufficiently fast" and irreversible, so that the local equilibrium assumption is not appropriate.

Under each group, Rubin (1983) made a further distinction between "homogeneous" reactions that take place within a single phase, and "heterogeneous" reactions that involve more than one phase. The heterogeneous classes of reactions are the surface reactions (i.e., sorption and ion exchange) and classical reactions (i.e., precipitation/dissolution, oxidation/reduction or redox, and complexation).

Adsorption

When a porous medium is saturated with water containing dissolved matter, it frequently happens that certain solutes, to one degree or another, are removed from
solution and immobilised in or on the solid matrix of the porous medium by electrostatic or chemical forces. Adsorption refers to the adherence of chemical species, primarily on the surface of the porous matrix. The main factors affecting the adsorption of pollutants to or from the solid are the physical and chemical characteristics of the pollutant considered, and of the surface of the solid (Li et al., 1999). The amount of sorption is generally dependent on the contaminant and the composition of the soil. Using the assumption that adsorption occurs only from the water to the solid phase, the equation for the water phase can be modified to include adsorption (Javadi et al., 2006): @ ðŁw cw Þ @ ðŁs rs K d cw Þ þ þ =ðvw cw Þ =ðŁw Dw =cw Þ þ ºw Łw cw ¼ F w @t @t
(10:11)
where Kd is the distribution coefficient. In the case of sorption, the equation for the water phase is modified to include a retardation factor. The principal assumption used in deriving a retardation factor is that water is the wetting fluid, so that the air phase does not have any contact with the solid phase (Zheng and Bennett, 2002). Therefore the equation can be rewritten as

∂(R θw cw)/∂t + ∇·(vw cw) − ∇·(θw Dw ∇cw) + λw θw cw = Fw
(10.12)
where R is the retardation coefficient, R = 1 + θs ρs Kd/θw; ρs is the density of the solid phase; and θs is the volumetric content of the solid phase.

Linear (first-order decay) chemical reaction

The effects of chemical reactions on solute transport are generally incorporated in the advection, diffusion–dispersion and adsorption equations through additional terms (Zheng and Bennett, 2002). Consider a chemical reaction such as

aA + bB ⇌ rR + qQ
(10.13)

where a, b, r and q are the stoichiometric coefficients of the reaction. A general kinetic rate law for species A can be expressed as (Zheng and Bennett, 2002)

∂cA/∂t = −λ cA^n1 cB^n2 + γ cR^m1 cQ^m2
(10.14)
where cA, cB, cR and cQ are the concentrations of the reactant species A and B and the product species R and Q, respectively; λ and γ are the rate constants for the forward and reverse reactions, respectively; and n1, n2 and m1, m2 are empirical coefficients. The sum of n1 and n2 defines the order of the forward reaction, and the sum of m1 and m2 defines the order of the reverse reaction. Equation 10.14 expresses the rate of change of species A as the sum of the rate at which it is consumed in the forward reaction and the rate at which it is generated in the reverse reaction. Certain chemical reactions, such as radioactive decay, hydrolysis and some forms of biodegradation, can be characterised as first-order, irreversible processes (Zheng and Bennett, 2002). For this type of reaction, Equation 10.14 simplifies to

∂c/∂t = −λc
(10.15)
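Read operationally, Equations 10.14 and 10.15 can be sketched in a few lines of Python. This is a minimal illustration: the function names and the parameter values used in the check are assumptions of mine, not values from the chapter.

```python
import math

def rate_A(cA, cB, cR, cQ, lam, gam, n1, n2, m1, m2):
    """General kinetic rate law (Equation 10.14): species A is consumed
    by the forward reaction and regenerated by the reverse reaction."""
    return -lam * cA**n1 * cB**n2 + gam * cR**m1 * cQ**m2

def first_order_rate(c, lam):
    """First-order irreversible decay (Equation 10.15): the special case
    n1 = 1, n2 = 0 with no reverse reaction (gam = 0)."""
    return rate_A(c, 1.0, 0.0, 0.0, lam, 0.0, 1, 0, 1, 1)

def euler_decay(c0, lam, t_end, n_steps):
    """Forward-Euler integration of dc/dt = -lam*c; with a small step it
    should track the analytic solution c(t) = c0*exp(-lam*t)."""
    dt = t_end / n_steps
    c = c0
    for _ in range(n_steps):
        c += dt * first_order_rate(c, lam)
    return c
```

For λ = 0.5 and c0 = 1, euler_decay(1.0, 0.5, 1.0, 20000) agrees with exp(−0.5) to about four decimal places, which is the kind of check that motivates the analytic first-order special case.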
Chemical changes are the result of chemical reactions. All chemical reactions involve a change in substances and a change in energy: during a chemical reaction, energy is either released (an exothermic reaction) or absorbed (an endothermic reaction).

Monod kinetic model

A common reaction in soils is biotransformation: the alteration of a chemical compound brought about by an organism. Some microorganisms possess an astonishing catabolic versatility, degrading or transforming toxic compounds into non-toxic ones, and biological processes therefore play a major role in the removal of contaminants and pollutants from the environment. The Monod kinetic equation is a well-established expression for describing the degradation of a particular substrate by microorganisms:

u = umax c/(Kc + c)
(10.16)

where u is the specific growth rate of a microbial population [T⁻¹]; umax is the maximum rate of growth of a microbial population when adequate substrates exist in the aquifer [T⁻¹]; c is the substrate concentration [M L⁻³]; and Kc is a constant representing the substrate concentration at which the rate of growth is half the maximum rate [M L⁻³]. Chemical components are utilised by microorganisms as nutrients, and the change in the substrate concentration can be written as (Zheng and Bennett, 2002)

Rc = ∂c/∂t = −(umax/Y) Mt c/(Kc + c)
(10.17)

where Mt is the concentration of the microbial population responsible for the biodegradation process [M L⁻³], and Y is a yield coefficient, defined as the ratio of the biomass formed per unit mass of substrate utilised. The Monod kinetic term is added to the advective, dispersive and adsorptive terms in the governing equations of the reactants present in the domain:

∂(R θw cw)/∂t + ∇·(vw cw) − ∇·(θw Dw ∇cw) + Rc = Fw
(10.18)
During the biodegradation process, the microbial population grows by utilisation and degradation of the substrate, so the variation of the biomass concentration is estimated as

∂Mt/∂t = (u − d) Mt
(10.19)

where u is given by Equation 10.16, and d is the specific death rate of the microbial population [T⁻¹]. Therefore, the biomass concentration is calculated from

∂Mt/∂t = umax [c/(Kc + c)] Mt − d Mt
(10.20)

To account for the effect of biomass growth, the governing equations of solute transport (Equation 10.18) and biomass growth (Equation 10.20) must be solved simultaneously.
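The Monod expressions (Equations 10.16, 10.17 and 10.20) translate directly into code. The sketch below (Python; all parameter values are illustrative assumptions, not the chapter's) also integrates the substrate and biomass equations simultaneously with a simple Euler loop, as the text requires:

```python
def monod_growth(c, u_max, K_c):
    """Specific growth rate u (Equation 10.16); u = u_max/2 at c = K_c."""
    return u_max * c / (K_c + c)

def substrate_rate(c, M_t, u_max, K_c, Y):
    """Substrate utilisation R_c = dc/dt (Equation 10.17)."""
    return -(u_max / Y) * M_t * c / (K_c + c)

def biomass_rate(c, M_t, u_max, K_c, d):
    """Net biomass growth (Equation 10.20): Monod growth minus decay."""
    return (monod_growth(c, u_max, K_c) - d) * M_t

def grow_and_degrade(c0, M0, u_max, K_c, Y, d, t_end, n_steps):
    """Solve Equations 10.17 and 10.20 simultaneously (forward Euler)."""
    dt = t_end / n_steps
    c, M = c0, M0
    for _ in range(n_steps):
        dc = substrate_rate(c, M, u_max, K_c, Y)
        dM = biomass_rate(c, M, u_max, K_c, d)
        c, M = max(c + dt * dc, 0.0), M + dt * dM
    return c, M
```

With, say, c0 = 10, M0 = 1, umax = 0.5, Kc = 2, Y = 0.4 and d = 0.05, the substrate falls while the biomass rises, which is the coupled behaviour the two equations describe.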
10.4 NUMERICAL SOLUTION

10.4.1 Numerical solution of governing equations for water and air flow
The governing differential equations for water flow (Equation 10.1) and air flow (Equation 10.2), as defined in Section 10.2, have two variables, uw and ua; these are the primary unknowns. The primary unknowns can be approximated using the shape function approach as

uw = ûw = Σ(s=1..n) Ns uws
(10.21)

ua = ûa = Σ(s=1..n) Ns uas
(10.22)

where Ns is the shape function, uws is the nodal pore water pressure, uas is the nodal pore air pressure, and n is the number of nodes in each element. Replacing the primary unknowns with their shape function approximations, Equations 10.1 and 10.2 can be written as

∇·(Kww ∇ûw) + ∇·(Kwa ∇ûa) + Jw − Cww ∂ûw/∂t − Cwa ∂ûa/∂t = Rw
(10.23)

∇·(Kaw ∇ûw) + ∇·(Kaa ∇ûa) + Ja − Caw ∂ûw/∂t − Caa ∂ûa/∂t = Ra
(10.24)
where Rw and Ra are the residual errors introduced by the approximation functions. A finite element scheme employing the weighted residual approach is applied to the spatial terms: the residual error represented by Equation 10.23 or 10.24 is minimised by weighting the equation and integrating it over the spatial domain Ω^e. Spatial discretisation of the governing differential equation for water flow can then be written as

Cww ∂uws/∂t + Cwa ∂uas/∂t + Kww uws + Kwa uas = fw
(10.25)

where
Cww = Σ(e=1..n) ∫Ω^e Nᵀ Cww N dΩ^e

Cwa = Σ(e=1..n) ∫Ω^e Nᵀ Cwa N dΩ^e

Kww = Σ(e=1..n) ∫Ω^e ∇Nᵀ (Kww ∇N) dΩ^e

Kwa = Σ(e=1..n) ∫Ω^e ∇Nᵀ (Kwa ∇N) dΩ^e

and

fw = −Σ(e=1..n) ∫Ω^e ∇Nᵀ (Kw ρw ∇z) dΩ^e − Σ(e=1..n) ∫Γ^e Nᵀ (ρw v̂wn + ρw v̂vd + ρw v̂va) dΓ^e
in which v̂wn is the approximated water velocity normal to the boundary surface, v̂vd is the approximated diffusive vapour velocity normal to the boundary surface, v̂va is the approximated pressure vapour velocity normal to the boundary surface, and Γ^e is the element boundary surface. Similarly, spatial discretisation of the governing differential equation for air flow leads to

Caw ∂uws/∂t + Caa ∂uas/∂t + Kaw uws + Kaa uas = fa
(10.26)

where

Caw = Σ(e=1..n) ∫Ω^e Nᵀ Caw N dΩ^e

Caa = Σ(e=1..n) ∫Ω^e Nᵀ Caa N dΩ^e

Kaw = Σ(e=1..n) ∫Ω^e ∇Nᵀ (Kaw ∇N) dΩ^e

Kaa = Σ(e=1..n) ∫Ω^e ∇Nᵀ (Kaa ∇N) dΩ^e

and

fa = −Σ(e=1..n) ∫Ω^e ∇Nᵀ (Kw ρda Ha ∇z) dΩ^e − Σ(e=1..n) ∫Γ^e Nᵀ ρda (v̂fn + v̂an) dΓ^e
in which v̂fn is the approximated velocity of free dry air normal to the boundary surface, and v̂an is the approximated velocity of dissolved dry air normal to the boundary surface. The spatially discretised equations for the coupled flow of water and air, given above, can be combined in matrix form as

[Kww  Kwa; Kaw  Kaa] {uws; uas} + [Cww  Cwa; Caw  Caa] {u̇ws; u̇as} − {fw; fa} = 0
(10.27)

where u̇ws = ∂uws/∂t and u̇as = ∂uas/∂t. Time discretisation of Equation 10.27 is achieved here by application of a fully implicit mid-interval backward-difference algorithm. Applying a finite difference scheme (Stasa, 1985) to Equation 10.27 gives

(A^(n+1/2) + B^(n+1/2)/Δt) φ^(n+1) + C^(n+1/2) − (B^(n+1/2)/Δt) φ^n = 0
(10.28)

where

A = [Kww  Kwa; Kaw  Kaa],  B = [Cww  Cwa; Caw  Caa],  C = −{fw; fa}

and

φ = {uws; uas}

Equation 10.28 can be rearranged to give

φ^(n+1) = (A^(n+1/2) + B^(n+1/2)/Δt)⁻¹ ((B^(n+1/2)/Δt) φ^n − C^(n+1/2))
(10.29)
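In practice, each time step of Equation 10.29 is a single linear solve. A minimal NumPy sketch (the matrices and coefficients in the sanity check are arbitrary illustrations, not the assembled flow matrices):

```python
import numpy as np

def implicit_step(A, B, C, phi_n, dt):
    """One fully implicit backward-difference step (Equation 10.29):
    phi^{n+1} = (A + B/dt)^{-1} ((B/dt) phi^n - C)."""
    lhs = A + B / dt
    rhs = (B / dt) @ phi_n - C
    return np.linalg.solve(lhs, rhs)
```

As a sanity check, with A = λI, B = I and C = 0, each step reduces to backward Euler for dφ/dt = −λφ, i.e. φⁿ⁺¹ = φⁿ/(1 + λΔt), which decays monotonically as an unconditionally stable implicit scheme should.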
10.4.2 Numerical solution of the contaminant transport governing equation
The primary unknowns of Equation 10.18 can be approximated using the shape function approach as

θc = θ̂c = Σ(s=1..n) Ns (θc)s

c = ĉ = Σ(s=1..n) Ns cs

where cs is the nodal contaminant concentration, and n is the number of nodes per element. In the present work a triangular element is used (n = 3). Replacing the primary unknowns with the shape function approximations above, and employing the Galerkin weighted residual approach to minimise the residual error introduced by this approximation, the discretised global finite element equation for a single component of the contaminant takes the form

M dc/dt + Hc + F = 0
(10.30)
where

M = Σ(e=1..n) ∫a^b (θc/Δt) Aij

H = Σ(e=1..n) ∫a^b (vc Bij + Dc Eij + λc Aij)

F = Σ(e=1..n) [N (2vc − D ∂c/∂x − D ∂c/∂y + λ)]a^b

with

Aij = ∫ N N dx dy

Bij = ∫ N (∂N/∂x + ∂N/∂y) dx dy

Eij = ∫ (∂N/∂x · ∂N/∂x + ∂N/∂y · ∂N/∂y) dx dy
Applying a finite difference scheme (Stasa, 1985) to Equation 10.30 results in

M [(θc)^(n+1) − (θc)^n]/Δt + H [(1 − γ) c^n + γ c^(n+1)] + F^(n+1) = 0
(10.31)
where Δt is the time step, γ is a weighting parameter between 0 and 1, and n and n + 1 denote the time levels t^n and t^(n+1) = t^n + Δt. The solution of Equations 10.28 and 10.31 gives the distribution of contaminant concentrations at various points within the soil and at different times, taking into account the interaction between the flow of air and water and the various mechanisms of contaminant transport.
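Equation 10.31 likewise reduces to one linear solve per step. The sketch below assumes, for simplicity, that θ is constant over the step (so that it can be absorbed into M); the matrices in the check are stand-ins, not the assembled transport matrices:

```python
import numpy as np

def theta_step(M, H, F, c_n, dt, gamma):
    """One step of Equation 10.31 with constant theta:
    (M/dt + gamma*H) c^{n+1} = (M/dt - (1 - gamma)*H) c^n - F.
    gamma = 1 is fully implicit; gamma = 0.5 is Crank-Nicolson."""
    lhs = M / dt + gamma * H
    rhs = (M / dt - (1.0 - gamma) * H) @ c_n - F
    return np.linalg.solve(lhs, rhs)
```

For a scalar test problem with M = 1, H = λ and F = 0, the fully implicit choice gives cⁿ⁺¹ = cⁿ/(1 + λΔt), while γ = 0.5 gives the second-order Crank–Nicolson update.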
10.4.3 Numerical solution of the Monod kinetic equations
The fourth-order Runge–Kutta method is used to solve the Monod kinetic equations incorporated into the transport governing equations. In this method, for a differential equation

∂Gt/∂t = f(Gt, t)
(10.32)

the solution at the current step can be expressed as

Gt(i + 1) = Gt(i) + (Δtb/6)(k1 + 2k2 + 2k3 + k4)
(10.33)

where Gt(i + 1) is the current solution, Gt(i) is the previous solution, i is the step number, and Δtb is the step length in t. The coefficients in Equation 10.33 are defined as

k1 = f[t(i), Gt(i)]
(10.34)

k2 = f[t(i) + 0.5Δtb, Gt(i) + 0.5Δtb k1]
(10.35)

k3 = f[t(i) + 0.5Δtb, Gt(i) + 0.5Δtb k2]
(10.36)

k4 = f[t(i) + Δtb, Gt(i) + Δtb k3]
(10.37)
Computations of the bioreaction and degradation equations (i.e., Equations 10.19 and 10.17) are performed in parallel. The time step length for the Runge–Kutta method (Δtb) is much smaller than that for the non-reactive transport equation (Δt). The operator splitting technique has been used to solve the transport equation coupled to biodegradation. The general procedure comprises the following steps:

1 Solve the transport equation without the reaction term, using a standard finite element method, to obtain the substrate concentration at time t + Δt.
2 Use the current concentrations as the initial condition for the bioreaction equations, solved with the Runge–Kutta method, to obtain the biomass concentration and the new substrate concentration.
3 Use the new substrate concentration as the initial condition for the solution of the transport equation over the next time step.
10.5 NUMERICAL RESULTS

The developed finite element model is validated by application to a contaminant transport problem from the literature (Li et al., 1999). The model is then applied to a case study to predict the transport of contaminants at a site in Australia (Stagnitti et al., 2001).
10.5.1 Example 1
In this example, the capability of the developed finite element model to predict the effects of different mechanisms of contaminant transport, including the combined effects of advection, diffusion–dispersion and adsorption, is examined. One-dimensional transport of a contaminant through a bar, 30 m long and 1 m high, is considered. The bar is subjected to an initial Gaussian contaminant distribution of amplitude c = 1 centred at x = 5.0 m (see Figure 10.1), and the boundary conditions are c = 0 on the left and right boundaries and zero flux on the top and bottom boundaries. A steady and uniform intrinsic velocity field with vw = 1 m s⁻¹ is assumed over the bar. In the first case, only transport of the contaminant by advection is considered. Figure 10.2 shows the distribution of contaminant concentration along the bar at times t = 0, 10 and 20 s. The results of the developed model are in close agreement with those reported by Li et al. (1999) for this example. To illustrate the capability of the model to predict the effects of other transport mechanisms, a second case involving the combined effect of advection and dispersion is considered, and the results are shown in Figure 10.3. In this case, the same initial Gaussian distribution of concentration as in the previous case is used, together with a longitudinal dispersivity for the water phase of αLw = 0.5 and a reaction rate for the water phase of λw = 0. Figure 10.4 shows the distribution of contaminant concentration along the bar due to the combined effect of advection and dispersion at time t = 10 s. Again, the results of the model are in very good agreement with those reported by Li et al. (1999) for this case. To investigate the effect of adsorption, a third case involving the combined effect of advection, dispersion and adsorption is considered. The results for this case are shown in Figure 10.5.
Again, the same initial Gaussian distribution of concentration as in the previous cases is considered, together with additional adsorption parameters, including a volumetric solid content of θs = 2.7 and a

Figure 10.1: Problem definition (a bar 30 m long and 1 m high, with the initial concentration peak c = 1 near its left end).
Figure 10.2: Concentration distributions at t = 0, 10 and 20 s (FEM model and Li et al., 1999), plotted as contaminant concentration against coordinate x/m.
Figure 10.3: Contaminant concentration distributions at t = 0, 5, 10 and 20 s for advection and diffusion–dispersion, plotted as contaminant concentration against coordinate x/m.
distribution coefficient Kd = 0.01. Figure 10.5 shows the distribution of contaminant concentration along the bar for two cases, with and without consideration of adsorption. As expected, adsorption causes an additional decrease in concentration.
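For this kind of verification, the one-dimensional advection–dispersion equation with a Gaussian initial condition has a closed-form solution against which the numerical profiles can be checked. A sketch (the initial spread σ0 is an assumed value; the chapter does not quote one):

```python
import math

def gaussian_adv_disp(x, t, v, D, x0=5.0, sigma0=1.0):
    """Exact solution of dc/dt + v dc/dx = D d2c/dx2 on an infinite
    domain, for an initial unit-amplitude Gaussian centred at x0 with
    spread sigma0: the peak advects at speed v while dispersion
    spreads it and lowers its amplitude."""
    var = sigma0**2 + 2.0 * D * t
    return (sigma0 / math.sqrt(var)) * math.exp(-(x - x0 - v * t)**2 / (2.0 * var))
```

With v = 1 m s⁻¹ and D = 0 the peak simply translates (amplitude 1 at x = 15 m when t = 10 s), matching the pure-advection behaviour of Figure 10.2; any D > 0 lowers and widens the peak, as in Figures 10.3–10.5.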
Figure 10.4: Concentration distributions at t = 10 s for advection and diffusion–dispersion (FEM model and Li et al., 1999).
Figure 10.5: Concentration distributions at t = 10 s, with and without adsorption (FEM model: adv–dif–disp and adv–dif–disp–ads).
10.5.2 Case study
The developed finite element model is also applied to a case study. The case study involves a physical test, conducted by Stagnitti et al. (1998), to study the combined effect of advection, diffusion–dispersion, adsorption and chemical reactions on
contaminant transport. In this experiment, a large, undisturbed soil core (42.5 cm × 42.5 cm in plan and 40 cm deep) (see Figure 10.6) was extracted from a farm located about 300 km west of Melbourne, Australia. The farm was used primarily for beef cattle grazing, and superphosphate had not been applied for approximately 25 years (Stagnitti et al., 1998). In the experiment, a multiple sample percolation system (MSPS) was used to sample moisture and chemicals leaching from the soil core. The

Figure 10.6: (a) View of the multiple sample percolation system (MSPS) (from Stagnitti et al., 1998); (b) location of the collection wells in the MSPS, numbered 1–25 in a 5 × 5 grid (from Stagnitti et al., 1998); (c) problem definition (MSPS and funnels).
system consisted of a metal-alloy baseplate shaped into 25 equal-sized collection wells (funnels). The soil core was irrigated by a purpose-built drip irrigation system. The area coverage, speed and direction of irrigation were adjustable, and were controlled so that the system delivered a constant and uniform application of water and soluble nutrients to the soil surface. The soil core was irrigated with distilled water for several months prior to the application of the nutrient solution. An irrigation rate of 2.82 mL min⁻¹, comparable with the mean daily rainfall for the region, was applied uniformly to the surface of the soil core. A solution containing 0.1 mol of NaCl, 0.01 mol of KNO3 and 0.1 mol of KH2PO4 was prepared, and a total of 1967 mL of this solution was irrigated onto the soil surface. Following application of this solution, distilled water was irrigated onto the soil surface at the same rate for 18 days. Leachate solutions were analysed for chloride (Cl⁻), nitrate (NO3⁻) and phosphate (PO4³⁻). The daily leachate concentrations collected from each of the 25 individual collection wells were aggregated to give a total daily concentration for the entire soil core, for each ion, for each day of the experiment. Samples were collected every 12 h from wells that drained more than 50 mL; wells with less than 50 mL were left until at least the next collection period, 12 h later. The developed finite element model was used to simulate the percolation of the solution through the soil block in the experiment. The results of the analysis are shown in Figures 10.7, 10.8 and 10.9 in the form of variations of the concentrations of chloride, nitrate and phosphate in the soil with time. The figures also include the experimental data from the tests conducted by Stagnitti et al. (2001).
The initial solute concentrations of chloride, nitrate and phosphate in the irrigation solution were 6186.10 mg L⁻¹, 273.65 mg L⁻¹ and 4724.10 mg L⁻¹, respectively (Stagnitti et al., 2001). Using these
Figure 10.7: Breakthrough curves for chloride (Cl⁻): FEM model and experimental data, plotted as relative concentration c/c0 against time/days.
Figure 10.8: Breakthrough curves for nitrate (NO3⁻): experimental results; FEM model with no sorption and no chemical reaction; and Monod kinetic model.
Figure 10.9: Breakthrough curves for phosphate (PO4³⁻): experiment; Monod model; and linear first-order model.
values, and assuming that Cl⁻ behaves as a conservative component, a chemical reaction coefficient of λ = 0 (no reaction) and a retardation factor of R = 1 (no adsorption) were used in the finite element model, together with a dispersion coefficient of D = 0.006 m² day⁻¹ (Stagnitti et al., 2001). Figure 10.7 shows the solute breakthrough curve for chloride (Cl⁻) at the bottom of the MSPS. It is shown that the
results of the developed numerical model are in close agreement with the experimental results reported by Stagnitti et al. (2001). Two predicted breakthrough curves for nitrate (NO3⁻) obtained using the finite element model are plotted in Figure 10.8, together with the experimental data. In one case, chemical reaction and adsorption are ignored by assuming R = 1 and an initial biomass concentration of M0 = 0 in the model. In the other case, the effects of adsorption and chemical reaction on the distribution of nitrate are considered, with R = 1.06 used in the finite element model (the same value as used by Stagnitti et al., 2001), indicating slight adsorption. As there is not enough information about field measurements of the Monod kinetic model parameters for this case, representative values of these parameters were estimated from the literature: an initial biomass concentration of M0 = 20 mg L⁻¹, a maximum biomass growth rate of umax = 5 × 10⁻⁴ hour⁻¹, and a half-saturation constant of Kc = 1 mg L⁻¹ are assumed. Figure 10.8 shows the variation of the relative concentration of nitrate (NO3⁻) with time obtained by the numerical model for the above two cases, together with the experimental results. Figure 10.9 shows the breakthrough curve predicted by the numerical model for PO4³⁻, together with the experimental results. Here, degradation of PO4³⁻ is simulated by both a first-order decay model and a Monod kinetic model, and the results of both are presented. First-order decay is an approximation of the full Monod kinetic model for simulating biodegradation; however, this simple model does not consider the effect of microbial growth on the reaction. A value of 8.117 is used for the retardation coefficient R. This value, which was suggested by Stagnitti et al. (2001), indicates very strong adsorption. A first-order decay rate of λ = 0.3 hour⁻¹ is used for the linear biodegradation model, while a half-saturation constant of Kc = 0.1 mg L⁻¹ is used for the Monod kinetic model.
The value used for λ represents a linear biodegradation coefficient equivalent to the Monod kinetic model parameters. The results show that the numerical model predicted the changes in concentration of PO4³⁻ with time reasonably well, considering the scatter in the experimental data. The Monod kinetic biodegradation model shows more degradation of the solute than the linear first-order reaction model, because the Monod model accounts for biomass growth: an increase in biomass concentration increases the degradation of the component. In this experiment, three different chemicals with different degrees of retardation and reaction were considered, and the model predictions are in close agreement with the experimental results. This illustrates the robustness of the developed finite element model in simulating the effects of chemical reactions on contaminant transport processes.
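The comparison described above — a first-order model matched to the initial Monod rate, against the full Monod model with growing biomass — can be reproduced with a short numerical experiment. All parameter values below are illustrative assumptions, not the chapter's calibrated ones:

```python
def compare_decay_models(c0=1.0, M0=0.2, u_max=0.5, K_c=0.1, Y=0.4,
                         d=0.0, t_end=10.0, n_steps=10000):
    """Integrate first-order decay and Monod kinetics side by side,
    choosing lam so that both start with the same degradation rate."""
    lam = (u_max / Y) * M0 / (K_c + c0)   # equal initial rates
    dt = t_end / n_steps
    c_lin, c_mon, M = c0, c0, M0
    for _ in range(n_steps):
        c_lin -= dt * lam * c_lin                          # first order
        dc = -(u_max / Y) * M * c_mon / (K_c + c_mon)      # Eq. 10.17
        dM = (u_max * c_mon / (K_c + c_mon) - d) * M       # Eq. 10.20
        c_mon = max(c_mon + dt * dc, 0.0)
        M += dt * dM
    return c_lin, c_mon
```

Because the biomass grows while it degrades the substrate, the Monod run ends with less substrate remaining than the rate-matched first-order run — the behaviour seen in Figure 10.9.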
10.6 CONCLUSIONS
One of the most challenging problems in modelling solute transport in soils is how to effectively characterise and quantify the effects of chemical reactions on the transport process. Recent studies have shown that the current models of contaminant transport analysis are not able to describe the effects of chemical reactions adequately. Furthermore, the effect of chemical reactions on the fate and transport of contaminants
is not included in many of the existing numerical models for contaminant transport. This chapter presented a coupled transient finite element model for predicting the flow of air and water and contaminant transport in unsaturated soils, including the effect of chemical and biochemical reactions. The model is capable of simulating various phenomena governing miscible contaminant transport in soils, including advection, dispersion, diffusion, adsorption and biochemical reaction effects. The mathematical framework and the numerical implementation of the model were presented. The model was validated by application to a test case from the literature, and was then applied to simulation of a physical model test involving the transport of contaminants in a block of soil, with the aim of studying the effects of biochemical reactions on contaminant concentration and transport. In the experiments, three different chemicals with different degrees of retardation and reaction were considered. The numerical results illustrated the performance of the presented model in simulating the effects of different phenomena governing the transport of contaminants in soils. The finite element model performed well in predicting the transport of contaminants through the soil with and without inclusion of the effects of chemical reactions and biodegradation. Comparison of the results of the numerical model with the experimental results shows that the model is capable of predicting the effects of chemical reactions and biodegradation with very high accuracy.
REFERENCES Abriola, L. and Pinder, G.F. (1985). A multiphase approach to the modeling of porous media contaminated by organic compounds. 1: Equation development. Water Resources Research. 21: 11–18. Ahuja, L.R. and Lehman, O.R. (1983). The extent and nature of rainfall–soil interaction in the release of soluble chemicals to runoff. Journal of Environmental Quality. 12: 34–40. Bear, J. (1979). Dynamics of Fluids in Porous Media. Elsevier, Amsterdam. Baveye, P. and Valocchi, A.J. (1989). An evaluation of mathematical models of the transport of biologically reacting solutes in saturated soils and aquifers. Water Resources Research, 25(6): 1414–1421. Brusseau, M.L., Xie, L.H. and Li, L. (1999). Biodegradation during contaminant transport in porous media. 1: Mathematical analysis of controlling factors. Journal of Contaminant Hydrology. 37: 269–293. Celia, M.A. and Boluloutas, E.T. (1990). A general mass-conservative numerical solution for the unsaturated flow equation. Water Resources Research. 26(7): 1483–1496. Cherry, J.A., Gillham, R.W. and Barker, J.F. (1984). Contaminant in groundwater: chemical processes. In: Groundwater Contamination. National Academy Press, Washington, DC, pp. 46–64. Clement, T.P., Hooker, B.S. and Skeen, R.S. (1996). Numerical modeling of biologically reactive transport near nutrient injection well. Journal of Environmental Engineering. 122(9): 833– 839. Depountis, N. (2000). Geotechnical centrifuge modeling of capillary phenomena and contaminant migration in unsaturated soils. PhD dissertation, University of Cardiff, UK. Engesgaard, P. and Kip, K.L. (1992). A geochemical model for redox-controlled movement of mineral fronts in ground-water flow systems: a case of nitrate removal by oxidation of pyrite. Water Resources Research. 28(10): 2829–2843. Gao, H., Vesovic, V., Butler, A. and Wheater, H. (2001). Chemically reactive multi-component
transport simulation in soil and groundwater. 2: Model demonstration. Environmental Geology. 41: 280–284. Gillham, R.W. and Cherry, J.A. (1982). Contaminant migration in saturated unconsolidated geologic deposits. In: Narisimham, T.N. (Ed.). Recent Trends in Hydrogeology, Geological Society of America Special Paper 189, pp. 31–62. Hellawell, E.E. and Sawidou, C. (1994). A study of contaminant transport involving density driven flow and hydrodynamic clean up. Proceedings of the Centrifuge ’94 Conference, University of Cambridge, pp. 357–362. Javadi, A.A. and Al-Najjar, M.M. (2007). Finite element modeling of contaminant transport in soils including the effect of chemical reactions. Journal of Hazardous Materials. 143(3): 690–701. Javadi, A.A., Al-Najjar, M.M. and Elkassas, A.S.I. (2006). Numerical modeling of contaminant transport in unsaturated soil. Proceedings of the 5th International Congress on Environmental Geotechnics, Cardiff, pp. 1177–1184. Kacur, J., Malengier, B. and Remesikova, M. (2005). Solution of contaminant transport with equilibrium and non-equilibrium adsorption. Journal of Computer Methods in Applied Mechanics and Engineering. 194: 479–489. Karkuri, H.M. and Molenkamp, F. (1997). Analysis of advection dispersion of pollutant transport through layered porous media. Proceedings of the International Conference on Geoenvironmental Engineering, Cardiff, pp. 193–198. Kindred, J.S. and Celia, M.A. (1989). Contaminant transport and biodegradation. 2: Conceptual model and test simulations. Water Resources Research. 25(6): 1149–1159. Li, X., Cescotto, S. and Thomas, H.R. (1999). Finite element method for contaminant transport in unsaturated soils. Journal of Hydrologic Engineering. 4(3): 265–274. McGrail, B.P. (2001). Inverse reactive transport simulator (INVERTS): an inverse model for contaminant transport with nonlinear adsorption and source terms. Environmental Modeling and Software. 16: 711–723. Mironenko, E.V. and Pachepsky, Y.A. (1998). 
Estimating transport of chemicals from soil to ponding water. Journal of Hydrology. 208: 53–61. Molz, F.J., Widdowson M.A. and Benefield, L.D. (1986). Simulation of microbial growth dynamics coupled to nutrient and oxygen transport in porous media. Water Resources Research. 22(8): 1207–1216. Nazaroff, W.W. and Alvarez-Cohen, L. (2001). Environmental Engineering Science. John Wiley & Sons, New York. Osinubi, K.J. and Nwaiwu, C.M.O. (2006). Sodium diffusion in compacted lateritic soil. Proceedings of the 5th International Congress on Environmental Geotechnics, Cardiff, pp. 1224–1231. Parkhurst, D.L., Thorstenson, D.C. and Plummer, L.N. (1980). PHREEQC: A Computer Program for Geochemical Calculations. US Geological Survey Water Resources Investigations Report 80-96. Rubin, J. (1983). Transport of reacting solutes in porous media: relation between mathematical nature of problem formulation and chemical nature of reactions. Water Resources Research. 35(8): 2359–2373. Rubin, J. and James, R.V. (1973). Dispersion affected transport of reacting solutes in saturated porous media: Galerkin method applied to equilibrium controlled exchange in unidirectional steady water flow. Water Resources Research. 9(5): 1332–1356. Sheng, D. and Smith, D.W. (2002). 2D finite element analysis of multicomponent contaminant transport through soils. International Journal of Geomechanics. 2(1): 113–134. Snyder, I.K. and Woolhiser, D.A. (1985). Effect of infiltration on chemical transport into overland flow. Transactions of the American Society of Agricultural Engineers. 28: 1450–1457. Stagnitti, F., Li, L., Barry, A., Allinson, G., Parlange, J.Y., Steenhuis, T. and Lakshmanan, E. (2001). Modeling solute transport in structured soils: performance evaluation of the ADR and TRM models. Mathematical and Computer Modeling Journal. 34: 433–440. Stagnitti, F., Sherwood, J., Allinson, G., Evans, L., Allinson, M., Li, L. and Phillips, I. (1998). 
An investigation of localised soil heterogeneities on solute transport using a multisegment percolation system. New Zealand Journal of Agricultural Research. 41: 603–612.
Stasa, L.F. (1985). Applied Finite Element Analysis for Engineers. Holt, Rinehart and Winston, New York. Stoicheva, D., Kercheva, M. and Stoichev, D. (2001). NLEAP water quality applications in Bulgaria. In: Shaffer, M., Ma, L. and Hansen S. (Eds), Modeling Carbon and Nitrogen Dynamics for Soil Management. Lewis Publishers, Boca Raton, FL, pp. 333–345. Stoicheva, D., Kercheva, M. and Koleva, V. (2004). Assessment of nitrate leaching under different nitrogen supply of irrigated maize. In: Bieganowski, A., Jozefaciuk, G. and Walczak, R. (Eds), Modern Physical and Physicochemical Methods and their Application in Agroecological Research. Institute of Agrophysics, PAS, Lublin, pp. 208–218. Stoicheva, D., Kercheva, M. and Stoichev, D. (2006). Nitrogen distribution in vadose zone under some Bulgarian soils. Proceedings of the 5th International Congress on Environmental Geotechnics, Cardiff, pp. 1256–1263. Thomas, H.R. and He, Y. (1997). A coupled heat–moisture transfer theory for deformable unsaturated soil and its algorithmic implementation. International Journal of Numerical Methods in Engineering. 40: 3421–3441. Wallach, R., Grigorin, G. and Rivlin, J. (2001). A comprehensive mathematical model for transport of soil-dissolved chemicals by overland flow. Journal of Hydrology. 247: 85–99. Walter, A.L., Frind, E.O., Blowes, D.W., Ptacek, C.J. and Molson, J.W. (1994). Modelling of multicomponent reactive transport in groundwater. 1: Model development and evaluation. Water Resources Research. 30: 3137–3148. Yan, J.M. and Vairavmoorthy, K. (2004). 2D numerical simulation of water flow and contaminant transport in unsaturated soil. Proceedings of the 6th International Conference on Hydroinformatics, Singapore, pp. 1–8. Yeh, G.T. and Tripathi, V.S. (1989). A critical evaluation of recent development of hydro-geochemical transport model of reactive multi-chemical components. Water Resources Research. 25: 93– 108. Yong, R.N., Mohamed, A.M.O. and Warkentin, B.P. (1992). 
Principles of Contaminant Transport in Soils. Elsevier Science, Amsterdam. Zhang, Z. and Brusseau, M.L. (2004). Nonideal transport of reactive contaminants in heterogeneous porous media. 7: Distributed-domain model incorporating immiscible-liquid dissolution and rate-limited sorption/desorption. Journal of Contaminant Hydrology. 74: 83–103. Zheng, C. and Bennett, G.D. (2002). Applied Contaminant Transport Modeling. Papadopoulos & Associates, Bethesda, MD.
PART IV Forest Ecosystem and Footprint Modelling
CHAPTER 11

Flux and Concentration Footprint Modelling

T. Vesala, N. Kljun, Ü. Rannik, J. Rinne, A. Sogachev, T. Markkanen, K. Sabelfeld, Th. Foken and M.Y. Leclerc
11.1 INTRODUCTION

There has been an explosion of direct flux measurement sites since the 1990s, with the advent of robust, affordable, high-quality instrumentation and the implementation and widespread use of the eddy covariance technique. The most direct and most common technique used to measure trace gas fluxes is eddy covariance (EC). The EC technique facilitates direct turbulent flux measurements while only marginally affecting the natural gas transfer between the surface and the air. It also provides a tool to estimate exchange over larger areas than, for example, a single measuring chamber. However, whereas a chamber confines a known source area for its measurement, the area represented by EC measurements is a complex function of the observation level, surface roughness length and canopy structure, together with meteorological conditions (wind speed and direction, turbulence intensity and atmospheric stability). Most often, fluxes are calculated using ½–1 hour time averages, and different flux values represent different source areas, although the difference between the areas corresponding to successive values may be small. In the case of an inhomogeneous surface, knowledge of both the source area and the source strength is needed to interpret the measured signal. The footprint defines the field of view of the flux/concentration sensor, and reflects the influence of the surface on the measured turbulent flux (or concentration). Strictly speaking, a source area is the fraction of the surface (mostly upwind) containing effective sources and sinks contributing to a measurement point (Kljun et al., 2002). The footprint is then defined as the relative contribution from each element of the surface area source/sink to the measured vertical flux or concentration (Leclerc
and Thurtell, 1990; Schuepp et al., 1990). Functions describing the relationship between the spatial distribution of surface sources/sinks and a signal are called the footprint function or the source weight function, as shown in Horst and Weil (1992, 1994); see also Schmid (1994) for details. The fundamental definition of the footprint function is given by the integral equation of diffusion (Wilson and Swaters, 1991; see also Pasquill and Smith, 1983):

    η(x) = ∫_Ω f(x, x̂) Q(x̂) dx̂    (11.1)

where η is the quantity being measured at location x (note that x is a vector), f is the footprint function, and Q(x̂) is the source emission rate/sink strength in the surface vegetation volume Ω. η can be the concentration or the vertical eddy flux, and f is then the concentration or flux footprint function, respectively. The determination of the footprint function is not a straightforward task, and several theoretical approaches have been derived over the previous decades. They can be classified into four categories:

• analytical models;
• Lagrangian stochastic particle dispersion models;
• large-eddy simulations; and
• ensemble-averaged closure models.
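In practice, Equation 11.1 is applied in discrete form: the measured signal is a footprint-weighted sum of the source strengths of all surface cells. The short sketch below illustrates this superposition with a purely invented, plume-shaped weighting function and an invented source field; it is not the output of any particular footprint model.

```python
import numpy as np

# Discrete form of Equation 11.1: the measured quantity (here a vertical flux)
# is the footprint-weighted sum of the cell source strengths,
#   eta = sum_ij f_ij * Q_ij * dA.
# The grid, weights and source strengths are illustrative numbers only.
dx = dy = 10.0                     # cell size (m)
x = np.arange(5.0, 500.0, dx)     # along-wind cell centres upwind of the mast
y = np.arange(-95.0, 100.0, dy)   # crosswind cell centres
X, Y = np.meshgrid(x, y, indexing="ij")

# A smooth, plume-shaped stand-in for a flux footprint function (m^-2),
# normalised so that it integrates to 1 over the domain.
f = X * np.exp(-X / 80.0) * np.exp(-(Y / (0.3 * X + 5.0)) ** 2)
f /= f.sum() * dx * dy

Q = np.full_like(f, 0.2)          # uniform source strength (units m^-2 s^-1)
Q[X > 250.0] = 0.8                # a stronger source patch far upwind

eta = np.sum(f * Q) * dx * dy     # the measured signal (units of Q)
print(f"measured flux = {eta:.3f}")
```

Because the weights are normalised, the measured flux lands between the two source strengths, closer to the value of the patch that dominates the footprint.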
Additionally, parameterisations of some of these approaches have been developed, simplifying the original algorithms for use in practical applications (e.g., Horst and Weil, 1992, 1994; Hsieh et al., 2000; Kljun et al., 2004; Schmid, 1994). The criterion of a 100 : 1 fetch to measurement height ratio was long held as the golden rule guiding internal boundary layer estimation, and it is still used as a rule of thumb to crudely approximate the source area of flux measurements over short canopies in daytime conditions. However, the unsatisfactory nature of the 100 : 1 ratio and the related footprint predictions was first explicitly discussed almost 20 years ago by Leclerc and Thurtell (1990). This study was also the first application of the Lagrangian stochastic approach, with stability effects included in footprint estimation. Subsequently, simple analytical models, such as that by Schuepp et al. (1990), became widely used and were integrated into EC software. With the addition of realistic velocity profiles and stability dependence, Horst and Weil's (1992, 1994) analytical solution further expanded the scope of this approach. Whereas their 1992 model can be applied only numerically, the 1994 model provides an approximate analytical solution. The latter, later made two-dimensional by Schmid (1997), has been widely used, thanks to the additional insight it provides for experiments over patchy surfaces. Further development of footprint analysis was provided by Leclerc et al. (1997), who applied a more complicated numerical large-eddy simulation model to the footprint problem; this approach was further developed by Cai and Leclerc (2007) and Steinfeld et al. (2008). Somewhat surprisingly, the older numerical modelling approach for the turbulent boundary layer, the closure model, was first discussed in relation to footprint estimation by Sogachev et al. in 2002. A
Table 11.1: Overview of the most important footprint models; models are analytical unless otherwise noted (adapted from Foken, 2008, and updated).

Pasquill (1972): First model description, concept of effective fetch
Gash (1986): Neutral stratification, concept of cumulative fetch
Schuepp et al. (1990): Use of source areas, but neutral stratification and averaged wind velocity
Leclerc and Thurtell (1990): Lagrangian footprint model
Horst and Weil (1992): One-dimensional footprint model
Schmid (1994, 1997): Separation of footprints for scalars and fluxes
Leclerc et al. (1997): LES model for footprints
Baldocchi (1997): Footprint model within forests
Rannik et al. (2000, 2003): Lagrangian model for forests
Kormann and Meixner (2001): Analytical model with exponential wind profile
Kljun et al. (2002): Three-dimensional Lagrangian model for various turbulence stratifications with backward trajectories
Sogachev and Lloyd (2004): Boundary layer model with 1.5-order closure
Sogachev et al. (2004): Footprint estimates for a non-flat topography
Strong et al. (2004): Footprint model with reactive chemical compounds
Cai and Leclerc (2007): Footprints from backward and forward in-time particle simulations driven with LES data
Klaassen and Sogachev (2006): Footprint estimates for a forest edge
Vesala et al. (2008a): Footprint estimates for a complex urban surface
Steinfeld et al. (2008): Footprint model with LES-embedded particles
thorough overview of the development of the footprint concept is given in Schmid (2002), with Foken and Leclerc (2004) and Vesala et al. (2008a) providing more recent information on the subject. Table 11.1 lists the most important studies on footprint modelling.
11.2 BASICS

The footprint problem essentially deals with the calculation of the relative contribution to the mean concentration or flux at a fixed point in the presence of an arbitrary given source of a compound. The source area naturally depends on the measurement height and wind direction. The footprint is also sensitive to both atmospheric stability and surface roughness, as first pointed out by Leclerc and Thurtell (1990). This stability dependence was further investigated by Kljun et al. (2002), who compared crosswind-integrated footprints predicted by a three-dimensional Lagrangian simulation (Figure 11.1). Here, the measurement height and roughness length were fixed at 50 m and 0.05 m, respectively, whereas the friction velocity, vertical velocity scale, Obukhov length and boundary layer height were varied to represent strongly convective, forced convective, neutral and stable conditions. It can be seen that the peak location is closer
[Figure 11.1: Crosswind-integrated footprint for flux measurements for four different cases of stability (strongly convective, forced convective, neutral and stable conditions). Measurement height, 50 m; roughness length, 0.05 m. The figure plots the flux footprint function (per metre) against the along-wind distance from the sensor (0–1500 m), with the wind blowing towards the sensor.]
to the receptor and less skewed in the upstream direction with increasingly convective conditions. In unstable conditions, the turbulence intensity is high, resulting in rapid upward transport of any compound, and hence a shorter travel distance/time. Typically, the location of the footprint peak ranges from a few times the measurement height (unstable) to a few dozen times (stable). Note also the small contribution of downwind turbulent diffusion in the convective cases. Concentration footprints tend to be longer (Figure 11.2 – see colour insert). In terms of the Lagrangian framework, this can be explained as follows. The flux footprint value over a horizontal area element is proportional to the difference in the numbers of particles (passive tracers) crossing the measurement level in the upward and downward directions. Far from the measurement point, the numbers of upward and downward crossings of imaginary particles or fluid elements across an imaginary x–y plane tend to be about the same; the upward and downward movements are thus counterbalanced, decreasing the fractional contribution of those source elements to the measured flux. In contrast, each crossing contributes positively to the concentration footprint, independently of the direction of the trajectory. This increases the footprint value at distances further from the receptor location. In the lateral direction, stability influences footprints in a similar fashion. Mathematically, the surface area influencing the measured flux extends to infinity, and thus one must always define a percentage level for the source area (Schmid, 1994). Often the 50%, 75% or 90% source areas contributing to a point flux measurement are considered. Figures 11.2 and 11.3 (see colour insert) show the 75% level source areas for convective and strongly convective conditions.
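The percentage-level idea can be made concrete with the simple neutral-stratification model of Schuepp et al. (1990), whose cumulative crosswind-integrated footprint is often quoted in the approximate form CNF(x) = exp(−U zm/(k u* x)). The sketch below inverts this expression for the 50%, 75% and 90% levels; the parameter values are illustrative, and this is not the model behind Figures 11.2 and 11.3.

```python
import numpy as np

# Percentage-level source areas from a simple analytical footprint model:
# the approximate cumulative crosswind-integrated footprint of Schuepp et
# al. (1990) for neutral conditions, CNF(x) = exp(-U*zm/(k*ustar*x)).
# All parameter values below are illustrative.
k = 0.4          # von Karman constant
zm = 10.0        # measurement height (m)
U = 4.0          # mean wind speed at zm (m s-1)
ustar = 0.4      # friction velocity (m s-1)

c = U * zm / (k * ustar)            # length scale of the model (m)

def source_area_distance(P):
    """Upwind distance within which fraction P of the flux originates."""
    return c / np.log(1.0 / P)

for P in (0.50, 0.75, 0.90):
    print(f"{P:.0%} source area extends to ~{source_area_distance(P):.0f} m upwind")

x_peak = c / 2.0                    # maximum of f(x) = (c/x**2) * exp(-c/x)
print(f"footprint peak at ~{x_peak:.0f} m")
```

The rapidly growing distances for higher percentage levels illustrate why the source area formally extends to infinity as the level approaches 100%.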
Finally, one should mention the fundamental mathematical difference between concentration and flux footprint functions. It turns out that the flux footprint is not the Green's function of the flux transport equation, and the expression with the relevant unit source involves some unclosed terms, leading to additional closure assumptions (Finnigan, 2004). This may result in complex behaviour of the flux footprint function: in particular, it may even be negative for a complex, convergent flow over a hill (Finnigan, 2004). In a horizontally homogeneous shear flow, the flux footprint f does satisfy 1 ≥ f ≥ 0, as is always the case for the concentration footprint. The vertical distribution of the source/sink can also lead to anomalous behaviour (e.g., Markkanen et al., 2003). In that case the flux footprint in fact represents a combined footprint function that is a source-strength-weighted average of the footprints of the individual layers. Because of the principle of superposition, the combined function may become negative if one or more of the layers has a source strength that is opposite in sign to the net flux between vegetation and atmosphere (Lee, 2003). The combined function is no longer a footprint function in the sense of Equation 11.1, and we suggest that it be called the (normalised) flux contribution function (see also Markkanen et al., 2003).
11.3 LAGRANGIAN STOCHASTIC TRAJECTORY APPROACH

Stochastic Lagrangian models describe the diffusion of a scalar by means of a stochastic differential equation, a generalised Langevin equation. The model determines the evolution of a Lagrangian particle in space and time, the increment of its trajectory being given as the sum of a deterministic drift term and a random term. The drift term must be specified for each particular model (Thomson, 1987). The Lagrangian stochastic approach can be applied to any turbulence regime, thus allowing footprint calculations for various atmospheric boundary layer flow regimes. For example, in the convective boundary layer, turbulence statistics are strongly non-Gaussian and, for a realistic dispersion simulation, a non-Gaussian trajectory model has to be applied. However, most Lagrangian trajectory models fulfil the main criterion for the construction of Lagrangian stochastic models, the well-mixed condition (Thomson, 1987), for only one given turbulence regime. As one of the few exceptions, Kljun et al. (2002) presented a footprint model based on a trajectory model valid for a wide range of atmospheric boundary layer stratifications. For an overview of Lagrangian trajectory models for different flow types, including non-Gaussian turbulence, see Wilson and Sawford (1996). It should be noted, however, that the Lagrangian stochastic approach is rigorously justified only in the case of stationary isotropic turbulent flow. Even in the case of homogeneous but anisotropic turbulence the justification problem remains unsolved: in particular, there are several different stochastic models that satisfy the well-mixed condition (Sabelfeld and Kurbanmuradov, 1998; Thomson, 1987). This is often called the uniqueness problem (for details, see the discussion in Kurbanmuradov and Sabelfeld, 2000; Kurbanmuradov et al., 1999, 2001). In addition, the mean rotation of the velocity fluctuation vector has been proposed as an additional criterion for
selecting the Lagrangian stochastic model (Wilson and Flesch, 1997), but this criterion does not define the unique model (Sawford, 1999). The Lagrangian stochastic method is nevertheless very convenient in footprint application: once the form of the parameterisation is chosen, the stochastic Langevin-type equation is solved by a very simple scheme (e.g. Sabelfeld and Kurbanmuradov, 1990; Sawford, 1985; Thomson, 1987). The approach needs only a one-point probability density function (pdf) of the Eulerian velocity field, and is efficient in numerical calculations provided that the simulation scheme and stochastic estimator are properly constructed. The Lagrangian stochastic trajectory model, together with appropriate simulation methods and corresponding estimators for concentration or flux footprints, is usually merged into a Lagrangian footprint model. For a detailed overview of the estimation of concentrations and fluxes by the Lagrangian stochastic method, the concentration and flux footprints in particular, see Kurbanmuradov et al. (2001).
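As a minimal illustration of such a scheme, the sketch below integrates a one-dimensional generalised Langevin equation for stationary, homogeneous Gaussian turbulence and checks the well-mixed condition numerically: an initially uniform particle distribution must stay uniform. All parameter values are illustrative choices, not taken from any of the studies cited above.

```python
import numpy as np

# Minimal 1-D Lagrangian stochastic (generalised Langevin) model for
# stationary, homogeneous Gaussian turbulence, used only to illustrate
# the well-mixed condition.
rng = np.random.default_rng(1)

n = 5000            # number of particles
h = 100.0           # depth of the mixing domain (m)
sigma_w = 0.5       # std of vertical velocity (m s-1)
T_L = 20.0          # Lagrangian integral time scale (s)
dt = 1.0            # time step (s), small compared with T_L
steps = 2000

z = rng.uniform(0.0, h, n)          # well-mixed initial positions
w = rng.normal(0.0, sigma_w, n)     # Gaussian initial velocities

for _ in range(steps):
    # Langevin equation: drift towards zero velocity plus random forcing
    dW = rng.normal(0.0, np.sqrt(dt), n)
    w += -(w / T_L) * dt + np.sqrt(2.0 * sigma_w**2 / T_L) * dW
    z += w * dt
    # perfect reflection at both boundaries (acceptable for Gaussian,
    # homogeneous turbulence; cf. the discussion of Wilson and Flesch, 1993)
    low, high = z < 0.0, z > h
    z[low], w[low] = -z[low], -w[low]
    z[high], w[high] = 2.0 * h - z[high], -w[high]

counts, _ = np.histogram(z, bins=10, range=(0.0, h))
print("particles per bin (expected ~500):", counts)
```

After 100 integral time scales, the particle counts per height bin remain statistically indistinguishable from uniform, which is exactly what the well-mixed condition demands.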
11.3.1 Forward and backward models
The conventional approach to using a Lagrangian model for footprint calculation is to release particles at the surface point source and track their trajectories downwind of this source towards the measurement location (e.g., Horst and Weil, 1992; Leclerc and Thurtell, 1990; Rannik et al., 2000). Particle trajectories and particle vertical velocities are sampled at the measurement height. Alternatively, it is possible to calculate the trajectories of a Lagrangian model in a backward time frame (Flesch, 1996; Flesch et al., 1995; Kljun et al., 2002; Thomson, 1987). In this case, the trajectories are initiated at the measurement point itself and tracked backwards in time, with a negative time step, from the measurement point to any potential surface source. The particle touchdown locations and touchdown velocities are sampled, and the mean concentration and mean flux at the measurement location are calculated. On theoretical grounds, the forward and backward footprint estimates are equivalent (Flesch et al., 1995). However, certain numerical errors, in particular those arising from trajectory reflection at the surface in the case of backward simulation, must be avoided (for details, see Cai et al., 2008). The backward estimators for concentration and flux do not assume homogeneity or stationarity of the turbulence field, and the calculated trajectories can be used directly without a coordinate transformation. Therefore, if inhomogeneous pdfs of the particle velocities are applied, backward Lagrangian footprint models hold the potential to be applied efficiently over inhomogeneous terrain.
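The forward procedure can be sketched as follows: particles are released at a surface source, advected downwind, and the net (upward minus downward) crossings of the measurement level are accumulated into a crosswind-integrated flux footprint estimate. The profile parameterisations below (logarithmic wind, constant sigma_w and a floored, linearly growing Lagrangian time scale) are illustrative simplifications, not those of any model cited above.

```python
import numpy as np

rng = np.random.default_rng(7)

# --- illustrative neutral surface-layer profiles -------------------------
k, ustar, z0, zm = 0.4, 0.4, 0.05, 10.0
sigma_w = 1.3 * ustar       # constant, so no extra well-mixed drift terms

def U(z):
    """Logarithmic mean wind profile (m s-1)."""
    return (ustar / k) * np.log(np.maximum(z, z0) / z0)

def T_L(z):
    """Lagrangian time scale (s); the floor keeps dt << T_L near the ground."""
    return np.maximum(0.5 * z / sigma_w, 1.0)

# --- forward simulation: particles released at the surface source --------
n, dt, steps = 5000, 0.1, 4000
x = np.zeros(n)
z = np.full(n, z0 + 0.01)
w = rng.normal(0.0, sigma_w, n)

edges = np.arange(0.0, 1510.0, 10.0)
net = np.zeros(edges.size - 1)       # net (up - down) crossings of level zm

for _ in range(steps):
    tl = T_L(z)
    w += -(w / tl) * dt + np.sqrt(2.0 * sigma_w**2 * dt / tl) * rng.normal(0.0, 1.0, n)
    z_new = z + w * dt
    x += U(z) * dt
    up = (z < zm) & (z_new >= zm)    # crossings of the measurement level
    down = (z >= zm) & (z_new < zm)
    for mask, sign in ((up, 1.0), (down, -1.0)):
        idx = np.searchsorted(edges, x[mask], side="right") - 1
        ok = (idx >= 0) & (idx < net.size)
        np.add.at(net, idx[ok], sign)
    z = z_new
    refl = z < z0                    # perfect reflection at the ground
    z[refl], w[refl] = 2.0 * z0 - z[refl], -w[refl]

f = net / (n * 10.0)                 # flux footprint estimate (m-1)
x_mid = 0.5 * (edges[:-1] + edges[1:])
print(f"peak at ~{x_mid[np.argmax(f)]:.0f} m, cumulative {f.sum() * 10.0:.2f}")
```

With these settings the footprint peaks several measurement heights upwind, and the cumulative footprint over the first 1500 m stays below unity, as expected for a finite domain and simulation time.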
11.3.2 Advantages and disadvantages of Lagrangian models
The benefits of Lagrangian models include their capability to consider both Gaussian and non-Gaussian turbulence. While flow within the surface layer is nearly Gaussian, non-Gaussianity characterises flow fields of both the canopy layer and the convective mixed layer. Another benefit of Lagrangian stochastic models over analytical ones is their applicability in near-field conditions, that is, in conditions in which fluxes of
constituents are disconnected from their local gradients, thus providing a proper description for within-canopy dispersion. This makes it possible to locate trace gas sources/sinks within a canopy. Baldocchi (1997), Rannik et al. (2000, 2003), Finnigan (2004) and Mölder et al. (2004) studied the qualitative effects of canopy turbulence on the footprint function. In the case of tall vegetation, the footprint depends primarily on two factors: canopy turbulence, and the source/sink levels inside the canopy. These factors become of particular relevance for observation levels close to the treetops (Lee, 2003; Markkanen et al., 2003; Rannik et al., 2000). Furthermore, Göckede et al. (2007) found a strong influence of the turbulent characteristics of the different flow-coupling regimes within the forest on the footprint results. Lagrangian stochastic models are not uniquely defined in three-dimensional flow fields. In one dimension, however, the solution is unique. Also, these models require a predefined wind field. Thus, as a weakness of these models, their performance depends on how well they are constructed for a particular flow type, and on the quality of the description of the wind statistics. However, the description of wind statistics inside a canopy becomes uncertain, owing to a poor understanding of the stability dependence of the canopy flow, as well as of the Lagrangian correlation time. Regarding the parameterisation of the Kolmogorov constant C0, it has been shown that Lagrangian stochastic model results are sensitive to the absolute value of the constant (Mölder et al., 2004; Rannik et al., 2003). Poggi et al. (2008) showed that C0 may vary non-linearly inside the canopy, but the Lagrangian stochastic model predictions were not sensitive to gradients of C0 inside the canopy.
The wind statistics necessary for Lagrangian stochastic footprint simulations originate from similarity theory, from experimental data, or from the output of a flow model capable of producing wind statistics. Combined with closure model results, the Lagrangian stochastic approach has been applied to study the influence of transitions in surface properties on the footprint function. The first attempts were made by Luhar and Rao (1994) and Kurbanmuradov et al. (2003), who derived the turbulence field of the two-dimensional flow over a change in surface roughness using a closure model, and performed Lagrangian simulations to evaluate the footprint functions. Hsieh and Katul (2009) applied a Lagrangian stochastic model in combination with second-order closure modelling to study the footprints over an inhomogeneous surface with a step change in surface roughness, humidity and temperature, and observed a good correspondence with field measurements of fluxes. The Lagrangian stochastic approach was also combined with large-eddy simulation of convective boundary layer turbulence to infer concentration footprints (Cai and Leclerc, 2007; Steinfeld et al., 2008). In this case, the Lagrangian stochastic simulations were performed for subgrid-scale turbulent dispersion. Although Lagrangian stochastic models can be applied in combination with Eulerian closure or large-eddy simulation to obtain numerical efficiency, the long computing times, caused by the large number of trajectories required to produce statistically reliable results, are a weakness of such footprint models in comparison with simpler analytical models. To overcome this, Hsieh et al. (2000) proposed an analytical model derived from Lagrangian model results. More recently, a simple parameterisation based on a Lagrangian footprint model was proposed by Kljun et al.
(2004). This parameterisation allows the determination of the footprint from atmospheric variables that are usually measured during flux observation programmes. There are features that specifically characterise either forward or backward models. The central benefit of backward models over their forward counterparts is their ability to estimate footprints for both inhomogeneous and non-stationary flow fields, whereas the applicability of forward models is restricted by the so-called "inverse plume assumption". Application of backward models, however, involves numerical details that need to be properly solved. Under some conditions, the method can be sensitive to the trajectory initial velocities, which have to be drawn according to an Eulerian joint pdf. This can be solved by a numerical spin-up procedure, as performed by Kljun et al. (2002), or by constructing a numerical scheme for drawing the initial velocities according to an Eulerian joint pdf. The backward estimator for the surface flux footprint involves a numerically unstable sum of the inverses of interception velocities, producing unrealistic spikes in areas where the statistics are poor. This problem does not occur with forward estimators, as only the information on the vertical transfer direction of each particle is used to resolve the flux footprint. Cai et al. (2008) showed that the concentration footprint inferred from backward simulation can be erroneous because of discretisation error close to the surface, where turbulence is strongly inhomogeneous, and proposed a scheme for numerical adjustment to eliminate the error. In addition, the backward footprint simulation can violate the well-mixed condition at the surface when a perfect reflection scheme is applied to skewed or inhomogeneous turbulence (Wilson and Flesch, 1993). This numerical problem can also be avoided by a suitable numerical scheme (Cai et al., 2008).
11.4 LARGE-EDDY SIMULATION APPROACH
The large-eddy simulation (LES) approach is free from the drawback of a predefined turbulence field: using the Navier–Stokes equations, it resolves the large eddies while parameterising subgrid-scale processes. This approach presupposes that most of the flux is contained in the large eddies; since these are directly resolved, the method provides a high level of realism in the simulated flow, even with complex boundary conditions (e.g. Hadfield, 1994). It is considered the technique of choice for many cases not ordinarily studied using simpler models, and it can include the effect of the pressure gradient. However, it is computationally even more expensive than the Lagrangian approach. Furthermore, the affordable number of grid points limits flow simulations to relatively simple flow conditions. Nevertheless, the method has been applied to simulate footprints in the convective boundary layer (Cai and Leclerc, 2007; Leclerc et al., 1997; Steinfeld et al., 2008) and to model the turbulence structure inside forest canopies (Shen and Leclerc, 1997). The LES approach shows potential for application in future studies, and is ideally suited to tackle footprint descriptions in inhomogeneous conditions. It provides a valuable "dataset" against which simpler footprint models can be verified, and it can also cope with horizontal surface inhomogeneities, as shown in Shen and Leclerc (1994, 1995). The method can be used to model footprints for even higher complexity in surface
properties, such as variable canopy height, density and structure, but its applicability is limited in practice by computational demands. The first LES studies arose from the work of Moeng and Wyngaard (1988). LES is a sophisticated model that directly computes the three-dimensional, time-dependent turbulent motions with scales equal to or greater than twice the grid size, and models only the subgrid-scale motions. Typically, it predicts the three-dimensional velocity fields, pressure and turbulent kinetic energy. Depending on the purpose, it can also simulate the turbulent transport of moisture, heat, carbon dioxide and pollutants. There are several parameterisations available for treating the subgrid scales. One of the most widely used simulations is that originally developed by Moeng (1984) and Moeng and Wyngaard (1988), modified by Leclerc et al. (1997) and Su et al. (1998), and by Patton et al. (2001) and Vilà-Guerau de Arellano et al. (2005), to include canopy and boundary layer scalar transport. Often, the subgrid scale is parameterised using a 1.5-order closure scheme. Depending on the LES used, the simulation can contain a set of cloud microphysical and thermodynamic equations, and can predict the temperature, mixing ratios and pressure. Some LES codes also include a terrain-following coordinate system. A spatial cross-average and temporal average are applied to the simulated data once the simulation has reached quasi-steady-state equilibrium. Typical boundary conditions are periodic, with a rigid lid applied to the top of the domain, so that gravity waves are absorbed and reflection of the waves from the upper portion of the domain is decreased. While most of the other footprint approaches utilising LES use pre-calculated flow statistics to simulate particle dispersion (Cai and Leclerc, 2007; Leclerc et al., 1997), in the approach of Steinfeld et al. (2008) particles are embedded into the LES model.
This is less costly, both in CPU time and in disk space, and it guarantees that the particles experience the flow field with a high degree of detail. However, this approach facilitates forward-in-time simulations only, whereas with pre-calculated statistics it is possible to simulate the particles backwards in time as well (Cai and Leclerc, 2007). Recently, LES studies have been applied to canopy turbulence, and have been shown to reproduce many observed characteristics of airflow within and immediately above a plant canopy, including skewness, coherent structures, two-point statistics and shear sheltering. Concentration and flux footprints have been studied using LES (e.g. Leclerc et al., 1997) by examining the behaviour of tracers released from multiple sources inside a forest canopy.
11.5 CLOSURE MODEL APPROACH

Another way to solve the Navier–Stokes (NS) equations, besides LES, is to apply ensemble averaging and use some empirical information to close the set of equations. This closure model approach is potentially an effective tool for estimating footprints over heterogeneous and hilly surfaces. Because of its high computing demands, however, its first practical applications appeared only recently.
Presenting the atmospheric boundary layer SCADIS (Scalar Distribution) model based on a 1.5-order closure, Sogachev et al. (2002) demonstrated flux footprint estimation as one of several possible model applications. Sogachev and Lloyd (2004) described in detail a footprint modelling technique based on the calculation of the individual contributions from each surface cell to the vertical flux at a receptor point. This is carried out by means of a comparative analysis of the vertical flux fields formed by source cells activated consecutively. The fields are then normalised, yielding a contribution from each cell to the flux (the flux contribution function). The actual footprint function can be constructed from the contribution function by weighting it according to the known source strengths in each cell. Footprint functions modelled by SCADIS were compared with footprints derived from both analytical and Lagrangian stochastic approaches for conditions of a uniform surface (e.g. Kormann and Meixner, 2001; Leclerc and Thurtell, 1990; Schuepp et al., 1990). The best agreement was obtained in neutral conditions, but this was mainly because of existing uncertainties in the modelling of non-neutral flow, rather than because of the method itself. A comparison of the results suggested that the closure method is suitable for practical applications, and it has also been used effectively for inhomogeneous surfaces (Sogachev et al., 2004). Figure 11.4 (see colour insert) shows how the wind regime formed over different landscapes affects the flux footprint in the case of sources located on the floor of a spatially homogeneous forest (canopy height h = 15 m). Footprints for a sensor located at a height of 2h above the forest were calculated with the SCADIS model. The wind at sensor height is along the y-axis. Figure 11.4a shows a case in which the wind flow has no directional change with height, as is usually assumed in analytical or Lagrangian stochastic calculations.
This case cannot be considered realistic, but it simply demonstrates that the model reproduces well the symmetrical structure of the footprint shown in Figure 11.2 (see colour insert). Owing both to suppressed turbulent exchange and to decreased wind speed within the canopy layer, the footprint is different from that for open conditions. Note that analytical models are unable to estimate footprints for ground sources in the presence of vegetation. Figure 11.4b shows what the footprint would be when the wind direction changes within the canopy, according to Smith and Carson (1972). For illustrative purposes, the wind flow at a height of h/3 is given. The footprint has an asymmetric form, spreading over a large area that is shifted to the left, regardless of the wind direction observed at the tower. Footprints have more complex shapes for non-flat terrain, such as the bell-shaped hill in Figure 11.4c and the valley in Figure 11.4d. In the cases presented, the topographic disturbance was 500 m in diameter and 25 m in height (or depth), and the sensor was located at the summit (or the bottom). It is clear that combinations of different elevation and forest disturbances can lead to even more complex footprints than illustrated here. Vertical fluxes and footprint behaviour over a few simplified landscape types were investigated by Sogachev et al. (2005a). The inputs required for the model calculations are of two different types. The first group includes the characteristics of the area over which the calculations are performed, such as the spatial coordinates, and the vertical and horizontal characteristics of the vegetation (structure, photosynthetic characteristics, etc.). The second
group consists of meteorological parameters at any given point in time (radiation conditions and vertical profiles of wind speed, temperature, moisture and passive scalars). The model is adjusted to use information on wind speed and scalars obtained from synoptic levels, and then to predict their distribution within the atmospheric boundary layer. The relatively simple input information (usually collected at the site of interest, or provided by large-scale models) and low computing cost, plus the limited number of constants in the closure equations, make SCADIS attractive for practical applications. The approach has been applied for estimating footprints for existing flux measurement sites in the Tver region in European Russia (Sogachev and Lloyd, 2004), and in Hyytiälä in Finland (Sogachev et al., 2004). The behaviour of both scalar fluxes and flux footprints near a forest edge was investigated in detail for the Florida AmeriFlux site (Sogachev et al., 2005b) and the Bankenbosch forest in the Netherlands (Klaassen and Sogachev, 2006). Vesala et al. (2006) analysed footprints for the complicated situation of fluxes over a small lake surrounded by forest. In Sogachev et al. (2005b), additional proof of the suitability of the closure approach was given by a comparison of the footprints predicted by SCADIS and by two different Lagrangian stochastic models (Kurbanmuradov and Sabelfeld, 2000; Thomson, 1987). Recently, Vesala et al. (2008b) successfully implemented this method for estimating the footprint of a measuring tower surrounded by complex urban terrain. After the example for the Tver region (Sogachev and Lloyd, 2004), this is only the second attempt at footprint prediction in three-dimensional real space mentioned in the literature. The footprint analysis allowed for discrimination between surface and canopy sinks/sources and accounted for the complex topography.
The heterogeneity of the urban surface results in complex transport from sources to receptor, and the footprint signature was asymmetric along the prevailing wind direction. Thus, any two-dimensional footprint model (especially one based on analytical solutions) should be avoided for urban surfaces, even those with flat topography. To summarise, Sogachev and Lloyd's (2004) approach to the use of the closure model for footprint calculations is simple in its realisation (activation of surface elements), and it could be implemented in other ensemble-averaged boundary layer models, regardless of their order of closure. Recent improvements suggested by Sogachev (2009) open up the possibility of using the approach, based on two-equation closure models, for any atmospheric stability. Although the computing costs may be high in three-dimensional calculations, and some uncertainties may remain for very low measurement heights, closure models can link sources and observed fluxes precisely under conditions of vegetation heterogeneity and hilly terrain.
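The "activation of surface elements" procedure can be sketched with a toy linear model. Here the transport operator is a fixed one-dimensional kernel standing in for a closure-model solution; the grid, the kernel shape and the `sensor_flux` helper are illustrative assumptions, not SCADIS code:

```python
import numpy as np

# Toy 1-D strip of surface elements upwind of a sensor at x = 0.
dx = 10.0
x_cells = np.arange(dx / 2, 1000.0, dx)   # cell centres, m

def sensor_flux(source_map):
    """Stand-in for one run of an ensemble-averaged transport model:
    given a surface source strength per cell, return the vertical flux
    seen at the sensor.  Transport here is a fixed linear kernel with
    an arbitrary exponential-times-power shape (illustrative only)."""
    kernel = (x_cells / 200.0) * np.exp(-x_cells / 200.0)
    kernel /= kernel.sum() * dx               # normalise to unit area
    return float((kernel * source_map).sum() * dx)

# Activation of surface elements: switch on one unit source at a time
# and record the sensor response; the responses form the footprint.
footprint = np.array([sensor_flux(np.eye(len(x_cells))[i])
                      for i in range(len(x_cells))])

# Linearity means the footprint then predicts the flux for any
# heterogeneous source map, e.g. a stronger source within 300 m:
sources = np.where(x_cells < 300.0, 2.0, 0.5)
print(sensor_flux(sources), footprint @ sources)
```

Because an ensemble-averaged model is linear in its surface sources, the per-element responses reproduce the sensor flux for any source map; the cost is one model evaluation per activated element, which is why three-dimensional applications can be expensive.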
11.6 FOOTPRINTS OF REACTIVE GASES

Recently, the effect of chemistry on the fluxes of non-inert trace gases measured above vegetation canopies has gained attention (Rinne et al., 2007; Strong et al., 2004). The chemical degradation of any compound emitted from the surface is likely to reduce the upward flux measured some distance above the ground, and also alter the footprint
350
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
function. The matter is further complicated by the fact that measurements of chemically active compounds, such as isoprene or monoterpenes, are commonly conducted above deep vegetation canopies. Two approaches have been applied to study the effect of chemistry on measured fluxes: several LES studies (e.g. Patton et al., 2001; Vilà-Guerau de Arellano et al., 2004; Vinuesa and Vilà-Guerau de Arellano, 2003), and a few Lagrangian model studies (Rinne et al., 2007; Strong et al., 2004). Strong et al. (2004) and Rinne et al. (2007) examined the effect of chemistry on hydrocarbon fluxes measured above forest ecosystems, and on their footprints. The chemistry was calculated using a first-order differential equation, resulting in an exponential decay function. An example of the resulting cumulative one-dimensional footprint functions is shown in Figure 11.5. We can see that, while the cumulative footprint of an inert trace gas approaches unity with increasing distance from the measurement point, that of a chemically active species remains lower. The more reactive the compound, the more sensitive is the flux to the compound's degradation. The level of the emission also affects the total measured flux, as air parcels originating below the canopy have, on average, longer transport times and thus more time for chemical degradation. Chemical degradation also shortens the cumulative footprints. If we compare, for example, the 80% cumulative footprint of an inert gas with that of a gas with a lifetime of 5 min, we can see that the 80% value for the inert gas is reached at a distance of about 200 m, but that for the gas with a 5 min lifetime at only about 70 m.
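The exponential-decay weighting can be illustrated with a minimal one-dimensional Monte Carlo sketch. The contact-distance distribution and the travel-time model (distance divided by a constant effective speed) are illustrative assumptions, not the parameterisation used by Rinne et al. (2007):

```python
import numpy as np

rng = np.random.default_rng(0)

U_EFF = 1.5        # assumed effective transport speed, m s^-1
N = 200_000        # number of model particles

# Toy footprint: upwind contact distances drawn from a log-normal
# distribution (purely illustrative; a real model would integrate
# stochastic trajectories through measured turbulence profiles).
x = rng.lognormal(mean=np.log(60.0), sigma=0.9, size=N)  # m
t = x / U_EFF                                            # travel time, s

def cumulative_footprint(tau_c=None):
    """Fraction of the *emitted* flux seen by the sensor from within a
    given distance; tau_c=None means an inert tracer."""
    grid = np.logspace(1, 4, 200)               # 10 m .. 10 km
    w = np.ones(N) if tau_c is None else np.exp(-t / tau_c)
    order = np.argsort(x)
    cum = np.cumsum(w[order]) / N               # normalised by emission
    return grid, np.interp(grid, x[order], cum)

grid, inert = cumulative_footprint()
_, reactive = cumulative_footprint(tau_c=300.0)  # 5 min lifetime

# 80% distances of the *measured* flux for each curve:
d80_inert = grid[np.searchsorted(inert, 0.8 * inert[-1])]
d80_react = grid[np.searchsorted(reactive, 0.8 * reactive[-1])]
print(d80_inert, d80_react, reactive[-1])
```

The reactive curve saturates below unity because part of the emitted flux is destroyed en route, and its 80% distance shrinks relative to the inert tracer, qualitatively reproducing the behaviour in Figure 11.5.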
Figure 11.5: Effect of chemical degradation (τc is the lifetime of the chemical compound) on cumulative footprint functions as calculated by a stochastic Lagrangian transport model. [Graph: normalised cumulative footprint (0–1.0) against distance from the measurement point (10¹–10⁴ m), for an inert tracer and for compounds with τc = 1 h, 30 min, 5 min and 1 min.]
Source: Rinne, R., Taipale, R., Markkanen, T., Ruuskanen, T.M., Helle´n, H., Kajos, M.K., Vesala, T. and Kulmala, M. (2007). Hydrocarbon fluxes above a Scots pine forest canopy: measurements and modeling. Atmospheric Chemistry and Physics. 7: 2357–2388.
In the model used for the calculation of Figure 11.5, as well as in the model by Strong et al. (2004), a constant chemical lifetime was assumed. However, during transport below, within and above the canopy, air parcels are exposed to different levels of solar short-wave radiation, and to different ambient concentrations of reactive gases. This leads to a varying lifetime, according to the instantaneous location of air parcels. The effect of this on the turbulent transport was initially explored by Rinne et al. (2007). However, further model research with a more sophisticated embedded transport–chemistry interaction is required.
11.7 FUTURE RESEARCH AND OPEN QUESTIONS

Future research on footprints should focus on the following.

• Easy-to-use footprint estimates are needed for measurements over forest canopies, for example similar to the parameterised model for the atmospheric boundary layer presented by Hsieh et al. (2000). Markkanen et al. (2003) presented footprint statistics as a function of the structure parameter and density of the forest. The parameterisation by Kljun et al. (2004) is available at http://footprint.kljun.net. The SCADIS closure model was also simplified (two-dimensional domain, neutral stratification, flat topography, etc.) and provided with a user-friendly menu. The operating manual for the set of basic and newly created programs, called Footprint Calculator, was presented by Sogachev and Sedletski (2006) and is available freely by request to the authors, or from the Nordic Centre for Studies of Ecosystem Carbon Exchange (NECC) website (http://www.necc.nu/NECC/home.asp).
• Regarding the dependence of canopy turbulence on forest structure, and the influence of stability on footprint functions over forest canopies, Lagrangian simulation studies have raised the question of uncertainties related to forest structure (Rannik et al., 2003). Both measurements and modelling can improve our understanding of canopy turbulence and dispersion processes. Note that flux measurements are often carried out, for practical reasons, within the roughness sublayer (the layer between the canopy top and the inertial sublayer), where turbulence is enhanced and the footprint function is in general more contracted than that based on the inertial sublayer similarity functions (Lee, 2003).
• Restrictions on the applicability, and uncertainties, of footprint estimates based on the assumption of horizontal homogeneity should be clarified, and changes in surface properties should be determined for actual conditions. Furthermore, the interaction between footprints and internal boundary layers should be discussed, as well as the blending height concept and the extension of footprint models to layers above the surface layer.
• Numerical studies of flow and scalar dispersion in complex measurement conditions should be performed to evaluate the influence of topography and canopy heterogeneity on flux measurements and ecosystem gas exchange estimation. Tools capable of dealing with complex flows, such as LES, should be developed and applied to actual measurement conditions. Furthermore, there is potential in using ensemble-averaged closure models, which have largely been neglected in footprint studies.
• Footprint models should allow for active scalars, such as chemically reactive gases or aerosol particles (Rinne et al., 2007; Strong et al., 2004). Many of the above-mentioned points (inhomogeneities with high roughness, topography, interest in reactive compounds) are of greater interest for urban environments. As the number of urban flux sites continues to grow, there is an increasing demand for models capable of tackling the most essential features of the urban environment (Vesala et al., 2008b). Upscaling fluxes at the landscape/urban-area level is of high relevance for large urban areas.
ACKNOWLEDGEMENTS

Support from ACCENT-BIAFLUX, the IMECC EU project and the ICOS EU project is acknowledged, together with the Academy of Finland Centre of Excellence programme (project number 1118615). M.Y. Leclerc is also grateful to the US Department of Energy, Office of Science, Terrestrial Carbon Processes, grant DE-AC02-98CH10886, for partial support leading to the contribution to this chapter. The photographs in Figures 11.2 and 11.3 are used by permission of Chris Hopkinson.
REFERENCES

Baldocchi, D. (1997). Flux footprints within and over forest canopies. Boundary-Layer Meteorology. 85: 273–292.
Cai, X.H. and Leclerc, M.Y. (2007). Forward-in-time and backward-in-time dispersion in the convective boundary layer: the concentration footprint. Boundary-Layer Meteorology. 123: 201–218.
Cai, X., Peng, G., Guo, X. and Leclerc, M.Y. (2008). Evaluation of backward and forward Lagrangian footprint models in the surface layer. Theoretical and Applied Climatology. 93: 207–233.
Finnigan, J.J. (2004). The footprint concept in complex terrain. Agricultural and Forest Meteorology. 127: 117–129.
Flesch, T.K. (1996). The footprint for flux measurements, from backward Lagrangian stochastic models. Boundary-Layer Meteorology. 78: 399–404.
Flesch, T.K., Wilson, J.D. and Yee, E. (1995). Backward-time Lagrangian stochastic dispersion models and their application to estimate gaseous emissions. Journal of Applied Meteorology. 34: 1320–1332.
Foken, T. (2008). Micrometeorology. Springer, New York.
Foken, T. and Leclerc, M.Y. (2004). Methods and limitations in validation of footprint models. Agricultural and Forest Meteorology. 127: 223–234.
Gash, J.H.C. (1986). A note on estimating the effect of a limited fetch on micrometeorological evaporation measurements. Boundary-Layer Meteorology. 35: 409–413.
Göckede, M., Thomas, C., Markkanen, T., Mauder, M., Ruppert, J. and Foken, T. (2007). Sensitivity of Lagrangian stochastic footprints to turbulence statistics. Tellus. 59B: 577–586.
Hadfield, M.G. (1994). Passive scalar diffusion from surface sources in the convective boundary layer. Boundary-Layer Meteorology. 69: 417–448.
Horst, T.W. and Weil, J.C. (1992). Footprint estimation for scalar flux measurements in the atmospheric surface layer. Boundary-Layer Meteorology. 59: 279–296.
Horst, T.W. and Weil, J.C. (1994). How far is far enough? The fetch requirements for micrometeorological measurement of surface fluxes. Journal of Atmospheric and Oceanic Technology. 11: 1018–1025.
Hsieh, C.-I. and Katul, G. (2009). The Lagrangian stochastic model for estimating footprint and water vapor flux over inhomogeneous surfaces. International Journal of Biometeorology. 53: 87–100.
Hsieh, C.-I., Katul, G. and Chi, T. (2000). An approximate analytical model for footprint estimation of scalar fluxes in thermally stratified atmospheric flows. Advances in Water Resources. 23: 765–772.
Klaassen, W. and Sogachev, A. (2006). Flux footprint simulation downwind of a forest edge. Boundary-Layer Meteorology. 121: 459–473.
Kljun, N., Rotach, M.W. and Schmid, H.P. (2002). A 3-D backward Lagrangian footprint model for a wide range of boundary layer stratifications. Boundary-Layer Meteorology. 103: 205–226.
Kljun, N., Calanca, P., Rotach, M.W. and Schmid, H.P. (2004). A simple parameterisation for flux footprint predictions. Boundary-Layer Meteorology. 112: 503–523.
Kormann, R. and Meixner, F.X. (2001). An analytic footprint model for neutral stratification. Boundary-Layer Meteorology. 99: 207–224.
Kurbanmuradov, O.A. and Sabelfeld, K.K. (2000). Lagrangian stochastic models for turbulent dispersion in the atmospheric boundary layer. Boundary-Layer Meteorology. 97: 191–218.
Kurbanmuradov, O., Rannik, Ü., Sabelfeld, K. and Vesala, T. (1999). Direct and adjoint Monte Carlo algorithms for the footprint problem. Monte Carlo Methods and Applications. 5: 85–112.
Kurbanmuradov, O., Rannik, Ü., Sabelfeld, K.K. and Vesala, T. (2001). Evaluation of mean concentration and fluxes in turbulent flows by Lagrangian stochastic models. Mathematics and Computers in Simulation. 54: 459–476.
Kurbanmuradov, O., Levykin, A.I., Rannik, Ü., Sabelfeld, K. and Vesala, T. (2003). Stochastic Lagrangian footprint calculations over a surface with an abrupt change of roughness height. Monte Carlo Methods and Applications. 9: 167–188.
Leclerc, M.Y. and Thurtell, G.W. (1990). Footprint prediction of scalar fluxes using a Markovian analysis. Boundary-Layer Meteorology. 52: 247–258.
Leclerc, M.Y., Shen, S. and Lamb, B. (1997). Observations and large-eddy simulation modeling of footprints in the lower convective boundary layer. Journal of Geophysical Research. 102: 9323–9334.
Lee, X. (2003). Fetch and footprint of turbulent fluxes over vegetative stands with elevated sources. Boundary-Layer Meteorology. 107: 561–579.
Luhar, A.K. and Rao, K.S. (1994). Source footprint analysis for scalar fluxes measured over an inhomogeneous surface. In: Gryning, S.E. and Millán, M.M. (Eds), Air Pollution Modeling and its Applications. Plenum Press, New York, pp. 315–323.
Markkanen, T., Rannik, Ü., Marcolla, B., Cescatti, A. and Vesala, T. (2003). Footprints and fetches for fluxes over forest canopies with varying structure and density. Boundary-Layer Meteorology. 106: 437–459.
Moeng, C.-H. (1984). A large-eddy simulation model for the study of planetary boundary-layer turbulence. Journal of the Atmospheric Sciences. 41: 2052–2062.
Moeng, C.-H. and Wyngaard, J. (1988). Spectral analysis of large-eddy simulation of the convective boundary layer. Journal of the Atmospheric Sciences. 45: 3573–3587.
Mölder, M., Klemedtsson, L. and Lindroth, A. (2004). Turbulence characteristics and dispersion in a forest: verification of Thomson's random-flight model. Agricultural and Forest Meteorology. 127: 203–222.
Pasquill, F. (1972). Some aspects of boundary layer description. Quarterly Journal of the Royal Meteorological Society. 98: 469–494.
Pasquill, F. and Smith, F.B. (1983). Atmospheric Diffusion, 3rd edn. Wiley, New York.
Patton, E.G., Davis, K.J., Barth, M.C. and Sullivan, P.P. (2001). Decaying scalars emitted by a forest canopy: a numerical study. Boundary-Layer Meteorology. 100: 91–129.
Poggi, D., Katul, G.G. and Cassiani, M. (2008). On the anomalous behavior of the Lagrangian structure function similarity constant inside dense canopies. Atmospheric Environment. 42: 4212–4231.
Rannik, Ü., Aubinet, M., Kurbanmuradov, O., Sabelfeld, K.K., Markkanen, T. and Vesala, T. (2000). Footprint analysis for the measurements over a heterogeneous forest. Boundary-Layer Meteorology. 97: 137–166.
Rannik, Ü., Markkanen, T., Raittila, J., Hari, P. and Vesala, T. (2003). Turbulence statistics inside and over forest: influence on footprint prediction. Boundary-Layer Meteorology. 109: 163–189.
Rinne, R., Taipale, R., Markkanen, T., Ruuskanen, T.M., Hellén, H., Kajos, M.K., Vesala, T. and Kulmala, M. (2007). Hydrocarbon fluxes above a Scots pine forest canopy: measurements and modeling. Atmospheric Chemistry and Physics. 7: 2357–2388.
Sabelfeld, K.K. and Kurbanmuradov, O.A. (1990). Numerical statistical model of classical incompressible isotropic turbulence. Soviet Journal on Numerical Analysis and Mathematical Modelling. 5: 251–263.
Sabelfeld, K.K. and Kurbanmuradov, O.A. (1998). One-particle stochastic Lagrangian model for turbulent dispersion in horizontally homogeneous turbulence. Monte Carlo Methods and Applications. 4: 127–140.
Sawford, B.L. (1985). Lagrangian statistical simulation of concentration mean and fluctuation fields. Journal of Climate and Applied Meteorology. 24: 1152–1166.
Sawford, B.L. (1999). Rotation of trajectories in Lagrangian stochastic models of turbulent dispersion. Boundary-Layer Meteorology. 93: 411–424.
Schmid, H.P. (1994). Source areas for scalars and scalar fluxes. Boundary-Layer Meteorology. 67: 293–318.
Schmid, H.P. (1997). Experimental design for flux measurements: matching scales of observations and fluxes. Agricultural and Forest Meteorology. 87: 179–200.
Schmid, H.P. (2002). Footprint modeling for vegetation atmosphere exchange studies: a review and perspective. Agricultural and Forest Meteorology. 113: 159–183.
Schuepp, P.H., Leclerc, M.Y., MacPherson, J.I. and Desjardins, R.L. (1990). Footprint prediction of scalar fluxes from analytical solutions of the diffusion equation. Boundary-Layer Meteorology. 50: 355–373.
Shen, S. and Leclerc, M.Y. (1994). Large-eddy simulation of small scale surface effects on the convective boundary layer structure. Atmosphere-Ocean. 32: 717–731.
Shen, S. and Leclerc, M.Y. (1995). How large must surface inhomogeneities be before they influence the convective boundary layer structure? A case study. Quarterly Journal of the Royal Meteorological Society. 121: 1209–1228.
Shen, S. and Leclerc, M.Y. (1997). Modelling the turbulence structure in the canopy layer. Agricultural and Forest Meteorology. 87: 3–25.
Smith, F.B. and Carson, D.J. (1972). Mean wind-direction shear through a forest canopy. Boundary-Layer Meteorology. 3: 178–190.
Sogachev, A. (2009). A note on two-equation closure modelling of canopy flow. Boundary-Layer Meteorology. 130: 423–435.
Sogachev, A. and Lloyd, J.J. (2004). Using a one-and-a-half order closure model of the atmospheric boundary layer for surface flux footprint estimation. Boundary-Layer Meteorology. 112: 467–502.
Sogachev, A. and Sedletski, A. (2006). SCADIS "Footprint calculator": operating manual. In: Kulmala, M., Lindroth, A. and Ruuskanen, T. (Eds), Proceedings of bACCI, NECC and FCoE Activities 2005, Book B. Report Series in Aerosol Science 81B, Helsinki, Finland.
Sogachev, A., Menzhulin, G., Heimann, M. and Lloyd, J. (2002). A simple three dimensional canopy–planetary boundary layer simulation model for scalar concentrations and fluxes. Tellus. 54B: 784–819.
Sogachev, A., Rannik, Ü. and Vesala, T. (2004). On flux footprints over the complex terrain covered by a heterogeneous forest. Agricultural and Forest Meteorology. 127: 143–158.
Sogachev, A., Panferov, O., Gravenhorst, G. and Vesala, T. (2005a). Numerical analysis of flux footprints for different landscapes. Theoretical and Applied Climatology. 80: 169–185.
Sogachev, A., Leclerc, M.Y., Karipot, A., Zhang, G. and Vesala, T. (2005b). Effect of clearcuts on flux measurements made above the forest. Agricultural and Forest Meteorology. 133: 182–196.
Steinfeld, G., Raasch, S. and Markkanen, T. (2008). Footprints in homogeneously and heterogeneously driven boundary layers derived from a Lagrangian stochastic particle model embedded into large-eddy simulation. Boundary-Layer Meteorology. 129: 225–248.
Strong, C., Fuentes, J.D. and Baldocchi, D.D. (2004). Reactive hydrocarbon flux footprints during canopy senescence. Agricultural and Forest Meteorology. 127: 159–173.
Su, H.-B., Shaw, R.H., Paw U, K.T., Moeng, C.-H. and Sullivan, P.P. (1998). Turbulent statistics of neutrally stratified flow within and above a sparse forest from large-eddy simulation and field observations. Boundary-Layer Meteorology. 88: 363–397.
Thomson, D.J. (1987). Criteria for the selection of stochastic models of particle trajectories in turbulent flows. Journal of Fluid Mechanics. 189: 529–556.
Vesala, T., Huotari, J., Rannik, Ü., Suni, T., Smolander, S., Sogachev, A., Launiainen, S. and Ojala, A. (2006). Eddy covariance measurements of carbon exchange and latent and sensible heat fluxes over a boreal lake for a full open-water period. Journal of Geophysical Research. 111: D11101, doi:10.1029/2005JD006365.
Vesala, T., Kljun, N., Rannik, Ü., Rinne, J., Sogachev, A., Markkanen, T., Sabelfeld, K., Foken, Th. and Leclerc, M.Y. (2008a). Flux and concentration footprint modelling: state of the art. Environmental Pollution. 152: 653–666.
Vesala, T., Järvi, L., Launiainen, S., Sogachev, A., Rannik, Ü., Mammarella, I., Siivola, E., Keronen, P., Rinne, J., Riikonen, A. and Nikinmaa, E. (2008b). Surface–atmosphere interactions over complex urban terrain in Helsinki, Finland. Tellus. 60B: 188–199.
Vilà-Guerau de Arellano, J., Dosio, A., Vinuesa, J.-F., Holtslag, A.A.M. and Galmarini, S. (2004). The dispersion of chemically active species in the atmospheric boundary layer. Meteorology and Atmospheric Physics. 87: 23–38.
Vilà-Guerau de Arellano, J., Kim, S.-W., Barth, M.C. and Patton, E.G. (2005). Transport and chemical transformations influenced by shallow cumulus over land. Atmospheric Chemistry and Physics. 5: 3219–3231.
Vinuesa, J.-F. and Vilà-Guerau de Arellano, J. (2003). Fluxes and (co-)variances of reacting scalars in the convective boundary layer. Tellus. 55B: 935–949.
Wilson, J.D. and Flesch, T.K. (1993). Flow boundaries in random-flight dispersion models: enforcing the well-mixed condition. Journal of Applied Meteorology. 32: 1695–1707.
Wilson, J.D. and Flesch, T.K. (1997). Trajectory curvature as a selection criterion for valid Lagrangian stochastic dispersion models. Boundary-Layer Meteorology. 84: 411–426.
Wilson, J.D. and Sawford, B.L. (1996). Review of Lagrangian stochastic models for trajectories in the turbulent atmosphere. Boundary-Layer Meteorology. 78: 191–210.
Wilson, J.D. and Swaters, G.E. (1991). The source area influencing a measurement in the planetary boundary-layer: the footprint and the distribution of contact distance. Boundary-Layer Meteorology. 55: 25–46.
CHAPTER 12

Past and Future Effects of Atmospheric Deposition on the Forest Ecosystem at the Hubbard Brook Experimental Forest: Simulations with the Dynamic Model ForSAFE

Salim Belyazid, Scott Bailey and Harald Sverdrup
12.1 INTRODUCTION

The Hubbard Brook Ecosystem Study (HBES) presents a unique opportunity for studying long-term ecosystem responses to changes in anthropogenic factors. Following industrialisation and the intensification of agriculture, the Hubbard Brook Experimental Forest (HBEF) has been subject to increased loads of atmospheric deposition, particularly sulfur and nitrogen. The deposition of these elements, also referred to as acidic deposition because of its acidifying effect on the soil, has inevitably affected the forest ecosystem and streams at HBEF. A particular characteristic of great relevance for studying the effects of acidic deposition at HBEF is the existence of long time-series of field measurements, including forest biomass, soil and runoff water. This study builds on the available data for HBEF to validate the dynamic ecosystem model ForSAFE, with the aim of using the model to reconstruct the historical state of the forest as it has been gradually affected by the increasing atmospheric deposition, and to make future predictions about changes in the forest ecosystem at HBEF.
12.2 A SHORT DESCRIPTION OF THE ForSAFE MODEL

Modelling of Pollutants in Complex Environmental Systems, Volume II, edited by Grady Hanrahan. © 2010 ILM Publications, a trading division of International Labmate Limited.

The ForSAFE model (Belyazid, 2006; Wallman et al., 2005) is theoretically based on the physical cycles of matter in a forest ecosystem as they were presented by Kimmins
(1997) (Figure 12.1). The biochemical cycle represents the flows of carbon, nutrients and water within the biological compartment of the forest ecosystem. In the context of the ForSAFE model, the biological compartment is represented by the trees. The biochemical cycle includes the allocation of carbon and nutrients to the tree parts, which, in the model, are grouped into foliage, wood and fine roots (roots with a diameter of less than 2 mm used for water and nutrient uptake). The biochemical cycle also includes the retranslocation process, which restores part of the nutrients from the roots and foliage destined for litterfall back into the living biomass. The biogeochemical cycle links the living biomass to the soil (Figure 12.1). This cycle comprises the uptake of water and nutrients, litterfall, and the decomposition of the litter. In ForSAFE, this cycle is the crucial link between the soil and the vegetation. It is through the biogeochemical cycle that water and nutrients are removed from the soil by the trees, and nutrients and carbon are introduced into the soil through litterfall and decomposition. If changes occur in the soil chemistry, it is through the biogeochemical cycle that they affect the living biomass and vice versa. The geochemical cycle includes the input of matter through weathering of soil minerals and atmospheric deposition of gases and particles, as well as precipitation, soil leaching and soil erosion. In the model, the weathering and leaching processes are simulated dynamically, while the mineral composition of the soil, the deposition and the precipitation
Figure 12.1: The biochemical, biogeochemical and geochemical cycles involved in the nutrient dynamics in a forest ecosystem (adapted from Kimmins, 1997). The ForSAFE model attempts to re-create a system in which the three cycles are integrated.
PAST AND FUTURE EFFECTS OF ATMOSPHERIC DEPOSITION: SIMULATIONS WITH ForSAFE
359
are taken as environmental inputs. Soil erosion is not considered in the model, nor is the depletion of the soil minerals through weathering. The three cycles are closely related, and overlap with each other. When translated into the model, the focus is turned from keeping a clear distinction between the three cycles to tracing the flow of the elements. Carbon, nutrients and water are transferred between the three cycles at different stages of their flow within the modelled forest ecosystem, and the boundaries between the cycles become ephemeral. The apparent structure of the model becomes an integrated web of flows between the tree parts, the trees and the soil, and within the soil (Figure 12.2). ForSAFE (Figure 12.2) simulates the biogeochemical cycles of carbon, nitrogen, base cations (Bc) and water in a forest ecosystem, with focus on tree growth, soil chemistry and soil organic matter accumulation and decomposition (Belyazid, 2006; Belyazid et al., 2006; Wallman et al., 2005). The module for tree growth simulates the dynamics of photosynthesis, nutrient uptake, nutrient and carbon allocation and evapotranspiration. This module is based on the PnET model (Aber and Federer, 1992), and uses the nitrogen content of leaves or needles to estimate a potential photosynthesis rate from the solar radiation. The photosynthesis rate is in turn corrected according to nutrient (from the soil chemistry module) and water (from the hydrology module) availability. The soil chemistry module is based on the SAFE model for soil chemistry processes and weathering (Alveteg, 1998; Alveteg et al., 1995). This module estimates the soil solution contents of different chemicals (nitrogen, Bc, aluminium, hydrogen, chloride, sodium) based on the balance between the processes of uptake (estimated in

Figure 12.2: ForSAFE is made up of four main central modules, to which the VEG module is annexed. The five modules communicate continuously on a monthly basis. (DOC: dissolved organic carbon.)
[Diagram: the decomposition, hydrology, tree growth and soil chemistry/weathering modules, linked by monthly exchanges of soil moisture, mineralisation and DOC production, soil acidity and N and Al contents, litter production, evapotranspiration, soil moisture percolation, and nutrient and Al uptake.]
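The monthly communication between modules can be sketched as a plain update loop. The state variables and update rules below are placeholders chosen purely for illustration; only the pattern of exchange (hydrology, tree growth, decomposition and soil chemistry sharing soil moisture, litter and nutrient state each month) follows the description above:

```python
# Hypothetical skeleton of the monthly module coupling; all coefficients
# are invented for illustration and are not ForSAFE values.

class State:
    soil_moisture = 0.3      # m3 m-3
    soil_nitrogen = 1.0      # arbitrary units
    litter = 0.0
    biomass = 100.0

def hydrology(s, precip):            # percolation and evapotranspiration
    s.soil_moisture = min(0.4, s.soil_moisture + 0.1 * precip - 0.02)

def tree_growth(s):                  # photosynthesis limited by water and N
    growth = 0.5 * max(0.0, min(s.soil_moisture / 0.3, s.soil_nitrogen))
    s.biomass += growth
    s.litter += 0.2 * growth         # litterfall back to the soil

def decomposition(s):                # litter decay modulated by moisture
    decayed = 0.1 * s.litter * (s.soil_moisture / 0.4)
    s.litter -= decayed
    s.soil_nitrogen += 0.05 * decayed   # mineralisation

def soil_chemistry(s):               # uptake and leaching drain the pool
    s.soil_nitrogen = max(0.0, s.soil_nitrogen - 0.03 - 0.02 * s.soil_moisture)

s = State()
for month in range(12 * 10):         # ten simulated years
    hydrology(s, precip=0.25)
    tree_growth(s)
    decomposition(s)
    soil_chemistry(s)
print(round(s.biomass, 1))
```

The design point is that no module owns the full state: each reads what the others wrote in the same month, which is the "continuous monthly communication" the text describes.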
the growth module), mineralisation (estimated in the decomposition module), precipitation, weathering, leaching (estimated in the hydrology module) and cation exchange. The decomposition module, based on the DECOMP model (Wallman et al., 2006; Walse et al., 1998), simulates the accumulation and decomposition of the soil organic matter and litter. The incoming litter (produced by the growth module) is sorted into four different pools with different dispositions to decomposition, and each pool is decomposed at a rate that depends on the soil temperature, moisture content (from the hydrology module), acidity and nitrogen content (from the chemistry module). Finally, the hydrology module, derived from the PULSE model (Wallman et al., 2005), estimates the rates of vertical water percolation and evapotranspiration based on the water-holding capacity and the wilting point of different soil layers.
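The four-pool decomposition idea can be sketched as follows. The pool fractions, base rates and environmental response functions are illustrative assumptions, not the published DECOMP parameterisation; only the structure (litter split into pools of decreasing decomposability, each decaying at a rate scaled by temperature, moisture, acidity and nitrogen) follows the text:

```python
import math

BASE_RATES = [0.10, 0.03, 0.01, 0.001]   # per month, fast -> recalcitrant
SPLIT      = [0.35, 0.30, 0.25, 0.10]    # litter allocation to each pool

def modifier(temp_c, moisture, ph, n_frac):
    """Combined environmental rate modifier (all forms hypothetical)."""
    f_t = math.exp(0.07 * (temp_c - 10.0))       # roughly a Q10 of 2
    f_m = max(0.0, min(1.0, moisture / 0.3))     # linear up to field capacity
    f_ph = max(0.1, min(1.0, (ph - 3.0) / 2.0))  # suppressed in acid soil
    f_n = 0.5 + 0.5 * min(1.0, n_frac / 0.02)    # N-poor litter decays slower
    return f_t * f_m * f_ph * f_n

def step(pools, litter_in, temp_c, moisture, ph, n_frac):
    """Advance the four pools by one month; returns the C released."""
    f = modifier(temp_c, moisture, ph, n_frac)
    released = 0.0
    for i, (pool, k, frac) in enumerate(zip(pools, BASE_RATES, SPLIT)):
        loss = pool * min(1.0, k * f)
        pools[i] = pool - loss + litter_in * frac
        released += loss
    return released

pools = [0.0, 0.0, 0.0, 0.0]
for month in range(120):
    co2 = step(pools, litter_in=10.0, temp_c=8.0,
               moisture=0.25, ph=4.2, n_frac=0.015)
print([round(p, 1) for p in pools])
```

After a few simulated decades the fast pool equilibrates while the recalcitrant pool keeps accumulating, which is the qualitative behaviour expected of soil organic matter build-up.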
12.3 INPUT
The model simulation was carried out for watershed 6 (W6) at HBEF, New Hampshire. In accordance with the concept presented in Figure 12.1, the model requires input about the state of the soil and the vegetation, and time-series inputs for the geochemical flows. The soil input data were based on field measurements at W6, available at the HBES website.1 The soil was stratified into six layers representing the six existing horizons at the site, and input for each layer was specified according to measurements (Table 12.1). The values for gibbsite solubility were based on the value reported in Warfvinge and Sverdrup (1995). The mineral composition of the soil (Table 12.2) was reconstructed using the normative back-calculation model UPPSALA (Sverdrup and Warfvinge, 1993; Warfvinge and Sverdrup, 1995), based on the total analysis of the soil bulk chemistry available at HBEF (Scott Bailey, personal communication). The O horizon was assumed to contain no weatherable minerals. Feldspar and plagioclase are the dominant minerals through the mineral soil profile, and the E horizon contains important amounts of these two minerals.

Table 12.1: Layer-specific input at the low-level elevation site at HBEF.

Horizon  Thickness/cm  Density/kg m⁻³  Field capacity/m³ m⁻³  Weatherable area/m² m⁻³  Cation exchange capacity  Base saturation (BS)/%  Gibbsite constant
O          4.0    300.0   0.32   1.0 × 10⁵   17.5 × 10⁵   50.0   6.5
A          3.0    400.0   0.38   2.5 × 10⁶    5.8 × 10⁵   16.0   7.6
E          3.0   1000.0   0.23   8.0 × 10⁵    3.4 × 10⁵   18.0   7.6
Bhs        4.0   1400.0   0.44   1.5 × 10⁶    7.5 × 10⁵   13.0   8.2
Bs        75.0   1400.0   0.25   1.0 × 10⁶    5.8 × 10⁵    7.0   9.1
C         11.0   1800.0   0.20   1.0 × 10⁶    2.1 × 10⁵    6.0   9.2
Table 12.2: Mineral contents per layer (%). Quartz makes up the remaining fraction not presented in the table.

Horizon   Fld    Plg    Hrn   Pyr   Epd   Grn   Btt   Msc   Chl   Ver   Apt   Klt   Clc
O         0.00   0.00   0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
A         9.43   8.83   0.62  0.23  0.06  0.00  2.18  0.35  0.00  0.00  0.00  0.03  0.00
E        13.59  14.62   1.28  0.00  0.00  0.00  3.01  0.10  0.00  0.00  0.00  0.27  0.00
Bhs      11.65  14.51   2.15  0.18  0.29  0.00  3.53  0.18  0.00  0.00  0.00  0.35  0.00
Bs       13.36  17.68   2.15  0.43  0.38  0.00  4.18  0.33  0.00  0.00  0.00  0.45  0.00
C        15.52  20.25   3.39  0.57  0.73  0.00  0.84  0.36  0.00  0.00  0.00  0.65  0.00

Fld, K-feldspar; Plg, plagioclase; Hrn, hornblende; Pyr, pyroxene; Epd, epidote; Grn, garnet; Btt, biotite; Msc, muscovite; Chl, Fe-chlorite; Ver, Mg-vermiculite; Apt, apatite; Klt, kaolinite; Clc, calcite.
The parametric input for the forest cover was based on the data used for PnET model simulations (Aber et al., 1997, 2002), which were specifically parameterised for the northern hardwoods at HBEF-W6. These data describe the photosynthesis, growth rates, evapotranspiration, litterfall and nutrient allocation in trees in response to light intensity and the nitrogen content of the foliage. A modification was made to this part of the input data, whereby the internal pools of nutrients in the trees were changed from static values to dynamic sizes dependent on the size of the biomass (Figure 12.3). The necessity for modelling the internal nutrient pools dynamically comes from the fact that these pools are used to create an uptake gradient that reflects the deficit between the tree’s nutrient need and the actual nutrient availability in the Figure 12.3: The internal nutrient pools are modelled to dynamically increase or decrease following a growth or decline of the biomass, respectively.
Internal N, Ca, K pools/g m2
20
N, Ca, K pools/g m2 Mg pool/g m2
15
10
5
0
0
5000
1 104
1.5 104
Biomass dry weight/g m2
2 104
2.5 104
362
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
tree for allocation. This gradient is in turn used to create an uptake requirement, which should fill the gap in the trees. The model uses an average root distribution (Figure 12.4), based on field measurements at HBEF-W6 (Timothy Fahey, personal communication), to govern root uptake. Root density is highest in the forest floor, while the majority of roots in total are found in the lower mineral soil (Bhs, Bs and C horizons); root density is lowest in the E horizon. The root distribution is used in the model to direct the uptake distribution of nutrients and water, and to partition root growth, respiration and litter. At this stage, the root distribution is assumed to be constant with respect to time. Finally, the geochemical cycle inputs to the model – precipitation, light intensity, temperature and deposition – were derived from the data available on the Hubbard Brook website.2 Bulk deposition data were modified to account for dry deposition: throughfall deposition was first adjusted for chloride by increasing bulk deposition by a factor of 1.3, and this factor was then extended to the other elements. The data show a large increase in atmospheric deposition starting in the middle of the 1800s (Figure 12.5) and, while the deposition of sulfur, chloride and Bc was considerably reduced by the early 2000s, that of nitrogen remains elevated.
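The dynamic internal pools and the root-weighted partitioning of uptake described above can be sketched as follows. This is an illustrative Python reconstruction, not ForSAFE code: the pool-per-biomass coefficients and the horizon root fractions are invented placeholders (in the model itself, the root fractions come from the HBEF-W6 field measurements shown in Figure 12.4).

```python
# Illustrative sketch of the dynamic internal nutrient pools and the
# root-weighted partitioning of uptake described in the text.
# All numeric coefficients below are hypothetical, NOT ForSAFE parameters.

# Target internal pool scales with living biomass (g nutrient per g biomass).
POOL_PER_BIOMASS = {"N": 8e-4, "Ca": 6e-4, "K": 5e-4}  # assumed values

def uptake_demand(biomass_dw, current_pools):
    """Deficit between the biomass-dependent target pool and the current
    internal pool; a surplus (negative deficit) gives zero demand."""
    demand = {}
    for nutrient, coeff in POOL_PER_BIOMASS.items():
        target = coeff * biomass_dw
        demand[nutrient] = max(0.0, target - current_pools.get(nutrient, 0.0))
    return demand

# Average root distribution by horizon (fractions sum to 1); the values
# roughly follow the pattern of Figure 12.4 but are placeholders.
ROOT_FRACTION = {"O": 0.30, "A": 0.15, "E": 0.05, "Bhs": 0.20, "Bs": 0.20, "C": 0.10}

def partition_uptake(total_uptake):
    """Distribute a nutrient uptake flux over the soil horizons in
    proportion to the (time-constant) root distribution."""
    return {h: f * total_uptake for h, f in ROOT_FRACTION.items()}

demand = uptake_demand(biomass_dw=2.0e4, current_pools={"N": 10.0, "Ca": 8.0, "K": 9.0})
layer_n = partition_uptake(demand["N"])
```

Because the target pool scales with biomass, a growing stand automatically generates a nutrient deficit, and hence an uptake demand, without static pool sizes having to be prescribed.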
Figure 12.4: The average root distribution through the soil profile as used in the model.
[The graph plots root content (%) against soil depth (0–100 cm) through the O, A, E, Bhs, Bs and C horizons.]
Figure 12.5: Deposition trends of (a) sulfate, (b) nitrogen, (c) base cations (Bc) and (d) chloride as used in input for the ForSAFE simulation. Note that the scales are not the same on all graphs. [Panels show total SO4 (2−), N (NO3− + NH4+), Bc (Ca2+ + Mg2+ + K+) and Cl− deposition (µeq/m2/yr) from 1800 to 2100.]
12.4 MODEL CALIBRATION

The ForSAFE model is calibrated on base saturation, soil organic carbon and the C/N ratio in the humus exclusively. The base saturation calibration estimates the size of the pool of exchangeable base cations (EBC) at the start of the simulation, a value usually unknown, as it lies around the year 1800 or earlier. Calibration of the soil carbon pool estimates the size of the organic pool of carbon in the soil at the start of the simulation. The C/N ratio calibration estimates the retention potential of mineralised nitrogen in the soil organic matter.

• Base saturation: The base saturation at the first year (1800) is adjusted up or down until the line passes through the 1983 value.
• Carbon: The calibration target value was the measured soil organic carbon value for 1983 in the forest floor (corresponding in Table 12.1 to the O horizon) (Huntington et al., 1988). The calibration was carried out by varying the initialisation period to allow the soil carbon to accumulate to the measured point.
• Nitrogen: The C/N ratio was calibrated by modifying the nitrogen retention coefficient after mineralisation using target C/N ratio values for 1983 in the forest floor (corresponding in Table 12.1 to the O horizon) (Huntington et al., 1988).
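The base-saturation step can be read as a one-dimensional root-finding problem: adjust the 1800 starting value until the simulated trajectory passes through the measured 1983 value. The sketch below is a generic illustration of that idea, not ForSAFE's actual calibration code; `run_model` is a hypothetical stand-in for a full model run and is assumed to return a 1983 value that increases monotonically with the starting value.

```python
# Generic sketch of the base-saturation calibration: the year-1800
# starting value is adjusted up or down until the simulated trajectory
# passes through the measured 1983 value. run_model() is a hypothetical
# placeholder for a full ForSAFE run.

def calibrate_initial_bs(run_model, bs_target_1983, lo=0.0, hi=1.0, tol=1e-4):
    """Bisection on the initial (year-1800) base saturation.
    run_model(bs_1800) must return the simulated 1983 BS and be
    monotonically increasing in its argument."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if run_model(mid) < bs_target_1983:
            lo = mid   # simulated 1983 BS too low: raise the starting value
        else:
            hi = mid   # too high: lower the starting value
    return 0.5 * (lo + hi)

# Toy stand-in: BS declines to 70% of its starting value by 1983.
toy_model = lambda bs0: 0.7 * bs0
bs0 = calibrate_initial_bs(toy_model, bs_target_1983=0.35)
```

The carbon calibration works analogously, except that the adjusted quantity is the length of the initialisation period rather than a starting pool size.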
12.5 MODEL VALIDATION
To establish the validity of the model predictions at the site, data reproduced by the model were compared with field measurements. Time-series of measured data from HBEF-W6 were available through the Hubbard Brook Ecosystem Study website. The validation covers aspects of the biomass, the soil solution chemistry and the runoff water. The model reproduces the standing wood biomass reasonably well over the 1965–2002 period for which measurements were available (Figure 12.6), although it does not replicate the slowdown and levelling off of biomass observed in W6 beginning at about 1980, nor the slight decline of about 1.1% between 1997 and 2002 (see Figure 12.10). The decline in the standing biomass was reported in the field as being driven largely by an increase in mortality of sugar maple (Juice et al., 2006), indicating that species-specific responses may be very important to forest dynamics and model performance (see also Hawley et al., 2006). The
Figure 12.6: Comparison between modelled and measured values for the standing wood biomass (all trees greater than or equal to 2 cm diameter at breast height). Measured values are from the lower half of HBEF-W6. Dead trees are not included in the comparison.
[The graph is a scatter plot of modelled against measured wood biomass (g/m2); both axes run from 0 to 2.0 × 10^4.]
ForSAFE model as currently configured does not account for species-specific responses, because of its reliance on PnET for its forest growth component. In cases where forests are co-dominated by species especially sensitive to nutrient imbalances or other environmental factors, this limits the applicability of the model results. Because the decline between 1997 and 2002 is so small, it would be very relevant to pursue the comparison in the future to see whether the decline persists and, if so, to identify the causes behind it and explain why it is not reproduced by the model. A thorough comparison of the measured nutrient contents in the tree parts with the modelled values would also give very valuable information about the change in the trees' nutrient balance, and give indications about their vulnerability to stresses. The soil solution concentrations of chloride and sulfate are used indirectly as a quality control on the deposition input data. A usual assumption is that there is no long-term net source of chloride or sulfate in the soil. For Hubbard Brook, the deposition was adjusted upwards (Figure 12.5c) until the chloride soil solution concentrations matched those observed. The model then reproduced the variation over time of chloride concentrations in the soil solution (Figure 12.7). Because Cl is relatively inactive, its concentrations depend largely on the water content at the different soil depths. Since good agreement with the measured chloride values was obtained after modifying the Cl deposition, it appeared likely that sulfate deposition had also been underestimated, and the modified deposition in Figure 12.5a was therefore adopted. Soil solution pH is reproduced reasonably well by the model, as the measured and modelled values lie within close range of each other (Figures 12.8a and 12.8d).
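The chloride-based adjustment just described can be sketched in a few lines. This is a hedged illustration, not the actual procedure's code; the concentration values are hypothetical, and it assumes Cl behaves conservatively, so that at a fixed water flux the soil-solution concentration scales linearly with deposition.

```python
# Sketch of the chloride-based quality control on deposition: assuming
# no long-term net Cl source or sink in the soil, bulk deposition is
# scaled until modelled soil-solution Cl matches the observations.
# The concentration values used below are hypothetical.

def deposition_scale_factor(measured_cl_mean, modelled_cl_mean, current_factor=1.3):
    """If modelled Cl is uniformly low or high, the same multiplicative
    bias is assumed for the deposition input: Cl is conservative, so at
    a fixed water flux its concentration scales with deposition."""
    return current_factor * measured_cl_mean / modelled_cl_mean

# e.g. measured mean 50 ueq/L versus a modelled 40 ueq/L obtained under
# the initial throughfall factor of 1.3:
f = deposition_scale_factor(50.0, 40.0)
```

The same reasoning then motivates the upward adjustment of sulfate deposition, since sulfate shares the dry-deposition pathways that the bulk collectors miss.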
While the slight downward trend in measured pH in the Bs horizon is reproduced by the model, the model produces an upward trend in the Oa horizon, unlike the measured trend between 1983 and 1993 but similar to the measured trend thereafter.

Figure 12.7: Modelled and measured chloride concentrations in the soil solution at two different depths. [Panels (a) and (b) show soil solution chloride (µeq/L) in the Oa and Bs horizons, 1985–1998.]

Figure 12.8: Measured and modelled values of soil solution pH, ANC and SO4 (2−). [Panels show pH, ANC (µeq/L) and sulfate (µeq/L) in the Oa and Bs horizons, 1985–1998.]
The cause behind the discrepancy between the measured and modelled pH values in the Oa horizon can be traced back to the acid-neutralising capacity (ANC) (Figure 12.8b). The ANC trend changes towards the early 1990s because of a decrease in the concentration of sulfate, which is not reproduced to the same extent by the model (Figure 12.8c). Chloride is relatively chemically inactive in the soil: it enters as deposition, percolates down through the soil profile, and leaves with the runoff water. The soil solution concentrations of the sum of Bc in the Bs horizon are slightly overestimated by the model (Figure 12.9d), but slightly underestimated in the Oa horizon (Figure 12.9a). The overestimation of Bc in the Bs horizon may be due to an overestimation of Bc uptake in that layer, since uptake is assumed to be directly proportional to the root distribution in the model. Aluminium is inversely related to the Bc content, and is slightly underestimated by the model, although the model still reproduces the trends seen in the measurements (Figure 12.9e). The aluminium concentrations result from the updated gibbsite solubility coefficients adopted in Table 12.1. The modelled Bc/Al ratio is well framed by the measured points in the Bs horizon (Figures 12.9c and 12.9f), but is clearly underestimated in the Oa horizon, where it lies towards the lower end of the measured range.
12.6 RESULTS

12.6.1 Changes in the trees' biomass
The forest at W6 has been subject to three consecutive cuttings, visible in Figure 12.10 as vertical dotted lines. Prior to the first cutting, the biomass would have been nearly stable. At this stage, the increasing deposition of nitrogen (Figure 12.5b) was not sufficient to cause any increase in the growth rate of the biomass, probably because the severe nitrogen limitation in the ecosystem meant that nitrogen was rapidly immobilised by the soil biota, and was therefore inaccessible to the trees. Following the second cutting, the model predicts a rapid recovery of the vegetation, probably through a combination of the increased availability of nitrogen and of light resulting from the canopy opening caused by the cutting. The growth after the third cutting is also rapid, and is predicted to slowly approach a plateau significantly higher than the historical level of the 1800s. The model does not predict any severe or lasting shortages of Bc, and hence no subsequent limitation to growth. On the contrary, the acidification of the soil caused by the elevated acidic deposition may have increased the availability of Bc for uptake (Section 12.6.2), a process that has been described in many European forest ecosystems. As a result, the increase in growth may have been supported by soil acidification in the short to medium term (within the 100 future years covered by the simulation). It is possible, however, that the parameterised nutrient content ranges used for the model are not accurate, and that the forest needs more nutrients (in terms of Bc) than assumed in the model. Results from the watershed-scale calcium manipulation at Hubbard Brook suggest this may be true. After an
Figure 12.9: Measured and modelled levels of Bc and total inorganic aluminium concentrations, as well as the Bc/Al ratio, in the Oa and Bs horizons. [Panels show soil solution Bc (µeq/L), total inorganic Al (µmol/L) and the Bc/Al ratio, 1985–1998.]
Figure 12.10: Evolution of the standing wood biomass at HBEF. The vertical lines show cuttings. Measured points represent the biomass of all live trees > 2 cm diameter at breast height on the lower half of W6.
[The graph shows modelled and measured standing wood biomass (g/m2) from 1850 to 2100.]
application of 1.2 tonnes of calcium per hectare in the form of a calcium silicate mineral (wollastonite), sugar maple (the tree species driving the levelling off and decline in biomass observed in W6) responded with decreased mortality and improved crown condition and seedling survival (Juice et al., 2006). Another important factor is the absence of phosphorus from the model: if phosphorus becomes growth limiting, the model cannot predict that limitation and may therefore overestimate future growth. Also, the model predicts an increase in the biogeochemical cycling of Bc (litterfall, decomposition, uptake), but ignores the fact that the decomposed nutrients are partly taken up by the understorey vegetation. The Bc made available in the upper layers through this process will in reality not be entirely available for uptake by the trees, again suggesting that the model may overestimate future growth. The foliage Bc requirement was increased in this simulation, providing even more material to fuel the biogeochemical cycle, but the model still indicates only short-lived shortages of Bc that do not affect growth in the long term. It should be noted, though, that because of the increased nitrogen availability the trees may become vulnerable to pathogen attacks, possibly explaining the decline seen in the measurements but not reproduced by the model.
12.6.2 Soil acidification and nutrient status
The model simulation shows that the soil base saturation (BS) has declined throughout the soil profile (Figure 12.11). The decline in the deeper layers has
Figure 12.11: BS has declined from historic levels at all the modelled soil depths. While the model shows a relatively rapid recovery in the upper three layers (a, b, c), the Bhs layer is expected to recover slowly (d), and no recovery is predicted for the lower two layers (e, f). [Panels (a)–(f) show BS (0–1.0) from 1900 to 2100 in the humus, A, E, Bhs, Bs and C horizons.]
taken the BS below the critical line of 20%, indicating that the soils have been acidified (Figures 12.11d, 12.11e and 12.11f). The decline of BS in the lower three layers is related primarily to the deposition of sulfate. BS in the humus, A and E horizons has declined sharply, but unlike the mineral soil, it is predicted to recover fully within the next 100 years. The high availability of Bc, allowing for the replenishment of BS in the upper layers, is closely linked to the increased foliage Bc contents assumed. The Bc is returned to the soil after litterfall, and made available for uptake and adsorption after the litter decomposes. However, the high demand for uptake also means that less Bc is available to percolate down the soil profile, and therefore BS in the lower layers fails to recover. Acidification is typically accompanied by a loss of EBC from the soil. This process is the result of an enhanced desorption of Bc from the exchange sites on the soil particles by the acid ions, particularly hydrogen and aluminium, in the soil solution. As the soil acidifies, more hydrogen ions are added to the soil solution and more aluminium ions are mobilised into the soil solution. These acid ions compete with the Bc for the exchange sites on the soil particle, and the net result of an increase in the former is that more of the latter are desorbed from the soil exchange sites. This increases the availability of Bc in the soil solution, where they become available for uptake by the trees or leaching. Weathering rates are typically very slow in comparison with the cation exchange process, so the pool of EBC in the soil is considered to be the Bc capital of the soil. A loss of the EBC would mean an impoverishment of the nutrient (Bc) content of the soil. 
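The exchange competition described above can be illustrated with a deliberately over-simplified occupancy model. Real soil models use Gaines–Thomas or Gapon exchange conventions; the Langmuir-style expression and the selectivity factor below are invented, and the sketch is only meant to show the direction of the effect: more acid cations (H+, Al3+) in solution means a lower base saturation and more Bc desorbed into solution.

```python
# Deliberately simplified toy of the exchange competition described in
# the text: acid cations in solution displace base cations from the
# exchange sites, lowering base saturation. Real models use
# Gaines-Thomas or Gapon exchange; this competitive-occupancy form is
# illustrative only, and k_sel is an assumed selectivity factor.

def base_saturation(bc_solution, acid_solution, k_sel=2.0):
    """Fraction of exchange sites occupied by base cations when acid
    ions compete for the sites with selectivity factor k_sel."""
    return bc_solution / (bc_solution + k_sel * acid_solution)

before = base_saturation(bc_solution=100.0, acid_solution=10.0)
after = base_saturation(bc_solution=100.0, acid_solution=100.0)
# Acidification (a tenfold rise in acid ions here) lowers base
# saturation; the displaced Bc enters solution, where it is available
# for uptake or leaching.
```

Because weathering replenishes the exchange sites only very slowly, the Bc lost this way is effectively drawn down from the soil's nutrient capital, as the text notes.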
According to the model calculations, the soil lost most of its EBC between the years 1800 and 2000 (Figure 12.12a), primarily because of the acidification process described above, and related to sulfur deposition and, to a lesser extent, nitrogen deposition. Even after a steady recovery between 2000 and 2100, the EBC pool would
Figure 12.12: Bc pools and pathways in the soil. For clarity, to avoid the inter- and intra-annual variations, the values presented are averages over 10 years. [Panel (a) shows the pool of EBC (meq/m2) in the rooting zone and the deep mineral soil; panel (b) shows Bc removal through net uptake and leaching; both 1850–2100.]
still remain lower in 2100 than it was in 1800. The loss of EBC occurred both in the rooting zone and beyond it in the deeper mineral soil. The major pathways for Bc removal from the soil are leaching and uptake. According to the model, more Bc has historically been lost through leaching than through uptake (Figure 12.12b). This difference became even more pronounced as sulfur deposition increased, driving an amplification of Bc leaching. However, as sulfur deposition was reduced by the end of the 1990s, leaching declined below the level of uptake. Contrary to the past, the model suggests that uptake will become the principal pathway of Bc removal from the soil in the future. While the net Bc uptake, which is equal to the difference between the gross uptake and mineralisation, may remain unchanged, the gross uptake of Bc will increase considerably (Figure 12.13b). The expected increase in mineralisation will provide most of the Bc for tree uptake in the future, explaining the gradual recovery of the EBC. The increasing gross uptake of Bc will be driven by the increased nitrogen uptake (Figure 12.13a). Nitrogen leaching will remain very low, but will more than double by the year 2100 (Figure 12.13a) as more nitrogen becomes available in the ecosystem. To assess whether damage from soil acidification has occurred or will occur, the Bc/Al ratio is a commonly used indicator (Nilsson and Grennfelt, 1988). Bc is the sum of the phytoactive base cations, Ca + Mg + K, in molar equivalent amounts. Al represents the sum of the inorganic positively charged aluminium ions, of which Al3+ is the most important. The Bc/Al ratio gives an indication of uptake inhibition, combining root damage by Al and nutrient (Bc) shortage, and can be directly linked to a limitation in tree growth (Sverdrup and Warfvinge, 1993).
A value of 1 is used as a reference for Bc/Al, below which damage in the form of growth limitation exceeds 20% (Sverdrup and Warfvinge, 1993).

Figure 12.13: The gross uptake of Bc increases (b) following the increased N gross uptake (a). N leaching remains low, although it follows an upward trend. [Panel (a) shows N uptake and leaching, panel (b) Bc uptake and leaching, both 1850–2100.]

However, in a more specific assessment, the actual response of root function is given by f as a fraction of unimpeded function (f = 1 at the no-effect level):

    f = [Bc]^n / ([Bc]^n + k[Al]^m)    (12.1)
where k is a plant-specific response coefficient; n = m = 1 for all conifers and grasses; and n = 3, m = 2 for most deciduous trees. Values of the coefficient k for about 300 of the most common European and North American trees and plants of the terrestrial ecosystems can be found in Sverdrup and Warfvinge (1993). The model indicates that soil Bc/Al has declined throughout the soil profile (Figure 12.14). The declining pattern is expected to be reversed as sulfur deposition is reduced, and the accelerated biogeochemical cycle of Bc will cause a strong recovery in the A horizon, but a limited recovery in the mineral soil. The model results in Figure 12.14 suggest that the accelerated biogeochemical cycle resulting from a high Bc content in the foliage favours the upper soil layers' alkalinity at the expense of the lower soil layers.
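Equation (12.1) is simple enough to evaluate directly. The sketch below is illustrative only: the concentration values and k are invented for the example, and real k values should be taken from the tabulation in Sverdrup and Warfvinge (1993).

```python
# Equation (12.1) evaluated directly: the fraction f of unimpeded root
# function as a function of soil-solution Bc and inorganic Al
# concentrations and the plant-specific coefficient k (values here are
# hypothetical, not from Sverdrup and Warfvinge's tables).

def root_response(bc, al, k, n=1, m=1):
    """f = [Bc]^n / ([Bc]^n + k*[Al]^m). n = m = 1 for conifers and
    grasses; n = 3, m = 2 for most deciduous trees. f = 1 means the
    no-effect level (unimpeded root function)."""
    return bc**n / (bc**n + k * al**m)

# With no aluminium in solution the response is unimpeded:
assert root_response(bc=0.01, al=0.0, k=1.0) == 1.0

# Rising Al depresses f monotonically at fixed Bc:
f_low = root_response(bc=0.01, al=0.001, k=1.0)
f_high = root_response(bc=0.01, al=0.01, k=1.0)
```

Note that f depends on the ratio of the two concentrations (raised to their respective exponents), which is why the Bc/Al ratio alone, as plotted in Figures 12.9 and 12.14, is a usable damage indicator.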
12.6.3 Soil organic carbon and nitrogen
The model indicates that both soil organic carbon and soil organic nitrogen are expected to grow in the future (Figure 12.15a). However, the time-series soil organic carbon measurements from Hubbard Brook indicate that there has been no increase in
Figure 12.14: The soil Bc/Al ratio has decreased considerably, but is expected to recover slowly in the Bhs and Bs horizons. The weighted Bc/Al is reached by multiplying the ratio for each layer by the fraction of root content in that layer. [The graph shows the yearly average Bc/Al ratio (log scale, 0.1–1000) in the A, Bhs and Bs horizons, 1850–2100.]
Figure 12.15: Under the assumption that growth is not impeded, both soil organic carbon (SOC) and nitrogen are expected to accumulate. The C/N ratio would have increased by the year 2000 above its historical level, and is expected to start declining slowly in the second half of the 2000s. [Panel (a) shows modelled and measured forest floor organic carbon (g/m2); panel (b) shows the modelled and measured forest floor C/N ratio; both 1850–2100.]
soil carbon for the past 30 years, which calls the model predictions into question. The increasing soil organic matter pool predicted by the model is the result of increased litterfall, which in turn is a direct consequence of the modelled biomass growth (Figure 12.10). As the data from W6 depart from the model, showing a flat or slightly declining biomass over the past 25 years, the reason for the difference between the modelled and measured soil organic carbon is likely to be the same as that which explains the departure of the modelled biomass from the measured values: namely, that the nutrient requirements of a nutrient-sensitive dominant species are not captured by the model. The C/N ratio puts the relative accumulations of soil organic carbon and nitrogen in perspective (Figure 12.15b). The ratio would not have changed before the cuttings in the early 1900s, which removed part of the standing biomass and thereby part of the litter source. The cuttings caused a reduction in the ratio, but one that recovered as the forest grew back. The model shows a slight downward trend in the C/N ratio in the future as the elevated nitrogen deposition drives the system towards a more nitrogen-enriched state. The modelled trends of soil carbon and nitrogen are flawed, as explained above. However, these trends point out the potential of the modelled ecosystem for sequestering carbon in the soil and retaining nitrogen, on condition that enough carbon is supplied through litterfall. The decline in tree growth, caused by the loss of Bc following soil acidification, also implies a loss of the potential to sequester carbon and retain nitrogen in the soil.
12.7 CONCLUSIONS

The validation of the ForSAFE model at HBEF-W6 shows that the model can be used with reasonable confidence to reconstruct past and predict future changes in the soil status at the site. The simulation shows a clear decline in the EBC pool in the soil, but fails to reproduce the effect of this decline on forest health. The results from the model show a clear loss of alkalinity in the soil at HBEF-W6. This acidification of the soil can be linked to the elevated acidic deposition of sulfur and nitrogen. The model shows that the loss of Bc from the soil through leaching, as caused by sulfur deposition, has decreased substantially since the reduction of the latter. However, although the soil alkalinity will gradually recover in the future, it will remain far below the levels prior to the acidification, particularly in the lower soil layers. The elevated foliage Bc content assumed produces the effect that the biogeochemical cycle of Bc favours the retention of Bc in the upper layers of the soil. This effect is in accordance with the findings of Schroth et al. (2007). Because of the current limitations in the model related to nutrient balances in the tree species present at the site, the model predicts no decline in tree biomass due to the acidification of the soil in the short and medium term. The measured trends of tree biomass, as well as results from a calcium application experiment on an adjacent watershed, show that negative effects on tree health due to soil acidification can already be seen. Although erroneous, the predicted growth trend shows a substantial potential for carbon sequestration in the tree biomass, and in the soil, if not hampered by nutrition imbalances caused by deficits in Bc.
ACKNOWLEDGEMENTS

This work was financed by the Hubbard Brook Research Foundation. The authors want to give special thanks to Charley Driscoll, Andrew Friedland, Tim Fahey, Win Johnson and David Sleeper for their help and support for this work. Amy Bailey and Chris Johnson kindly provided valuable data and advice. Limin Chen and John Aber kindly shared the data used for the PnET model. Geoff Wilsson is warmly acknowledged for his effort to finalise the report. Some data used in this publication were obtained by scientists of the Hubbard Brook Ecosystem Study; this publication has not been reviewed by those scientists. The Hubbard Brook Experimental Forest is operated and maintained by the Northeastern Research Station, US Department of Agriculture, Newtown Square, Pennsylvania.
ENDNOTES

1. http://www.hubbardbrook.org
2. http://www.hubbardbrook.org/data/dataset_search.php
REFERENCES

Aber, J.D. and Federer, C.A. (1992). A generalized, lumped-parameter model of photosynthesis, evapotranspiration and net primary production in temperate and boreal forest ecosystems. Oecologia. 92: 463–474.
Aber, J.D., Ollinger, S.V. and Driscoll, C.T. (1997). Modeling nitrogen saturation in forest ecosystems in response to land use and atmospheric deposition. Ecological Modelling. 101: 61–78.
Aber, J.D., Ollinger, S.V., Driscoll, C.T., Likens, G.E., Holmes, R.T., Freuder, R.J. and Goodale, C.L. (2002). Inorganic nitrogen losses from a forested ecosystem in response to physical, chemical, biotic, and climatic perturbations. Ecosystems. 5: 648–658.
Alveteg, M. (1998). Dynamics of Forest Soil Chemistry. PhD thesis, Department of Chemical Engineering II, Lund University, Lund, Sweden.
Alveteg, M., Sverdrup, H. and Warfvinge, P. (1995). Regional assessment of the temporal trends in soil acidification in southern Sweden, using the SAFE model. Water, Air and Soil Pollution. 85: 2509–2514.
Belyazid, S. (2006). Dynamic Modeling of Biogeochemical Processes in Forest Ecosystems. PhD thesis, Lund University, Lund, Sweden.
Belyazid, S., Westling, O. and Sverdrup, H. (2006). Modelling changes in forest soil chemistry at 16 Swedish coniferous forest sites following deposition reduction. Environmental Pollution. 144(2): 596–609.
Hawley, G.J., Schaberg, P.G., Eagar, C. and Borer, C.H. (2006). Calcium addition at the Hubbard Brook Experimental Forest reduced winter injury to red spruce in a high-injury year. Canadian Journal of Forest Research. 36: 2544–2549.
Huntington, T.G., Ryan, D.F. and Hamburg, S.P. (1988). Estimating soil nitrogen and carbon pools in a northern hardwood forest ecosystem. Soil Science Society of America Journal. 52: 1162–1167.
Juice, S.M., Fahey, T.J., Siccama, T.G., Driscoll, C.T., Denny, E.G., Eagar, C., Cleavitt, N.L., Minocha, R. and Richardson, A.D. (2006). Response of sugar maple to calcium addition to northern hardwood forest. Ecology. 87(5): 1267–1280.
Kimmins, J.P. (1997). Forest Ecology: A Foundation for Sustainable Management. Prentice Hall, Upper Saddle River, NJ.
Nilsson, J. and Grennfelt, P. (Eds) (1988). Critical Loads for Sulphur and Nitrogen: Report of the Skokloster Workshop. Miljörapport 15. Nordic Council of Ministers, Copenhagen.
Schroth, A.W., Friedland, A.J. and Bostick, B.C. (2007). Macronutrient depletion and redistribution in soils under conifer and northern hardwood forests. Soil Science Society of America Journal. 71: 457–468.
Sverdrup, H. and Warfvinge, P. (1993). Soil acidification effect on growth of trees, grasses and herbs, expressed by the (Ca+Mg+K)/Al ratio. Reports in Environmental Engineering and Ecology. 2(93): 1–165.
Wallman, P., Svensson, M., Sverdrup, H. and Belyazid, S. (2005). ForSAFE: an integrated process-oriented forest model for long-term sustainability assessments. Forest Ecology and Management. 207: 19–36.
Wallman, P., Belyazid, S., Svensson, M. and Sverdrup, H. (2006). DECOMP: a semi-mechanistic model of litter decomposition. Environmental Modelling and Software. 21: 33–44.
Walse, C., Berg, B. and Sverdrup, H. (1998). Review and synthesis of experimental data on organic matter decomposition with respect to the effects of temperature, moisture and acidity. Environmental Review. 6: 25–40.
Warfvinge, P. and Sverdrup, H. (1995). Critical loads of acidity to Swedish forest soils: methods, data and results. Reports in Environmental Engineering and Ecology. 5(95): 1–104.
FURTHER READING

Holmqvist, J., Thelin, G., Rosengren, U., Stjernquist, I., Wallman, P. and Sverdrup, H. (2002). Assessment of sustainability in the Asa Forest Park. In: Sverdrup, H. and Stjernquist, I. (Eds). Developing Principles and Models for Sustainable Forestry in Sweden. Kluwer Academic Publishers, Dordrecht, pp. 21–32.
Sverdrup, H. and Warfvinge, P. (1995). Critical loads of acidity for Swedish forest ecosystems. Ecological Bulletins. 44: 75–89.
Sverdrup, H., Warfvinge, P., Janicki, A., Morgan, R., Rabenhorst, M. and Bowman, R. (1992). Mapping critical loads and steady state stream chemistry in the state of Maryland. Environmental Pollution. 77: 195–203.
Sverdrup, H., Belyazid, S., Nihlgård, B. and Ericsson, L. (2007). Modelling changes in ground vegetation response to acid and nitrogen pollution, climate change and forest management in Sweden, 1500–2100 AD. Water, Air and Soil Pollution Focus. 7: 163–179.
PART V Air Quality Modelling and Sensitivity Analysis
CHAPTER 13

The Origins of 210Pb in the Atmosphere and its Deposition on and Transport through Catchment Lake Systems

Peter G. Appleby and Gayane Piliposian
13.1 INTRODUCTION

To quite a remarkable extent we may regard lake sediments as natural tape recorders of environmental change. Sediments laid down on the bed of the lake carry a rich diversity of biological and chemical information about the contemporaneous environment. The processes by which this occurs are illustrated in Figure 13.1a. The burial of older sediments under more recent deposits results in the creation of a natural environmental archive that may include diatom frustules from the water column, eroded soils from the catchment, pollen grains from the local vegetation, and atmospheric pollutants emitted by regional industries. Examples of microscopic remains found in lake sediments are shown in Figure 13.1b. The information contained in these archives is of central importance to a wide range of environmental programmes used in reconstructions of local and regional environmental histories: for example,

• assessing changing erosion rates in a catchment arising from disturbances such as afforestation, deforestation or changing agricultural practice;
• determining the history of changes in lake water quality associated with problems such as lake eutrophication or "acid rain"; or
• monitoring atmospheric pollution by heavy metals (Pb, Hg), organic pollutants (PCBs, PAHs) and radioactive emissions from nuclear installations.
Other natural archives of environmental information besides lake sediments include peat bogs and polar ice sheets. The potentially high quality of lake sediment records is shown in the plot of artificial radionuclides in cores from Windermere, a lake in north-west England (Figure 13.2). The 1985 core has just one peak, recording the 1963 fallout maximum from the atmospheric testing of nuclear weapons. The 1992 core has acquired a second peak, recording fallout from the 1986 Chernobyl accident in the Ukraine.

Figure 13.1: Environmental records in lake sediments showing: (a) the processes by which they are formed; and (b) examples of microscopic remains, including diatom frustules (used in assessments of lake water quality) and spheroidal carbonaceous particles (used as indicators of atmospheric pollution).

Figure 13.2: Records of the artificial radionuclide 137Cs in two sediment cores from Lake Windermere (English Lake District) collected in (a) 1985 and (b) 1992. Whereas both cores have peaks recording the 1963 fallout maximum from the atmospheric testing of nuclear weapons, the later core has a second peak recording fallout from the 1986 Chernobyl accident.

Human activities have released toxic substances into the environment since ancient times. The historical dimension has been illustrated by studies of a peat bog in the Jura region of Switzerland (Shotyk et al., 1996), which have revealed a record of lead pollution in Europe spanning more than 2000 years. A significant increase in pollution levels during the classical Graeco-Roman period reached a maximum value at about AD 200. Following the fall of the Roman empire in the West they declined to pre-classical values in the latter centuries of the first millennium, before increasing once again during the medieval period, and dramatically so during the past 200 years. The global extent of anthropogenic pollution is illustrated by the record of 137Cs fallout in a sediment core from Emerald Lake, Signy Island, Antarctica (Appleby et al., 1995). High concentrations at a depth of about 1.5 cm were dated by 210Pb to the mid-1960s, the time of peak 137Cs deposition from the atmospheric testing of nuclear weapons.

Modelling of Pollutants in Complex Environmental Systems, Volume II, edited by Grady Hanrahan. © 2010 ILM Publications, a trading division of International Labmate Limited.
13.1.1 Interpretation of sediment records: dating and modelling

In interpreting sediment records, it is essential to date the sediments accurately, and to model the extent to which the records may have been modified by transport processes through the atmosphere, catchment and water column. Accurate dating may be likened to finding the correct playback speed of the recording. Among the most important means of dating is the decay of radionuclides contained in the sediment record. The two principal methods are 14C (half-life 5730 years), used for dating on timescales ranging from 500 to 40 000 years, and 210Pb (half-life 22.3 years), used for dating on timescales ranging from 0 to 150 years. Artificial radionuclides, such as 137Cs and 241Am due to fallout from the atmospheric testing of nuclear weapons, can provide invaluable supporting evidence. Fallout of these radionuclides began in 1954, and reached a peak in 1963. Where the sediments accurately record atmospheric fallout, depths corresponding to these dates can be identified stratigraphically and used to validate or correct dates determined by radiometric decay. More recently, fallout from the Chernobyl reactor accident has been used to identify the 1986 level.

Dating recent sediments spanning the period of time during which anthropogenic impacts have been at their most intense is most usually done via 210Pb. This is a natural radionuclide that occurs in most soils and sediments as a member of the 238U decay series (Figure 13.3). In a closed system, after a sufficiently long period of time, each radionuclide in this series reaches an equilibrium value with its immediate precursor. Disequilibrium between 210Pb and its first long-lived precursor 226Ra is caused by the mobility of the intermediate gaseous radionuclide 222Rn. In soils, a fraction of the 222Rn atoms recoil into the interstices of the soil and then escape into the atmosphere, where they decay through a series of short-lived radionuclides to 210Pb (Figure 13.4). This is removed from the atmosphere by precipitation or dry deposition, falling onto the land surface or into lakes and oceans. 210Pb falling directly into lakes is scavenged from the water column and deposited on the bed of the lake with the sediments. Fallout 210Pb accumulating in lake sediments acts as a natural clock, its concentration decaying with time in accordance with the radioactive decay law

C_Pb(t) = C_Pb(0) e^(−λt) + C_Ra (1 − e^(−λt))   (13.1)

where λ is the 210Pb decay constant. By measuring the present-day 210Pb concentration C_Pb(t) and the 226Ra concentration C_Ra, this equation can be used to date the different layers of sediment, provided that
Figure 13.3: The 238U decay series. In a closed system, 210Pb will achieve equilibrium with its long-lived precursor 226Ra after around 150 years. Disequilibrium in open systems is caused by mobility of the intermediate gaseous radionuclide 222Rn. The main steps in the series are: 238U (4.51 × 10^9 years) → 226Ra (1602 years) → 222Rn (3.82 days) → 210Pb (22.26 years) → 206Pb.

Figure 13.4: The 210Pb cycle. A fraction of 222Rn atoms ejected into the interstices of soils as a consequence of 226Ra decay escape to the atmosphere, where they in turn decay to highly reactive 210Pb atoms. These become attached to aerosol particles that are removed back to the earth's surface by wet or dry deposition.
reliable estimates can be made of the initial 210Pb activity in each layer, C_Pb(0), at the time of its formation. To achieve this precisely, it is essential to develop accurate models of the 210Pb cycle, starting from its generation by 226Ra decay in soils. In addition to 210Pb's role as a dating tool, it is worth noting that it can also be used as a natural tracer in studies of atmospherically delivered pollutants, such as trace metals and persistent organic pollutants, that have experienced similar transport processes. We can divide the 210Pb cycle into four key stages:

1. radon emanation from soils and its diffusion into the atmosphere;
2. 210Pb production, transport in and fallout from the atmosphere;
3. transport of 210Pb by terrestrial and aquatic pathways to the bed of the lake; and
4. burial within the sediment record.

The purpose of this chapter will be to examine each of these stages and their significance for the use of 210Pb, both as a tracer and as a dating tool.
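Before turning to those stages, note that equation (13.1) can be inverted to give the age of a layer directly: t = (1/λ) ln[(C_Pb(0) − C_Ra)/(C_Pb(t) − C_Ra)]. A minimal sketch in Python, using illustrative activities rather than measured data:

```python
import math

# 210Pb decay constant (half-life 22.26 years, as in Figure 13.3)
LAMBDA_PB = math.log(2) / 22.26  # per year

def pb210_age(c_meas, c0, c_ra):
    """Age (years) of a sediment layer from equation (13.1), solved for t."""
    return math.log((c0 - c_ra) / (c_meas - c_ra)) / LAMBDA_PB

# Illustrative values (Bq/kg): initial total activity 200, supported
# activity from in-situ 226Ra 25, present-day measured activity 112.5.
# The unsupported component has halved, so the age is one half-life.
age = pb210_age(112.5, 200.0, 25.0)  # ~22.3 years
```

The hard part in practice is not this inversion but estimating C_Pb(0) reliably, which is the task of the models developed below.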
13.2 222Rn PRODUCTION AND DIFFUSION INTO THE ATMOSPHERE

The principal mechanism by which 222Rn atoms enter the interstices of soils is by recoil following ejection of alpha particles produced by 226Ra decay. Since this can occur only for 226Ra atoms near the surfaces of soil particles, the fraction of 222Rn atoms that escape by this means will depend on the specific surface area of the soil, and on the distribution of the parent radionuclide 226Ra, concentrations of which may have been adsorbed onto the surfaces of particles. Molecular diffusion within the solid soil matrix may also make a significant contribution in some situations. Although theoretical analyses (Flügge and Zimens, 1939; Jost, 1960) indicate escape fractions by this process of less than 1% in all but the finest-grained materials, measurements of radon emanation from powdered minerals have suggested much higher values, possibly because of diffusion through a network of nanopores within the grain crystals.

Modelling this process, if we write C_Ra for the 226Ra activity per unit volume of soil, the rate of production of 222Rn activity per unit volume will be λ_Rn C_Ra, where λ_Rn is the 222Rn decay constant. If f_e denotes the fraction escaping into the interstices, then the rate of production of interstitial 222Rn activity per unit volume will be

F = f_e λ_Rn C_Ra   (13.2)

If we suppose that the transport of interstitial 222Rn occurs mainly by Fickian diffusion, it is readily shown that the interstitial 222Rn activity per unit volume, denoted here by C, will then be controlled by the equation

∂C/∂t = −λ_Rn C + f_e λ_Rn C_Ra + div(D∇C)   (13.3)

where D is a diffusion coefficient for 222Rn through soil. The value of D will depend on the molecular diffusivity of 222Rn through the interstitial fluid (water and/or air) and on the porosity of the soil. Although 222Rn concentrations on short timescales may vary significantly with the time of year and degree of saturation of the soil, mean concentrations over the longer timescales appropriate to 210Pb studies are likely to satisfy the steady-state form of this equation. Assuming a homogeneous soil and a simple one-dimensional variation of the interstitial 222Rn activity with depth z below the soil surface, and that the surface concentration is effectively zero, the solution to this equation may be written

C = f_e C_Ra (1 − e^(−√(λ_Rn/D) z))   (13.4)

The total 222Rn concentration in the soil (including that retained in the particles) will be

C_Rn = C_Ra (1 − f_e e^(−√(λ_Rn/D) z))   (13.5)

The flux of 222Rn across the soil surface to the atmosphere will be

F = D (∂C/∂z)|_(z=0) = f_e C_Ra √(λ_Rn D)   (13.6)
Factors controlling the 222Rn emanation rate are thus the 226Ra concentration of the soil, the 222Rn escape fraction, and the effective diffusivity of interstitial 222Rn through the soil. Measurements of 226Ra activity in soils show that it is typically in the range 30–50 Bq kg⁻¹. Measurements of the escape fraction give widely varying figures, but suggest a mean value of around 10%. Since the diffusivity depends strongly on the water content of the soil, its value is likely to vary substantially during the course of a year; a best estimate for a reasonably dry soil is 0.06–0.08 cm² s⁻¹. Calculations using these values suggest 222Rn emanation rates in the range 900–1800 Bq m⁻² d⁻¹. These values are reasonably consistent with direct measurements of 222Rn emanation rates from soils: data from a wide range of sites reported in the literature yielded a mean value of 1234 Bq m⁻² d⁻¹. Additional sources of atmospheric 222Rn include emissions from groundwater and, to a much lesser extent, from the oceans, although these sources are unlikely to amount to more than around 2% of the emissions from soils.

An alternative approach to the determination of 222Rn emanation rates is based on measurements of 222Rn in the atmosphere. In the interior of a large land mass, the rate of supply of 222Rn into the base of a vertical column of air by emanation from the land surface is, to a good approximation, balanced by the rate of loss of 222Rn from the column by radioactive decay, so the emanation rate can be estimated directly from the atmospheric 222Rn inventory. Using published data on vertical 222Rn profiles at sites in continental USA and Eurasia, Piliposian and Appleby (2003) estimated the mean 222Rn flux from the land surface to be 1570 Bq m⁻² d⁻¹. This figure is relatively consistent with the emissions data and, in spite of the large uncertainties, may be regarded as a reasonable estimate of the mean exhalation rate from exposed land surfaces free of ice cover.
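The emanation flux of equation (13.6) can be evaluated directly from these figures. In the sketch below the soil bulk density is an added assumption (the text quotes the 226Ra activity per unit mass, while the equation needs it per unit volume):

```python
import math

C_RA_MASS = 35.0   # 226Ra activity of soil, Bq/kg (text range: 30-50)
RHO = 1400.0       # soil bulk density, kg/m3 (assumed; not given in the text)
F_E = 0.10         # 222Rn escape fraction (text: around 10%)
D = 0.07e-4        # diffusivity, m2/s (text range: 0.06-0.08 cm2/s)
LAMBDA_RN = math.log(2) / (3.82 * 86400)  # 222Rn decay constant, per second

c_ra = C_RA_MASS * RHO                         # 226Ra activity, Bq per m3 of soil
flux = F_E * c_ra * math.sqrt(LAMBDA_RN * D)   # equation (13.6), Bq m-2 s-1
flux_daily = flux * 86400                      # ~1600 Bq m-2 d-1
```

Varying the inputs across the quoted ranges reproduces the 900–1800 Bq m⁻² d⁻¹ spread given in the text.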
13.3 PRODUCTION AND FALLOUT OF ATMOSPHERIC 210Pb

222Rn in the atmosphere decays via a number of short-lived isotopes to 210Pb. Measurements of 210Pb concentrations in air indicate that it has a relatively short residence time in the atmosphere. Individual 210Pb atoms become readily attached to airborne particulate material and are removed both by washout and by dry deposition. The global balance of 222Rn and 210Pb in the atmosphere is illustrated in Figure 13.5.

Figure 13.5: The global balance of 222Rn and 210Pb in the atmosphere. The 222Rn inventory Q_Rn is balanced by the exhalation rate (A_L F) from soils and the rate of decay to 210Pb. The 210Pb inventory Q_Pb is balanced by the rate of production by 222Rn decay and the rate of loss (A_E P) by fallout back to the earth's surface.

If F denotes the mean global flux of 222Rn from the earth's surface, the mean supply rate to the atmosphere as a whole will be A_L F, where A_L is the effective area of the earth's land surface contributing to this flux. Assuming the atmosphere as a whole to be in a state of near equilibrium (when measured on timescales of a year or more), as the supply rate will be balanced by the rate of decay, the global 222Rn inventory in the atmosphere will be given by the equation

Q_Rn = A_L F / λ_Rn   (13.7)

where λ_Rn is the 222Rn decay constant. The global 210Pb production rate will be λ_Pb Q_Rn, where λ_Pb is the 210Pb decay constant. Since the rate of loss of atmospheric 210Pb by radioactive decay will be λ_Pb Q_Pb, where Q_Pb is the global inventory, the rate of loss by fallout to the earth's surface must satisfy the balance equation

A_E P = λ_Pb (Q_Rn − Q_Pb)   (13.8)

where P is the mean global 210Pb flux and A_E is the total area of the earth's surface. Assuming that the 210Pb fallout rate is directly proportional to the inventory Q_Pb, so that A_E P = k Q_Pb, where k is a constant of proportionality, it follows that the mean global 210Pb inventory is

Q_Pb = λ_Pb Q_Rn / (λ_Pb + k)   (13.9)

and that the mean global flux of 210Pb is therefore

P = (1/A_E) (k λ_Pb / (λ_Pb + k)) (A_L F / λ_Rn) = (A_L / A_E) (τ_Rn / (τ_Pb + τ)) F   (13.10)

where τ_Rn and τ_Pb are the radioactive half-lives of 222Rn and 210Pb, and τ = ln 2 / k is the geochemical half-life of 210Pb in the atmosphere. Since estimates using a variety of different methods (Carvalho, 1995; Moore et al., 1973; Poet et al., 1972) suggest that the geochemical half-life is measured in days, and so is negligible compared with the radioactive half-life, to a close approximation this equation reduces to

P = (A_L / A_E) (τ_Rn / τ_Pb) F   (13.11)
Assuming a mean 222Rn flux of ~1570 Bq m⁻² d⁻¹ from the c. 76% of the earth's land surface free of ice sheets and permafrost, and ~10 Bq m⁻² d⁻¹ from the oceans, and given that ~70% of the ice-free land is in the Northern Hemisphere, the mean 210Pb flux from the atmosphere is calculated to be ~83 Bq m⁻² yr⁻¹ in the Northern Hemisphere and ~38 Bq m⁻² yr⁻¹ in the Southern Hemisphere. These values, based on global mass balance considerations, are relatively consistent with our admittedly incomplete global database on 210Pb fallout, which suggests a mean 210Pb flux of ~70 Bq m⁻² yr⁻¹ in the Northern Hemisphere and ~41 Bq m⁻² yr⁻¹ in the Southern Hemisphere.
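These hemispheric figures can be reproduced from equation (13.11). In the sketch below the land/ocean area fractions are rough assumptions of my own, not values quoted in the text:

```python
TAU_RN = 3.82             # 222Rn half-life, days
TAU_PB = 22.26 * 365.25   # 210Pb half-life, days
F_LAND, F_OCEAN = 1570.0, 10.0  # 222Rn fluxes, Bq m-2 d-1

# Assumed geography: ~29% of the earth's surface is land, ~76% of it ice-free,
# with ~70% of the ice-free land in the Northern Hemisphere.
land_frac = 0.29 * 0.76
nh_land = 2 * 0.70 * land_frac  # fraction of the NH surface that is ice-free land
sh_land = 2 * 0.30 * land_frac

def pb_flux(frac_land):
    """Equation (13.11) applied hemisphere by hemisphere, Bq m-2 yr-1."""
    f_mean = frac_land * F_LAND + (1 - frac_land) * F_OCEAN
    return (TAU_RN / TAU_PB) * f_mean * 365.25

p_north = pb_flux(nh_land)  # ~84 Bq m-2 yr-1
p_south = pb_flux(sh_land)  # ~37 Bq m-2 yr-1
```

With these assumed fractions the calculation lands within a few per cent of the ~83 and ~38 Bq m⁻² yr⁻¹ quoted above.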
13.3.1 Spatial distribution of 210Pb fallout

Since the factors governing 210Pb production and deposition do not change significantly from year to year, fallout at any given site can normally be assumed to be constant when measured on timescales of a year or more. The flux may, however, vary spatially by up to an order of magnitude, depending on factors such as rainfall and geographical location. Figure 13.6 plots measured values of the mean annual 210Pb flux (normalised against rainfall) against longitude at sites in North America, Europe, Asia, and also at Oceanic and Arctic sites remote from sources of 222Rn emission. At the remote sites, the 210Pb flux appears to have a baseline value of c. 30–40 Bq m⁻² yr⁻¹ per metre of rain. Over the major land masses there is a consistent west to east increase, from c. 50–80 Bq m⁻² yr⁻¹ at the western seaboard to as much as 200–270 Bq m⁻² yr⁻¹ at the eastern seaboard. This pattern can be explained in general terms as a consequence of the mean annual global circulation from west to east at northern mid-latitudes. For a more precise explanation, it is necessary to consider quantitative mathematical models of the vertical and horizontal distribution of 222Rn and 210Pb in the atmosphere. Here we consider a relatively simple model (Piliposian and Appleby, 2003), which nonetheless captures some of the essential features of the observed fallout data. Although three-dimensional global models have been developed, they are highly complex, computer intensive and, in view of the absence of data from large parts of the globe, difficult to validate.
Figure 13.6: Measured values of 210Pb fallout per metre of mean annual rainfall for sites in the Northern Hemisphere against longitude, summarised from data reported in the literature. Fallout varies from ~30 Bq m⁻² yr⁻¹ at mid-ocean sites to more than 200 Bq m⁻² yr⁻¹ at the eastern margin of the Eurasian land mass. The solid markers indicate terrestrial sites and the open markers oceanic or arctic sites.

13.3.2 Vertical distribution of 222Rn in the atmosphere
The distribution of 222Rn gas entering the atmosphere via exhalation from the land surface is controlled by processes of advection, diffusion and radioactive decay. On short timescales (hours or days), these processes can be described with any accuracy only by using detailed three-dimensional models, although, as indicated above, these models suffer from two major disadvantages: they are very computer intensive, and there are very few data against which they can be validated. On longer timescales (time-averaged over months or years), results obtained by a simple model in which conservation principles are applied to vertical columns of air moving horizontally over the earth's surface, with little net transfer between adjacent columns (Jacobi and André, 1963; Piliposian and Appleby, 2003; Turekian et al., 1977), have been shown to be consistent with much of the available empirical data. Since 222Rn is an inert gas it can be removed from the column only by radioactive decay. Assuming that the vertical distribution within the column is controlled mainly by turbulent diffusion, the 222Rn concentration with altitude z will be governed by the partial differential equation

∂C_Rn/∂t = ∂/∂z (D ∂C_Rn/∂z) − λ_Rn C_Rn   (13.12)

where C_Rn(z, t) denotes the 222Rn concentration (in Bq m⁻³) at altitude z and time t, D is an effective vertical diffusivity in the atmosphere, and λ_Rn is the 222Rn radioactive decay constant. The boundary conditions for this equation are

−D ∂C_Rn/∂z (0, t) = F,   C_Rn(z, t) → 0 as z → ∞   (13.13)

where F denotes the 222Rn flux (in Bq m⁻² d⁻¹) into the base of the column. For an air column moving over land, assuming a constant flux F, a constant diffusivity D, and zero initial concentration, the solution to this equation can be written

C(z, t) = (F/√(πD)) ∫₀ᵗ (e^(−λ_Rn s)/√s) e^(−z²/(4Ds)) ds   (13.14)

Figure 13.7a shows how the 222Rn concentration in the column develops over time, using the empirically determined values F = 1570 Bq m⁻² d⁻¹ and D = 2.7 km² d⁻¹ (Piliposian and Appleby, 2003). After about 15 days the profile reaches an equilibrium distribution

C_Rn = (F/√(λ_Rn D)) e^(−√(λ_Rn/D) z)   (13.15)

For a column moving over the ocean, the 222Rn flux into the base of the column is effectively zero, and the profile starts to decay. The initial and boundary conditions are now

C_Rn(z, 0) = C₀(z),   ∂C_Rn/∂z (0, t) = 0   (13.16)

where C₀(z) is the profile as the column crosses over the seaboard. The solution in this case is

C(z, t) = (e^(−λ_Rn t)/√(4πDt)) ∫₀^∞ C₀(ζ) [e^(−(z−ζ)²/(4Dt)) + e^(−(z+ζ)²/(4Dt))] dζ   (13.17)

Figure 13.7b shows how such a profile decays as it moves out over the ocean. Comparison with empirical results from mid-continental sites in North America (Bradley and Pearson, 1970; Jonassen and Wilkening, 1970; Larson and Hoppel, 1973; Moore et al., 1973; Wilkening, 1970) and Russia (Kirichenko, 1970; Nazarov et al., 1970), also plotted in Figure 13.7, and from a site in the Pacific near Hawaii (Larson, 1974), shows that the modelled profiles are reasonably consistent with the measured concentrations.

Figure 13.7: Development of a 222Rn profile as an air column moves over (a) a large land mass and (b) the ocean, calculated using the simple model described in Piliposian and Appleby (2003). Over land, the concentrations increase with time until an equilibrium distribution is reached, after a period of about 15 days. Over the ocean, the established profile decays by around two orders of magnitude over a period of around 10 days. Also shown are measured concentrations reported in the literature from inland sites (open circles) in (a) continental USA and (b) Eurasia, where the profiles are expected to be near equilibrium, and at a mid-ocean site (triangles) where the profile is likely to have been substantially depleted.
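The equilibrium profile of equation (13.15) is easy to evaluate with the parameter values used for Figure 13.7:

```python
import math

F = 1570.0                      # 222Rn flux into the column, Bq m-2 d-1
D = 2.7e6                       # effective vertical diffusivity, m2 d-1 (2.7 km2 d-1)
LAMBDA_RN = math.log(2) / 3.82  # 222Rn decay constant, per day

def c_rn(z):
    """Equilibrium 222Rn concentration (Bq m-3) at altitude z metres, eq. (13.15)."""
    return F / math.sqrt(LAMBDA_RN * D) * math.exp(-math.sqrt(LAMBDA_RN / D) * z)

c_surface = c_rn(0.0)                    # ~2.2 Bq m-3 at ground level
scale_height = math.sqrt(D / LAMBDA_RN)  # e-folding height, ~3.9 km
```

Both values sit comfortably within the range of the measured inland profiles plotted in Figure 13.7a.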
13.3.3 Vertical distribution of 210Pb in the atmosphere

The processes controlling the vertical distribution of 210Pb in the atmosphere are more complicated in that, as well as turbulent diffusion, they also include the creation of 210Pb by 222Rn decay and the loss of 210Pb by fallout onto the earth's surface. The governing differential equation can be written

∂C_Pb/∂t = ∂/∂z (D ∂C_Pb/∂z) + λ_Pb (C_Rn − C_Pb) − Λ(C_Pb)   (13.18)

where D is the turbulent diffusivity and Λ(C_Pb) is a term characterising the rate at which 210Pb condenses from the aerosol state (dominated by turbulent diffusion) to incipient precipitation (dominated by gravity). As there is zero diffusive flux at ground level, the boundary conditions are

∂C_Pb/∂z (0, t) = 0,   C_Pb(z, t) → 0 as z → ∞   (13.19)

Assuming that the condensation of 210Pb takes place principally in the troposphere, and that the rate at which it does so is proportional to the 210Pb concentration, the term Λ(C_Pb) can be written

Λ(C_Pb) = kC_Pb for z ≤ z₁,   Λ(C_Pb) = 0 for z > z₁   (13.20)

where z₁ is the height of the tropopause. The reciprocal of the parameter k characterises the residence time of 210Pb in the troposphere. Estimates using a variety of different methods (Piliposian and Appleby, 2003) suggest a residence time of around 5 days (k = 0.2 d⁻¹). Figure 13.8a shows the development of a 210Pb profile in the atmosphere for a column of air moving over land. It also plots measured profiles from sites in the continental USA and the Asian/Pacific boundary, where such profiles are likely to be relatively fully developed. Figure 13.8b shows the decay of a profile that has developed over the land during a period of 20 days as it then moves out over the ocean. Although it is difficult to connect the measured profiles with specific theoretical profiles corresponding to a particular number of days' travel, there is a general level of agreement between the empirical and theoretical results.

Figure 13.8: Development of a 210Pb profile in an air column moving over (a) a continental land mass and (b) the ocean, calculated using the simple model described in Piliposian and Appleby (2003). Also shown are measured profiles from continental USA, the Asian/Pacific boundary, and the mid-Pacific, reported in the literature.
13.3.4 Flux of 210Pb from the atmosphere

Assuming that 210Pb condensing out of the troposphere is quickly removed by wet or dry deposition, the flux of 210Pb from a vertical column of air can be calculated using the formula

P = k A^T_Pb   (13.21)

where

A^T_Pb = ∫₀^z₁ C_Pb(z, t) dz   (13.22)

is the inventory in the troposphere (and z₁ is the height of the tropopause). Figure 13.9 shows the total and tropospheric inventories for a column of air moving over (a) a large land mass and (b) a large ocean. For the column moving over a land mass, the 210Pb flux (Figure 13.9c) develops from a low value at the initial point to a near-equilibrium value of more than 240 Bq m⁻² yr⁻¹ after a period of 30 days. Moving out over the ocean, the 210Pb flux declines to a value of around 20 Bq m⁻² yr⁻¹ after a period of 25 days (Figure 13.9d).
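A single-box sketch captures the behaviour behind these figures. If the whole column is treated as subject to washout at rate k (a simplification of the model above, which confines removal to the troposphere), the 210Pb column inventory obeys dQ_Pb/dt = λ_Pb Q_Rn − (k + λ_Pb) Q_Pb and the fallout flux is P = k Q_Pb:

```python
import math

F = 1570.0                                  # 222Rn flux over land, Bq m-2 d-1
LAMBDA_RN = math.log(2) / 3.82              # per day
LAMBDA_PB = math.log(2) / (22.26 * 365.25)  # per day
K = 0.2                                     # washout rate, per day (5-day residence)

q_rn = F / LAMBDA_RN   # equilibrium 222Rn column inventory, Bq m-2 (cf. eq. 13.7 per unit area)
q_pb, dt = 0.0, 0.1
for _ in range(int(30 / dt)):  # 30 days over land, explicit Euler steps
    q_pb += dt * (LAMBDA_PB * q_rn - (K + LAMBDA_PB) * q_pb)

flux_yr = K * q_pb * 365.25    # ~270 Bq m-2 yr-1 after 30 days
```

This one-box value sits a little above the ~240 Bq m⁻² yr⁻¹ of Figure 13.9c, as expected: in the full model the 210Pb held above the tropopause is not available for washout.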
Figure 13.9: Calculated values of the 210Pb inventory and flux against time for an air column moving over (a, c) a large land mass and (b, d) the ocean.
Comparisons between measured 210Pb profiles, inventories and fluxes and those determined from the above model suggest that, at northern mid-latitudes, the movement of air masses is predominantly from west to east with a mean global circulation velocity of ~4.6° longitude per day (Piliposian and Appleby, 2003). By applying the above calculations to such a column, it is possible to calculate the longitudinal variations in 210Pb fallout at northern mid-latitudes over the North American and Eurasian land masses. The results of these calculations, shown in Figure 13.10, indicate that the 210Pb flux has its lowest values (20–50 Bq m⁻² yr⁻¹) at the eastern margins of the Pacific and Atlantic, and its greatest values (170–240 Bq m⁻² yr⁻¹) at the eastern continental seaboards. These calculations also suggest that a significant fraction of 210Pb fallout in Western Europe is due to 222Rn emissions from North America, and that a significant fraction of 210Pb originating in 222Rn emissions from Eurasia is deposited in the Pacific.
Figure 13.10: Calculated values of the 210Pb flux against longitude for an air column moving from west to east at northern mid-latitudes. The modelled values shown by the solid curve are reasonably consistent with measured data from North American, Eurasian and Oceanic sites. The solid lines along the base of the graph indicate the locations of the North American and Eurasian land masses.

13.4 POST-DEPOSITIONAL TRANSPORT OF ATMOSPHERICALLY DEPOSITED 210Pb

Although much of the 210Pb deposited on the earth's land surface will be retained in situ in the surface layers of the soil, a fraction will be mobilised via runoff and erosion, entering water bodies such as rivers and lakes, where it may accumulate as part of the sediment record, along with 210Pb deposited directly from the atmosphere into these bodies. Where no redistribution takes place, the fallout 210Pb inventory A in a soil or sediment core, calculated by numerical integration of the measured activity versus depth record, will be related to the local atmospheric flux P by the equation

A = P / λ_Pb   (13.23)

where λ_Pb is the 210Pb radioactive decay constant. Deviations of the measured inventory from this nominal value can be used to measure the extent to which 210Pb has been mobilised by soil erosion, for example. Because of its important role in dating environmental records, our main interest in this final section will, however, be on post-depositional transport processes leading to the accumulation of fallout 210Pb on the bed of a lake, and its incorporation in the sediment record.
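For example, taking the Northern Hemisphere mean flux from Section 13.3, equation (13.23) gives the nominal inventory of an undisturbed core:

```python
import math

LAMBDA_PB = math.log(2) / 22.26  # 210Pb decay constant, per year
P = 83.0                         # mean NH atmospheric flux, Bq m-2 yr-1

inventory = P / LAMBDA_PB        # equation (13.23): ~2670 Bq m-2
```

A measured inventory well above or below this figure points to sediment focusing or erosion having redistributed the fallout.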
13.4.1
Transport of 210 Pb through catchment/lake systems
Figure 13.11 presents a simple schematic model of transport processes for fallout 210 Pb in a catchment/lake system. The radionuclide can enter the water column of a lake directly via fallout from the atmosphere onto the surface of the lake, or indirectly by transport from the catchment either in solution or attached to eroding particles. It can leave the water column via the outflow, or by burial in the sediment record. Fallout directly onto the surface of the lake can be written AL P, where P is the local atmospheric flux and AL is the lake area. Deposition on the catchment of the lake can similarly be written AC P, where AC is the area of the catchment. Since the atmospheric flux can be assumed to be constant when measured on annual timescales, it is reasonable to suppose that transport rates from the catchment to the lake will
Figure 13.11: Transport of fallout 210 Pb through a catchment/lake system. 210 Pb enters the water body directly through fallout onto the surface of the lake or indirectly via mobilisation of fallout onto the surface of the lake. A part of the input accumulates on the bed of the lake in the sediment record, the balance being lost from the lake through its outflow. Atmospheric flux ACP Atmospheric flux ALP
Catchment inventory QC(t)
ΨC(t) Transport from catchment
ΨO(t) Lake waters inventory Q(t)
To sediment record
Loss via outflow
THE ORIGINS AND TRANSPORT OF 210 PB AND ITS APPLICATIONS
397
normally be in a steady state and, being driven by the constant atmospheric flux, can be written C ¼ A C P
(13:24)
where is a catchment/lake transport coefficient. Total inputs to the water column of the lake can thus be written in ¼ AL P þ AC P ¼ AL ð1 þ ÆÞP
(13:25)
where Æ ¼ AC /AL is the catchment/lake area ratio. Losses from the water column will be controlled by the water column inventory Q. Losses via the outflow can be written O ¼
Q TW
(13:26)
where TW is the lake water residence (or turnover) time. Since transport to the bed of the lake occurs mainly on settling particles, the mean flux to the sediments can be written AL P ¼
fDQ TS
(13:27)
where f_D is the fraction of 210Pb in the water attached to particulates, and T_S is a typical settling time for particulates. Total losses from the water column (including those due to radioactive decay) can therefore be written

    Ψ_out = Q/T_W + f_D Q/T_S + λ_Pb Q = Q/T_L + λ_Pb Q        (13.28)

where T_L is a geochemical residence time for 210Pb in the water column, defined by

    1/T_L = 1/T_W + f_D /T_S                        (13.29)

Since T_L is in most cases measured in weeks or months, and so is small compared with the 210Pb half-life (22.26 years), we may generally suppose that

    Ψ_out = Q / T_L                                 (13.30)
Since the inputs are constant, over the long term the water column inventory may be supposed to be in a steady state, with total inputs being balanced by total losses, and so determined by the equation

    Q = T_L A_L (1 + εα) P                          (13.31)
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
The mean 210Pb flux (or supply rate) to the sediment record can thus be written

    P̄ = f_D Q / (T_S A_L) = F_Pb (1 + εα) P         (13.32)

where

    F_Pb = f_D T_L / T_S                            (13.33)
is a water column to bottom sediments transport coefficient. Results from a number of studies suggest that the bulk of 210 Pb fallout onto the catchment remains locked in catchment soils, and that catchment/lake transport processes deliver as little as 2% of annual fallout to the lake (Appleby et al., 2003; Dominik et al., 1987; Lewis, 1977; Scott et al., 1985). Further, calculations of the transport coefficient FPb suggest that, as 210 Pb in lake waters is relatively insoluble and strongly bound to particulates, the bulk of inputs to the water column are transferred to the sediment record (Appleby 1997). Except in lakes with large catchments, it seems likely therefore that the mean rate of supply of 210 Pb to the sediment record is dominated by the atmospheric flux. Even where this has been significantly modified by the transport processes, the above equation suggests that the supply rate to the sediment record will remain relatively constant, provided that there are no major changes in the catchment or to the hydrology of the lake. Fallout 210 Pb will not, however, be uniformly distributed over the bed of the lake. Catchment inputs may be retained near the points of entry to the lake, and processes such as sediment focusing may redistribute material away from the margins of the lake towards the deeper basins. Figure 13.12 shows the effect of these processes on the distribution of 210 Pb over the bed of Blelham Tarn, a small lake adjacent to Windermere in the English Lake District (Appleby et al., 2003). Very high sedimentation rates at one end of the lake adjacent to the point of entry of one of the input streams, and attributed to erosion from the catchment, resulted in localised 210 Pb supply rates nearly three times higher than the atmospheric flux. In other areas of the lake, direct atmospheric fallout dominated, although the distribution was clearly modified by sediment focusing.
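The steady-state budget described by Equations 13.24–13.33 can be illustrated with a short numerical sketch. All parameter values below are assumptions chosen for illustration only (the transport coefficient is set near the ~2% figure reported in the studies cited above):

```python
# Illustrative steady-state 210Pb budget for a catchment/lake system
# (Equations 13.24-13.33). All parameter values are assumed, not measured.
P = 150.0      # atmospheric flux, Bq m^-2 yr^-1 (assumed)
A_L = 1.0e5    # lake area, m^2 (assumed)
alpha = 5.0    # catchment/lake area ratio alpha = A_C / A_L (assumed)
eps = 0.02     # catchment/lake transport coefficient (~2% of catchment fallout)
T_W = 0.5      # lake water residence time, yr (assumed)
T_S = 0.05     # particle settling time, yr (assumed)
f_D = 0.9      # fraction of 210Pb attached to particulates (assumed)

T_L = 1.0 / (1.0 / T_W + f_D / T_S)         # Eq 13.29: geochemical residence time
Q = T_L * A_L * (1.0 + eps * alpha) * P     # Eq 13.31: steady-state inventory
F_Pb = f_D * T_L / T_S                      # Eq 13.33: transport coefficient
P_sed = F_Pb * (1.0 + eps * alpha) * P      # Eq 13.32: mean supply rate to sediments

print(round(F_Pb, 3), round(P_sed, 1))  # 0.9 148.5
```

With these assumed values F_Pb = 0.9, i.e. about 90% of the water-column input is transferred to the sediment record, consistent with the conclusion above that the supply rate to the sediments is dominated by the atmospheric flux.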
13.4.2 Application to 210Pb dating

210Pb in sediments accumulating on the bed of the lake includes both fallout 210Pb and supported 210Pb created by radioactive decay of in situ 226Ra contained in the sediment particles. Because of the greatly reduced mobility of 222Rn in saturated sediments caused by the much lower diffusivity, disequilibrium between 226Ra and the in situ produced 210Pb is negligible and, in most circumstances, supported 210Pb activity can be assumed to be equal to the 226Ra activity. Fallout 210Pb, usually called unsupported or excess 210Pb, is calculated by subtracting the 226Ra activity from the measured total 210Pb activity. Unsupported 210Pb activity in older sediment layers cut off from further
Figure 13.12: The distribution of fallout 210Pb over the bed of Blelham Tarn. The contour lines show the mean 210Pb supply rate determined from a suite of cores collected from the bed of the lake. The mean annual atmospheric flux was estimated to be 147 Bq m⁻² yr⁻¹. The very high supply rates at one end of the lake were associated with very high sedimentation rates caused by erosive inputs from the catchment.
exposure to fallout by more recently deposited material reduces with time in accordance with the radioactive decay equation

    C_uns(t) = C_uns(0) e^(−λ_Pb t)                 (13.34)
where t is the age of the sediment layer, C_uns(t) is its present activity, and C_uns(0) is the initial activity at the time of its formation. Figure 13.13 shows a typical plot of 210Pb activity against depth, in a 1997 sediment core from Windermere. In this core, total 210Pb activity (Figure 13.13a) reaches equilibrium with the supporting 226Ra at a depth of about 33 cm below the current sediment/water interface. In lakes where environmental conditions have remained relatively unchanged over many decades, sediment accumulation is likely to have been a uniform process. Sediments from different years are likely to have experienced the same initial conditions, and unsupported activity will decline exponentially with depth. The gradient of the decline will be a measure of the sediment accumulation rate (Appleby, 2001). The dramatic environmental changes that have taken place over the past 150 years are, however, such that, at many sites, sedimentation rates are likely to have varied significantly during this period of time. Where this has occurred, unsupported 210Pb activity will vary with depth in a more complicated way, as shown in Figure 13.13b. Although concentrations in older sediments deeper in the core are generally lower than those in younger sediments nearer to the sediment/water interface, changes in the initial concentrations and the net accumulation rate can result in significant deviations of the profile of activity against depth from a simple exponential relationship.
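Inverting Equation 13.34 gives the age of a layer directly from its activity. A minimal sketch (the activity values are hypothetical, chosen for illustration):

```python
import math

HALF_LIFE_PB = 22.26                    # 210Pb half-life, yr
LAMBDA_PB = math.log(2) / HALF_LIFE_PB  # decay constant, yr^-1

def layer_age(c_initial, c_now):
    """Age of a sediment layer from Eq 13.34: t = ln(C_uns(0)/C_uns(t)) / lambda."""
    return math.log(c_initial / c_now) / LAMBDA_PB

# Hypothetical activities (Bq kg^-1): a quartering of activity is two half-lives
print(round(layer_age(200.0, 50.0), 2))  # 44.52 yr
```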
Figure 13.13: Records of 210Pb fallout in a sediment core from Lake Windermere showing (a) total and supported and (b) unsupported 210Pb concentrations (Bq kg⁻¹, logarithmic scale) against depth (cm).
Since the wet bulk density of sediments generally increases with time as sediments become compacted under the weight of more recent deposits, depths in a core are often measured in terms of the cumulative dry mass, defined as

    m = ∫_0^x ρ dx                                  (13.35)
where ρ is the dry bulk density (the dry mass per unit wet volume), and x is the linear depth from the sediment/water interface. Using the dry mass depth measure, the distance between two layers is unaffected by compaction. Considering a layer of sediment currently of depth m, age t and thickness Δm deposited during a small time interval Δt, the amount of 210Pb originally deposited in this layer would have been P_S Δt, where P_S is the 210Pb supply rate at the core site at that time. Radioactive decay since then will have reduced this to a present-day value of P_S Δt e^(−λ_Pb t). Noting that Δm = r(t) Δt, where r(t) is the sedimentation rate t years ago, the present unsupported 210Pb concentration of sediments in this layer will thus be

    C_uns(m) = P_S Δt e^(−λ_Pb t) / Δm = (P_S / r(t)) e^(−λ_Pb t)        (13.36)

The most widely used model for dating sediments assumes that the 210Pb supply rate
at the core site has remained constant through time, regardless of any changes in the sedimentation rate. Where this assumption is valid, the 210Pb inventory in sediments below the layer of depth m satisfies the equation

    A(m) = ∫_m^∞ C_uns(m) dm = ∫_t^∞ P_S e^(−λ_Pb t) dt = (1/λ_Pb) P_S e^(−λ_Pb t)        (13.37)

Since the 210Pb inventory in the whole core is

    A(0) = ∫_0^∞ C_uns(m) dm = (1/λ_Pb) P_S         (13.38)

it follows that the unsupported 210Pb inventory below sediments of depth m will satisfy

    A(m) = A(0) e^(−λ_Pb t)                         (13.39)

Since A(m) and A(0) can both be calculated by numerical integration of the activity versus depth profile, this equation can be used to calculate the age t. This is called the CRS (constant rate of 210Pb supply) model (Appleby and Oldfield, 1978; Robbins, 1978). The mean 210Pb supply rate is calculated using the equation

    P_S = λ_Pb A(0)                                 (13.40)
Although the assumption underlying this model has been shown to be relatively well satisfied at many sites, there are also many cases where the local supply rate has varied significantly, even when the mean value for the lake as a whole has remained constant. Causes of local variations include changes to the pattern of sediment focusing, sediment slumps, and turbidity currents. Where this has occurred it may be necessary to apply the CRS model in a piecewise way to different sections of the core (Appleby, 2001). A detailed discussion of dating models is, however, outside the scope of this chapter.
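The CRS calculation in Equations 13.37–13.40 amounts to integrating the measured activity–depth profile from the bottom up. The sketch below is our own illustration, not code from the cited studies; the function name and the exponential-tail estimate for the inventory below the base of the core are assumptions. It is applied here to a synthetic core with constant supply and constant sedimentation rate:

```python
import math

LAMBDA_PB = math.log(2) / 22.26  # 210Pb decay constant, yr^-1

def crs_ages(m, c_uns):
    """CRS model: ages t(m) = ln(A(0)/A(m)) / lambda and the mean supply rate.

    m     : cumulative dry-mass depths, increasing from the interface
    c_uns : unsupported 210Pb activities at those depths
    """
    n = len(m)
    # Trapezoidal inventory increments between successive depths
    seg = [0.5 * (c_uns[i] + c_uns[i + 1]) * (m[i + 1] - m[i]) for i in range(n - 1)]
    A = [0.0] * n
    A[-1] = c_uns[-1] / LAMBDA_PB   # exponential-tail estimate below the core base
    for i in range(n - 2, -1, -1):  # A(m_i): inventory below depth m_i (Eq 13.37)
        A[i] = A[i + 1] + seg[i]
    ages = [math.log(A[0] / a) / LAMBDA_PB for a in A]  # Eq 13.39
    supply = LAMBDA_PB * A[0]                           # Eq 13.40
    return ages, supply

# Synthetic core: constant supply of 100 Bq m^-2 yr^-1 and a constant
# sedimentation rate of 1 dry-mass unit per year, so the true age at depth m is m
depths = [0.25 * i for i in range(241)]
activities = [100.0 * math.exp(-LAMBDA_PB * d) for d in depths]
ages, supply = crs_ages(depths, activities)
```

For this synthetic profile the recovered supply rate is very close to 100 Bq m⁻² yr⁻¹ and the recovered ages match the true ages; with real data, A(0) and A(m) come from numerical integration of the measured profile, as described in the text.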
13.4.3 Post-depositional redistribution
In the above discussion, it has been assumed that sediment records remain intact following burial by later deposits. In practice, fallout 210 Pb delivered to the bed of the lake may be redistributed within the sediment column in the short term by physical or biological mixing at or near the sediment/water interface, or over the long term by chemical diffusion or advection within the pore waters. The effects of physical mixing,
which typically results in a flattening of the profile of 210 Pb activity against depth towards the top of the core, have been documented in many studies, and a number of models have been developed to take account of it (Robbins et al., 1977; Oldfield and Appleby, 1984). Chemical diffusion is more difficult to detect, although the observed maintenance of very steep concentration gradients in many cores with very low sedimentation rates suggests that it is not significant in the majority of cases. As other processes can have similar effects on sediment records, an essential problem in modelling post-depositional redistribution is the difficulty of making an objective determination of the mixing parameters. Where there is any doubt, the possibility that 210 Pb profiles might have been influenced by post-depositional remobilisation can be tested by comparing 210 Pb dates with those determined by independent dating methods, such as stratigraphic records of artificial radionuclides such as 137 Cs or 241 Am (Appleby et al., 1991). The excellent agreement between 210 Pb dates and stratigraphic dates in a large number of cases suggests that, in most circumstances, this is not a significant problem.
13.5 DISCUSSION
A clear understanding of the processes by which 210 Pb enters the atmosphere, is transported across continents, and is deposited on the landscape is crucial to its role in a wide range of environmental studies. In spite of the fact that it has now been used for dating environmental records stored in thousands of lake sediment cores, it is unlikely that this procedure will ever be totally routine. Although the widely used CRS model has been shown to have some theoretical justification when applied to a lake as a whole, the complexities of transport processes are such that it cannot be presumed to apply to any individual core without independent validation. As this is particularly so at sites where there are large differences between the atmospheric flux and the supply rate to the sediments, a good knowledge of the atmospheric flux is crucial to reliable dating. At sites where there is a large difference, it is unlikely that the mean 210 Pb supply rate to the core site is dominated by the atmospheric flux. Reasons for this could include high rates of surface soil erosion from the catchment, or post-depositional redistribution of sediments over the bed of the lake. Discrepancies between 210 Pb dates and those determined from well-defined chronostratigraphic features are evidence of changes in the supply rate. Such changes can be highly localised, as in the case of a sediment slump, or more general in the case of a major disruption to the lake or its catchment. Each 210 Pb record must be assessed for its consistency with the assumptions of the dating model, and corrections made wherever there are significant deviations from those assumptions. Although the main application of 210 Pb has been to dating, because of its welldefined origins and unique signature, it also has considerable value as a tracer for studying transport processes via both atmospheric and terrestrial pathways. 
Measurements of 222 Rn exhalation rates and 210 Pb fluxes can be used to validate models of atmospheric transport over large distances and long timescales. Calculations using the simple model outlined above have been used to estimate long-range intercontinental
transport rates for 210 Pb, and fallout rates at different locations across large land masses. In the future it is planned to develop regional models capable of estimating fallout at specific sites. Once validated by 210 Pb, such models can be used to study transport processes for other pollutants with similar chemical characteristics. The large and rapidly increasing dataset on 210 Pb records in sediments, coupled with good estimates of 210 Pb fallout either from direct measurements or from atmospheric models, can similarly be used to validate catchment/lake transport models. As environmental records stored in natural archives such as lake sediments have usually been modified by mediating transport processes from the catchment and through the water column, such models are essential in using these records to make accurate historical reconstructions of recent environmental change.
ACKNOWLEDGEMENTS The studies presented in this chapter were funded by many different bodies, including the European Union EMERGE (Contract No. EVK1-CT 1999-00) and Transuranics (Contract No. F14PCT960046) projects, and their support is gratefully acknowledged, together with that of the many colleagues who have contributed to the work of the Liverpool University Environmental Radioactivity Research Centre over many years.
REFERENCES

Appleby, P.G. (1997). Sediment records of fallout radionuclides and their application to studies of sediment–water interactions. Water, Air & Soil Pollution. 99: 573–586.
Appleby, P.G. (2001). Chronostratigraphic techniques in recent sediments. In: Last, W.M. and Smol, J.P. (Eds). Tracking Environmental Change Using Lake Sediments. Volume 1: Basin Analysis, Coring, and Chronological Techniques. Kluwer Academic, Dordrecht, pp. 171–203.
Appleby, P.G. and Oldfield, F. (1978). The calculation of 210Pb dates assuming a constant rate of supply of unsupported 210Pb to the sediment. Catena. 5: 1–8.
Appleby, P.G., Richardson, N. and Nolan, P.J. (1991). 241Am dating of lake sediments. Hydrobiologia. 214: 35–42.
Appleby, P.G., Jones, V.I. and Ellis-Evans, J.C. (1995). Radiometric dating of lake sediments from Signy Island (maritime Antarctic): evidence of recent climatic change. Journal of Paleolimnology. 13: 179–191.
Appleby, P.G., Haworth, E.Y., Michel, H., Short, D.B., Laptev, G. and Piliposian, G.T. (2003). The transport and mass balance of fallout radionuclides in Blelham Tarn, Cumbria (UK). Journal of Paleolimnology. 29: 459–473.
Bradley, W.E. and Pearson, J.E. (1970). Aircraft measurements of the vertical distribution of radon in the lower atmosphere. Journal of Geophysical Research. 75: 5890–5894.
Carvalho, F.P. (1995). Origins and concentrations of 222Rn, 210Pb, 210Bi and 210Po in surface air at Lisbon, Portugal, at the Atlantic edge of the European continental landmass. Atmospheric Environment. 29: 1809–1819.
Dominik, J., Burns, D. and Vernet, J.-P. (1987). Transport of environmental radionuclides in an alpine watershed. Earth and Planetary Science Letters. 84: 165–180.
Flügge, S. and Zimens, K.E. (1939). Die Bestimmung von Korngrößen und von Diffusionskonstanten aus dem Emaniervermögen. Die Theorie der Emanier-methode. ("The determination of grain
size and diffusion constant by the emanating power. The theory of the emanation method") Zeitschrift für Physikalische Chemie. B42: 179–220.
Jacobi, W. and André, K. (1963). The vertical distribution of radon 222, radon 220 and their decay products in the atmosphere. Journal of Geophysical Research. 68: 3799–3814.
Jonassen, N. and Wilkening, M.H. (1970). Airborne measurements of radon 222 daughter ions in the atmosphere. Journal of Geophysical Research. 75: 1745–1752.
Jost, W. (1960). Diffusion in Solids, Liquids, Gases. Academic Press, New York.
Kirichenko, L.V. (1970). Radon exhalation from vast areas according to vertical distribution of its short-lived decay products. Journal of Geophysical Research. 75: 3639–3649.
Larson, R.E. (1974). Radon profiles over Kilauea, the Hawaiian islands and Yukon Valley snow cover. Pure and Applied Geophysics. 112: 204–208.
Larson, R.E. and Hoppel, W.A. (1973). Radon 222 measurements below 4 km as related to atmospheric convection. Pure and Applied Geophysics. 105: 900–906.
Lewis, D.M. (1977). The use of 210Pb as a heavy metal tracer in the Susquehanna River system. Geochimica et Cosmochimica Acta. 41: 1557–1564.
Moore, H.E., Poet, S.E. and Martell, E.A. (1973). 222Rn, 210Pb, 210Bi and 210Po profiles and aerosol residence times versus altitude. Journal of Geophysical Research. 78: 7065–7075.
Nazarov, L.E., Kuzenkov, A.F., Malakhov, S.G., Volokitina, L.A., Gaziyev, Ya.I. and Vasil'yev, A.S. (1970). Radioactive aerosol distribution in the middle and upper troposphere over the USSR in 1963–1968. Journal of Geophysical Research. 75: 3575–3588.
Oldfield, F. and Appleby, P.G. (1984). Empirical testing of 210Pb dating models. In: Haworth, E.Y. and Lund, J.G. (Eds). Lake Sediments and Environmental History. Leicester University Press, Leicester, pp. 93–124.
Piliposian, G. and Appleby, P.G. (2003). A simple model of the origin and transport of 222Rn and 210Pb in the atmosphere. Continuum Mechanics and Thermodynamics. 15: 503–518.
Poet, S.E., Moore, H.E. and Martell, E.A. (1972). Lead 210, bismuth 210 and polonium 210 in the atmosphere: accurate ratio measurement and application to aerosol residence time determination. Journal of Geophysical Research. 77: 6515–6527.
Robbins, J.A. (1978). Geochemical and geophysical applications of radioactive lead. In: Nriagu, J.O. (Ed.). Biogeochemistry of Lead in the Environment. Elsevier Scientific, Amsterdam, pp. 285–393.
Robbins, J.A., Krezoski, J.R. and Mozley, S.C. (1977). Radioactivity in sediments of the Great Lakes: post-depositional redistribution by deposit feeding organisms. Earth and Planetary Science Letters. 36: 325–333.
Scott, M.R., Rotter, R.J. and Salter, P.F. (1985). Transport of fallout plutonium to the ocean by the Mississippi River. Earth and Planetary Science Letters. 75: 321–326.
Shotyk, W., Cheburkin, A.K., Appleby, P.G., Fankhauser, A. and Kramers, J.D. (1996). Two thousand years of atmospheric arsenic, antimony and lead deposition recorded in a peat bog profile, Jura Mountains, Switzerland. Earth and Planetary Science Letters. 145: E1–E7.
Turekian, K.K., Nozaki, Y. and Benninger, L.K. (1977). Geochemistry of atmospheric radon and radon products. Annual Review of Earth and Planetary Sciences. 5: 227–255.
Wilkening, M.H. (1970). Radon 222 concentration in the convective patterns of a mountain environment. Journal of Geophysical Research. 75: 1733–1740.
CHAPTER 14

Statistical Modelling and Analysis of PM2.5 Control Levels

Sun-Kyoung Park and Seoung Bum Kim
14.1 INTRODUCTION

Particulate matter (PM) is frequently the most obvious form of air pollution, because it reduces visibility and soils surfaces (Wark et al., 1997). In 1997, the National Ambient Air Quality Standard (NAAQS) for PM2.5 (particulate matter with an aerodynamic diameter of less than 2.5 μm) was promulgated in response to an increasing number of scientific studies linking elevated fine particle concentrations with adverse health effects. The long-term standard set the allowable 3-year average of the annual mean PM2.5 concentrations at less than or equal to 15 μg m⁻³; the short-term standard set the 3-year average of the 98th percentile of the 24-hour PM2.5 concentrations at 65 μg m⁻³. Because of the adverse health effects of PM2.5, its control strategies are important and need to be addressed. Metropolitan Atlanta, Georgia, is one such area in which the annual PM2.5 level is higher than the annual NAAQS, and so the control of PM2.5 is necessary. The objective of the present study is to quantify the required emission reductions for PM2.5 to meet the annual NAAQS in Atlanta. The property that PM2.5 concentrations follow a specific probability distribution is used in developing the control strategy. The distributional property of pollutant concentrations has been studied for decades. Various probability distributions of air pollutant concentrations have been reviewed (Georgopoulos and Seinfeld, 1982), and a computer program has been developed for fitting probability distributions to air pollution data using maximum likelihood estimation (Holland and Fitzsimons, 1982). The accuracy of the probability distribution fit to the extreme values of pollutant concentrations was also analysed, and the result revealed that the very extreme values have large uncertainties and are very sensitive to the choice of distribution (Chock and Sluchak, 1986).
The frequency distributions of the source contributions of PM10 in the South Coast Air Basin (SoCAB) were used to analyse the different sources that have similar
chemical profiles (Kao and Friedlander, 1995). The probability distribution of personal exposure to PM10 levels was modelled to quantify how much the ambient concentrations affect human exposure (Ott et al., 2000). Results showed that, when the ambient concentrations were controlled, the median of the personal exposure to PM10 ranged from 32.0 μg m⁻³ (Toronto) through 34.4 μg m⁻³ (Phillipsburg) to 48.8 μg m⁻³ (Riverside). The probability density function (pdf) of PM10 concentrations measured in Taiwan was used to estimate the probabilities of exceeding the air quality standard, and to determine the emission source reductions of PM10 concentrations necessary to meet the air quality standard (Lu, 2002). Microenvironment concentrations and the contributions of indoor sources were approximated by a log-normal distribution, and the time spent in a microenvironment and the penetration factor were approximated by a beta distribution (Kruize et al., 2003). Because of the usefulness of the frequency distribution of pollutant concentrations, a statistical method to predict the frequency distribution of PM10 and PM2.5 at specific wind speeds was developed with measured data collected at the Sha-Lu station in Taiwan (Lu and Fang, 2002). The temporal change over 40 years in the probability distribution of sulfur dioxide at 10 monitors in the UK was also analysed (Hadley and Toumi, 2003). Moreover, the possibilities of developing a general probability model for fitting environmental quality data have been explored (Singh et al., 2001). Errors associated with the fitted distribution, when less frequently sampled data were used, have also been investigated (Rumburg et al., 2001). In the present study the property that pollutant concentrations follow specific probability distributions is used to account for the temporal variations of pollutant levels when control strategies are developed.
In addition, the bivariate Koehler–Symanowski distribution (Koehler and Symanowski, 1995) is used to estimate the required emissions. The benefit of using a joint bivariate pdf is that the correlation between PM2.5 species is taken into account for estimating the required emission reductions. The main goal of this study is to calculate the emission reductions required to reduce PM2.5 levels in Atlanta below the NAAQS, and to quantify how specific sources impact on PM2.5 levels. Emission reductions were calculated based on the rollback method using PM2.5 mass and species concentrations measured from 1999 to 2003 in Atlanta. Monitoring data were collected from four locations: Fort McPherson (FTM), South DeKalb (SDK), Tucker (TUC) and Jefferson Street (JST) (Figure 14.1). These data are collected as part of the Assessment of Spatial Aerosol Composition in Atlanta (ASACA) project and the South-Eastern Aerosol Research and Characterisation (SEARCH) study (Butler, 2000; Butler et al., 2003; Hansen et al., 2003).
14.2 METHOD

14.2.1 Univariate probability density functions

The univariate distributions used in this study include the log-normal, Weibull and gamma distributions, which have been proven to be particularly useful in representing
Figure 14.1: ASACA PM2.5 monitoring stations (JST, FTM, TUC and SDK) in Metro Atlanta, within the State of Georgia.
air quality data (Georgopoulos and Seinfeld, 1982). The pdfs of the log-normal (14.1), Weibull (14.2) and gamma (14.3) distributions are as follows:

    f(x; μ, σ) = (1/(√(2π) σx)) exp[−(ln x − μ)² / (2σ²)];   x > 0 and σ > 0        (14.1)

    f(x; α, β) = αβx^(β−1) exp(−αx^β);   x > 0, α > 0 and β > 0                     (14.2)

    f(x; α, β) = (1/(β^α Γ(α))) x^(α−1) e^(−x/β);   x > 0, α > 0 and β > 0          (14.3)
Parameters of the distributions were estimated using the method of moments (MM) and the maximum likelihood estimation (MLE) method. The goodness-of-fit (the index estimating how well the distribution fits the raw data) is checked using two statistical tests: the chi-square (χ²) test and the Kolmogorov–Smirnov test.
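As an illustration of this fitting step, the log-normal MLE has a closed form (the mean and standard deviation of ln x), and a one-sample Kolmogorov–Smirnov statistic compares the empirical cdf with the fitted cdf. The helper functions below are our own sketch, not the authors' code, and the synthetic sample stands in for daily PM2.5 data:

```python
import math
import random

def lognormal_mle(data):
    """Closed-form MLE for the log-normal pdf of Eq 14.1."""
    logs = [math.log(x) for x in data]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / len(logs))
    return mu, sigma

def lognormal_cdf(x, mu, sigma):
    # Standard normal cdf of (ln x - mu)/sigma, via the error function
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(data, cdf):
    """One-sample Kolmogorov-Smirnov statistic D = sup |F_n(x) - F(x)|."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

random.seed(1)
sample = [random.lognormvariate(2.8, 0.5) for _ in range(1000)]  # synthetic data
mu, sigma = lognormal_mle(sample)
d = ks_statistic(sample, lambda x: lognormal_cdf(x, mu, sigma))
```

A small D relative to the critical value (about 1.36/√n at the 5% level for known parameters) indicates an acceptable fit, as in the chapter's goodness-of-fit checks.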
14.2.2 Koehler–Symanowski distribution

The bivariate probability distribution used in this study is the Koehler–Symanowski distribution (Koehler and Symanowski, 1995). The Koehler–Symanowski distribution has been applied to characterise atmospheric wind fluctuations (Manomaiphiboon and Russell, 2003). In our study, the Koehler–Symanowski distributions are applied to
PM2.5 species concentrations (e.g., sulfate and ammonium; organic carbon, OC; and elemental carbon, EC). The advantage of using the Koehler–Symanowski distribution for air quality data is that it incorporates correlations between datasets, and strictly conserves the original shape of each marginal distribution. In this respect, the Koehler–Symanowski distribution is useful to characterise PM2.5 species, which are correlated with each other. We briefly explain the Koehler–Symanowski distribution. Let X = (X1, X2, ..., Xn) be a multivariate of n components and x = (x1, x2, ..., xn) be the real-valued vector of X. Let F_X(x) be the cumulative distribution function (cdf) of X and F_Xi be the marginal cdf of the random variable Xi. Also, let f_X(x) be the pdf of X and f_Xi be the marginal pdf of Xi. The joint Koehler–Symanowski pdf is then expressed as (Koehler and Symanowski, 1995)

    f(X1, X2) = f_X1 f_X2 D1 D2 C12^(−α12) [1 + α12 α1+^(−1) α2+^(−1) D1^(−1) D2^(−1) C12^(−2) F_X1^(1/α1+) F_X2^(1/α2+)]        (14.4)

where

    C12 = F_X1^(1/α1+) + F_X2^(1/α2+) − F_X1^(1/α1+) F_X2^(1/α2+)
    D1 = α1+^(−1) [α11 + α12 F_X2^(1/α2+) C12^(−1)]
    D2 = α2+^(−1) [α22 + α12 F_X1^(1/α1+) C12^(−1)]
    α1+ = α11 + α12 and α2+ = α12 + α22

In this study, the parameters of the distribution, α_ij, are estimated by MLE using the sequential quadratic programming (SQP) method (Fletcher, 2000). The goodness-of-fit test was conducted by the bootstrap method (Efron and Tibshirani, 1994).
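Equation 14.4 can be evaluated pointwise once the marginal cdf and pdf values are known. The function below is our own sketch of that evaluation (the name and test values are illustrative); with α12 = 0 it reduces to the independent product f_X1 f_X2:

```python
def ks_bivariate_pdf(F1, f1, F2, f2, a11, a22, a12):
    """Koehler-Symanowski joint pdf (Eq 14.4) from marginal cdf/pdf values.

    F1, F2 : marginal cdf values at (x1, x2); f1, f2 : marginal pdf values.
    a11, a22 > 0 weight the marginals; a12 >= 0 couples them.
    """
    a1p = a11 + a12                 # alpha_1+
    a2p = a12 + a22                 # alpha_2+
    u = F1 ** (1.0 / a1p)           # F_X1^(1/alpha_1+)
    v = F2 ** (1.0 / a2p)           # F_X2^(1/alpha_2+)
    C12 = u + v - u * v
    D1 = (a11 + a12 * v / C12) / a1p
    D2 = (a22 + a12 * u / C12) / a2p
    return (f1 * f2 * D1 * D2 * C12 ** (-a12)
            * (1.0 + a12 * u * v / (a1p * a2p * D1 * D2 * C12 ** 2)))

# With a12 = 0 the variables are independent and the joint pdf is f1 * f2
print(ks_bivariate_pdf(0.3, 1.0, 0.7, 1.0, 1.0, 1.0, 0.0))  # 1.0
```

Because each marginal enters only through its own cdf and pdf, the marginal shapes are conserved exactly, which is the property exploited in the text.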
14.3 RESULTS AND DISCUSSION

Speciated PM2.5 concentrations in Atlanta from 1 March 1999 to 31 December 2003 are summarised in Figure 14.2. Average measured PM2.5 levels in Atlanta for that period are 19.0 μg m⁻³ (FTM), 18.2 μg m⁻³ (SDK), 18.7 μg m⁻³ (TUC) and 18.6 μg m⁻³ (JST). Sulfate and OC are the two major PM2.5 species, which account for 50% of PM2.5 mass. PM2.5 levels in Atlanta exceed the long-term standard of 15 μg m⁻³, but not the 24-hour standard of 65 μg m⁻³ (Figure 14.3). When annual mean concentrations are a concern, the required emission reductions are often calculated using the following rollback equation (de Nevers et al., 1977; Georgopoulos and Seinfeld, 1982):
Figure 14.2: Average PM2.5 species concentrations (μg m⁻³) from 1999 to 2003 at FTM, SDK, TUC and JST, partitioned into ammonium, nitrate, sulfate, OC, EC and other PM2.5.
Figure 14.3: (a) Annual average PM2.5 mass concentrations and (b) 98th percentile of daily average PM2.5 mass (μg m⁻³) from 1999 to 2003 at FTM, JST, SDK and TUC.
    R = [(E(c) − c_b) − (E(c)_s − c_b)] / (E(c) − c_b) = (E(c) − E(c)_s) / (E(c) − c_b)        (14.5)

where E(c) is the annual average pollutant concentration, and c_b is the background concentration (~0.8 μg m⁻³) (Baumann et al., 2005). E(c)_s is the air quality standard for annual PM2.5 mass (15 μg m⁻³), and R is the emission reduction required to meet the standard. The rollback method is valid when pollutant concentrations respond linearly to the overall emissions. However, secondary pollutant concentrations may not respond linearly to emission strengths. Responses of major secondary pollutant concentrations (e.g., sulfate, nitrate, and secondary OC) to emission strengths (e.g., SO2, NOx and volatile organic compound (VOC) emissions) were checked using air quality modelling for July 2001 and January 2002 in a separate study (Park and Russell, 2003). The Community Multiscale Air Quality (CMAQ) model was run three times:
using base level emissions, after 30% of emissions were reduced, and after 60% of emissions were reduced. The difference between pollutant levels with base emissions and those with 60% reduced emissions is twice the difference between pollutant levels with base emissions and those with 30% reduced emissions (Figure 14.4). This result suggests that major secondary species of PM2.5 can be assumed to respond linearly to precursor emissions.
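As a worked instance of the rollback equation, the FTM values quoted in the text (five-year mean 19.0 μg m⁻³, background ~0.8 μg m⁻³, annual standard 15 μg m⁻³) give a required reduction of about 22%:

```python
def rollback_reduction(mean_conc, standard, background):
    """Required fractional emission reduction R from the rollback equation (14.5):
    R = (E(c) - E(c)_s) / (E(c) - c_b)."""
    return (mean_conc - standard) / (mean_conc - background)

# Five-year mean PM2.5 at FTM (19.0 ug/m3), standard 15 ug/m3,
# background ~0.8 ug/m3 (values from the text)
R = rollback_reduction(19.0, 15.0, 0.8)
print(round(100 * R))  # 22 (% reduction, matching the value reported for FTM)
```

Applying the same formula to the other stations' five-year means reproduces the 18–22% reductions reported below.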
Figure 14.4: Sensitivity of secondary species of PM2.5 to emissions in (a) July 2001 and (b) January 2002: (i) sensitivity of secondary organic carbon to VOC emissions; (ii) sensitivity of nitrate to NOx emissions; (iii) sensitivity of sulfate to SO2 emissions. In each panel the concentration change for a 60% emission reduction is plotted against that for a 30% reduction; the fitted slopes range from 1.89 to 2.41 (R² = 0.72–0.999), close to the factor of two expected for a linear response.
The amounts of emission reduction required to meet the annual standard are 22% (FTM), 18% (SDK), 21% (TUC) and 20% (JST), based on Equation 14.5 and the average of the annual mean PM2.5 mass for 5 years (Figure 14.2). The amount of the reduction is calculated assuming that the future emission strength is the same as the present emission strength in the absence of control. Thus the calculation was based on the average of the annual mean concentrations. However, future emission strengths vary, and so the effectiveness of the PM2.5 control is uncertain if emission reductions are calculated based on the average of annual mean concentrations. The variation of annual mean concentrations cannot be estimated from the annual mean concentrations alone, because the number of data points is not sufficient to estimate the parameters of the distribution. Thus the abundant daily PM2.5 data are used to estimate the temporal variation of the annual mean PM2.5 level using the distributional property of PM2.5 concentrations. The underlying distributional fit to the daily PM2.5 level was selected, and the average of 365 randomly selected daily PM2.5 concentrations was used to derive an annual mean concentration. Raw daily PM2.5 data were fitted using the log-normal, Weibull and gamma distributions separately for each station and for each year, and the parameters were estimated using the MLE and MM methods. The best-fit distribution was selected based on the goodness-of-fit test (Park, 2005). Data for SDK in 1999 and 2000 and data for TUC in 2003 were not available. Thus 17 datasets were fitted to the distributions. Goodness-of-fit was checked using both Kolmogorov–Smirnov and chi-square test statistics (Park, 2005). Of the 34 test statistics, 18 favoured a log-normal distribution, three a Weibull distribution, and 13 a gamma distribution. Thus the log-normal distribution was selected for the analysis of the PM2.5 data in Atlanta.
Note that, in most cases, the test statistics for the gamma and Weibull distributions were also within the critical point of the 95% confidence interval (CI); thus the gamma and Weibull distributions also fitted the PM2.5 data well. Three hundred and sixty-five random numbers following the log-normal distribution of daily PM2.5 mass were generated 1000 times for each year, and the average of the 365 random numbers was taken as an annual mean PM2.5 mass concentration. Five thousand annual mean concentrations were calculated for FTM and JST, because 1000 annual mean concentrations are available for each year from 1999 to 2003. For SDK 3000 and for TUC 4000 annual mean concentrations were calculated, because daily PM2.5 data are available for only 3 years at SDK and 4 years at TUC. The annual mean levels simulated from the pdfs of daily PM2.5 mass are illustrated in Figure 14.5a, which shows the 95th, 50th and 5th percentiles of the annual mean PM2.5 levels. The emission reduction required to decrease the 95th percentile of the yearly PM2.5 level below the NAAQS (= 15 µg m−3) is calculated by replacing the yearly mean concentration with the 95th percentile concentration in Equation 14.5. The reductions required are 30.4% (FTM), 21.8% (SDK), 42.0% (TUC) and 32.4% (JST). Consequently, additional reductions of 3.4% to 21.3% are necessary when allowance is made for variability. Figure 14.5b shows the yearly pollutant levels after the control is applied.
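The simulation-plus-rollback procedure described above can be sketched in a few lines of code. The example below is illustrative only: the daily data are synthetic, and the simple proportional rollback form in `rollback_reduction` is an assumption standing in for the chapter's Equation 14.5, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
NAAQS = 15.0  # annual PM2.5 standard, ug/m3

def fit_lognormal(daily):
    """MLE for a log-normal: mean and std of the log-concentrations."""
    logs = np.log(daily)
    return logs.mean(), logs.std()

def simulate_annual_means(mu, sigma, n_draws=1000):
    """Average 365 random daily values, repeated n_draws times."""
    daily = rng.lognormal(mu, sigma, size=(n_draws, 365))
    return daily.mean(axis=1)

def rollback_reduction(conc, standard=NAAQS):
    """Proportional rollback (hypothetical form): fraction of emissions to cut."""
    return max(0.0, (conc - standard) / conc)

# synthetic daily data standing in for one monitoring station
daily_obs = rng.lognormal(mean=np.log(18.0), sigma=0.45, size=365 * 5)
mu, sigma = fit_lognormal(daily_obs)
annual_means = simulate_annual_means(mu, sigma)
p95 = np.percentile(annual_means, 95)
print(f"95th percentile of annual means: {p95:.1f} ug/m3")
print(f"required emission reduction: {100 * rollback_reduction(p95):.1f}%")
```

Controlling the 95th percentile of the simulated annual means, rather than their average, is what produces the larger reductions quoted in the text.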
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS

Figure 14.5: Pollutant levels (top 95th, middle 50th and bottom 5th percentiles) at FTM, SDK, TUC and JST: (a) PM2.5 concentrations (µg m−3) before control is applied; (b) PM2.5 concentrations (µg m−3) after control is applied.
Required PM reductions can be estimated by species using these values. The analysis was conducted for sulfate and OC, the two major species of PM2.5. The reduction in emissions for PM2.5 mass meeting the NAAQS at a 95% CI, R (%), is given by

R = R_OC F_OC + R_sulfate F_sulfate        (14.6)
where R_OC is the reduction in OC sources and precursors (%); F_OC is the OC fraction of the PM2.5 mass; R_sulfate is the reduction in sulfate sources (%); and F_sulfate is the sulfate fraction of the PM2.5 mass. Based on the PM2.5 species and mass concentrations (Figure 14.2) and Equation 14.6, the reductions in the emission sources and precursors for sulfate and OC were calculated (Figure 14.6). The reductions were calculated assuming that a change in the amount of emissions for sulfate or OC does not affect the other PM2.5 species concentrations. However, OC shares emission sources with EC, such as wood-burning fireplaces and furnaces, and meat-cooking combustion processes (Hawthorne et al., 1989; Hildemann et al., 1994; Mulhbaier and Williams, 1982). Changes in emissions of SO2, the major precursor of sulfate, also affect ammonium and nitrate concentrations, because these three species form ammonium nitrate, ammonium sulfate, ammonium bisulfate, and so on. Thus the amounts of reduction need to be modified in the light of this information. Here, information on the correlation between species was used to take into account the response of the other PM2.5 species concentrations to changes in sulfate or OC concentrations. This can be achieved by expressing multiple PM2.5 species with bivariate joint pdfs. The parameters of the Koehler–Symanowski distribution were estimated for the pairs OC–EC and sulfate–ammonium. SO2 emissions also affect nitrate concentrations, but the joint pdf was not calculated for nitrate, because nitrate concentrations are low in Atlanta (Figure 14.2). The Koehler–Symanowski distribution conserves marginal distributions. Thus pairs of sulfate and ammonium, and of OC and EC, concentrations from FTM, SDK, TUC and JST were fitted to univariate
Figure 14.6: Reductions needed for sulfate and OC to meet the NAAQS. (Axes: OC reduction (%) vs. sulfate reduction (%); lines shown for FTM, SDK, TUC and JST.)
distributions, and then the Koehler–Symanowski distribution was obtained. The Koehler–Symanowski distributions for sulfate and ammonium from FTM and TUC failed the goodness-of-fit test, and so only the data from SDK and JST were analysed. The univariate distributions that best fit the PM2.5 species data, and their corresponding parameters, are shown in Table 14.1. The parameters of the Koehler–Symanowski distribution estimated from the univariate distributions are given in Table 14.2, and the results of the goodness-of-fit tests performed by the bootstrap method are given in Table 14.3. The bootstrap method is a resampling technique for assessing statistical accuracy (Efron and Tibshirani, 1994). The basic idea is to draw a sample with replacement from the original dataset, where
Table 14.1: Univariate distributions that best fit PM2.5 species data.

Monitoring station   Species   Correlation   Fitted distribution   Parameters by MLE^a
SDK                  SO4^2−    0.77          Log-normal            1.25, 0.72
                     NH4^+                   Gamma                 1.92, 0.78
                     OC        0.32          Weibull               1.64, 6.29
                     EC                      Weibull               1.29, 1.26
JST                  SO4^2−    0.94          Log-normal            1.35, 0.67
                     NH4^+                   Log-normal            0.49, 0.59
                     OC        0.82          Log-normal            1.34, 0.5
                     EC                      Log-normal            0.24, 0.62

a The two parameters estimated are µ and σ for the log-normal distribution, and α and β for the Weibull and gamma distributions.
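The distribution selection behind Table 14.1 can be reproduced in outline with standard statistical tooling. The sketch below is not from the chapter: the data are synthetic, and `scipy.stats` is used to fit the three candidate distributions by MLE (with the location fixed at zero) and to compare their Kolmogorov–Smirnov statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic stand-in for daily species concentrations at one station
data = rng.lognormal(mean=0.5, sigma=0.6, size=400)

candidates = {
    "lognorm": stats.lognorm,
    "weibull_min": stats.weibull_min,
    "gamma": stats.gamma,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)            # MLE with location fixed at zero
    ks = stats.kstest(data, name, args=params) # one-sample KS test
    results[name] = (params, ks.statistic)

best = min(results, key=lambda name: results[name][1])
for name, (params, stat) in results.items():
    print(f"{name:12s} KS statistic = {stat:.4f}")
print("best fit:", best)
```

In the chapter a chi-square statistic was used alongside the KS test, and the smallest test statistic (or the only one within the critical value) identified the best-fitting family.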
Table 14.2: Estimated parameters of the Koehler–Symanowski distribution.

Monitoring station   Species          α11     α12     α22
JST                  SO4^2−, NH4^+    0.008   0.113   0.02
                     OC, EC           0.037   0.31    0.02
SDK                  SO4^2−, NH4^+    0.036   0.078   0.038
                     OC, EC           0.029   0.128   0.251
Table 14.3: Goodness-of-fit tests for cross moments xy, x²y and xy² using the bootstrap method.

                                                   SDK                          JST
                                                   SO4^2−, NH4^+   OC, EC      SO4^2−, NH4^+   OC, EC
M(xy) from the Koehler–Symanowski distribution       8.7             7.4         12.3            7.9
2.5th percentile of M(xy)^a                          8.6             7.2         12.2            8.1
97.5th percentile of M(xy)^a                        10.4             8.2         13.9            9.2
M(x²y) from the Koehler–Symanowski distribution     23.5            63.8         45.1           49.2
2.5th percentile of M(x²y)^a                        25.4            61.6         40.6           55.3
97.5th percentile of M(x²y)^a                       39.9            75.9         49.9           72.8
M(xy²) from the Koehler–Symanowski distribution     79.5            14.6        121.7           19.9
2.5th percentile of M(xy²)^a                        74.1            14.0        109.1           21.4
97.5th percentile of M(xy²)^a                      123.2            19.0        134.9           28.3

a Based on the measured data using the bootstrap method.
each sample size is the same as the original dataset. This is repeated, say, K times, yielding K bootstrap samples, and the behaviour of the K bootstrap samples is then examined. In the present study, the sampling was replicated K = 30 000 times. The Koehler–Symanowski distribution fits the PM2.5 species data well at the 95% CI, except for the EC–OC pair at JST for the xy and xy² moments, and for the SO4^2−–NH4^+ pair at SDK for the x²y moment. For these pairs the deviation of the moment from its sampling interval is small, and the other moments are within the 95% CI, so all four pairs listed in Table 14.3 are used in this study. The average EC (or ammonium) concentration is determined by the conditional Koehler–Symanowski distribution given the OC (or sulfate) concentration of interest. This method therefore allows the average EC concentration to be quantified when the emission sources and precursors of OC change. The results showed that the reductions needed for sulfate and OC to meet the standard decrease by up to 8% for SDK and 20% for JST (Figure 14.7). The relatively significant reduction at JST
compared with that at SDK is due to the higher correlation between species at JST than at SDK (Table 14.1).
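The bootstrap check summarised in Table 14.3 can be sketched as follows. This is an illustration under assumptions: the paired data are synthetic, and the "model" moment here is simply the sample moment, whereas in the chapter it comes from the fitted Koehler–Symanowski distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_moment_ci(x, y, moment, K=30_000, ci=(2.5, 97.5)):
    """Percentile CI for a cross moment, resampling pairs with replacement."""
    n = len(x)
    stats_ = np.empty(K)
    for k in range(K):
        idx = rng.integers(0, n, size=n)   # resample pairs, keeping x and y aligned
        stats_[k] = moment(x[idx], y[idx])
    return np.percentile(stats_, ci)

# synthetic correlated pair standing in for, e.g., OC and EC at one station
x = rng.lognormal(0.3, 0.5, size=300)
y = 0.5 * x + rng.lognormal(-0.5, 0.4, size=300)

m_xy = lambda a, b: np.mean(a * b)         # cross moment M(xy)
lo, hi = bootstrap_moment_ci(x, y, m_xy, K=5_000)
model_moment = m_xy(x, y)                  # a fitted joint pdf would supply this in practice
print(f"M(xy) = {model_moment:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

A model moment falling outside the [2.5th, 97.5th] percentile interval signals a poor fit for that moment, which is how the failing entries in Table 14.3 were identified.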
Figure 14.7: Reductions needed for sulfate and OC to meet the NAAQS at the 95% CI when the correlation between PM2.5 species is taken into account at (a) SDK and (b) JST. (Axes: OC reduction vs. sulfate reduction; each panel compares the cases with the correlation between species not considered and considered.)

14.4 CONCLUDING REMARKS

PM2.5 mass measurements in Atlanta, Georgia, show that PM2.5 levels exceed the annual NAAQS. The reduction required to meet the NAAQS in Atlanta was calculated using the rollback method; the linearity assumption implicit in the rollback equation was checked via sensitivity analysis using an air quality model. Based on the average of the annual mean PM2.5 level, 22% (FTM), 18% (SDK), 21% (TUC) and 20% (JST) of the emission sources and precursors of PM2.5 mass should be reduced for annual PM2.5 to meet the NAAQS. These results are valid only if the annual mean PM2.5 does not change over time. Temporal variation was therefore taken into account using the property that daily PM2.5 data follow an underlying distribution, and the log-normal distribution was found to fit the daily PM2.5 mass in Atlanta best. Three hundred and sixty-five random numbers were generated from the log-normal distribution of daily PM2.5 mass, with 1000 replications for each year, and the average of the 365 random numbers was taken as the annual mean PM2.5 mass concentration. Based on this analysis, 30% (FTM), 22% (SDK), 42% (TUC) and 32% (JST) of emissions should be reduced for annual PM2.5 levels to meet the NAAQS. The analysis was then extended to calculate the reductions in emissions of each PM2.5 species needed to meet the NAAQS. Because some species share emission sources, and because of chemical interactions, reducing the emissions of one species may affect the concentrations of other species. In this case, the concentrations of these
species are correlated with each other. The correlation between species was taken into account using the Koehler–Symanowski distribution, and the amount of the reduction in emissions was recalculated. The results showed that the reductions in the emission sources and precursors for sulfate and OC needed to meet the NAAQS decrease by up to 8% for SDK and 20% for JST. The relatively significant reduction at JST compared with that at SDK is due to the higher correlation between species at JST than at SDK. The results of these analyses can serve as a useful reference in developing a PM2.5 control strategy for the Atlanta area.
REFERENCES

Baumann, K., Chang, M.E., Russell, A.G. and Edgerton, E.S. (2005). Seasonal variability of organic mass contribution to PM2.5 within metro Atlanta and further downwind. AAAR International Specialty Conference, Particulate Matter Supersites Program and Related Studies, Atlanta, GA. Available at http://www.aaar.org/misc_content/2005_PM_abstracts.pdf
Butler, A.J. (2000). Temporal and Spatial Analysis of PM2.5 Mass and Composition in Atlanta. PhD thesis, School of Civil and Environmental Engineering, Georgia Institute of Technology.
Butler, A.J., Andrew, M.S. and Russell, A.G. (2003). Daily sampling of PM2.5 in Atlanta: results of the first year of the assessment of spatial aerosol composition in Atlanta study. Journal of Geophysical Research: Atmospheres. 108: 8415–8425.
Chock, D.P. and Sluchak, P.S. (1986). Estimating extreme values of air-quality data using different fitted distributions. Atmospheric Environment. 20: 989–993.
de Nevers, N., Neligan, R.E. and Slater, H.H. (1977). Air quality management, pollution control strategies, modeling, and evaluation. In: Stern, A.C. (Ed.), Air Pollution, Volume 5: Air Quality Management. Academic Press, New York, pp. 3–40.
Efron, B. and Tibshirani, R.J. (1994). An Introduction to the Bootstrap. Chapman & Hall/CRC, Boca Raton, FL.
Fletcher, R. (2000). Practical Methods of Optimization. Wiley, Chichester.
Georgopoulos, P.G. and Seinfeld, J.H. (1982). Statistical distributions of air pollutant concentrations. Environmental Science & Technology. 16: A401–A416.
Hadley, A. and Toumi, R. (2003). Assessing changes to the probability distribution of sulphur dioxide in the UK using a lognormal model. Atmospheric Environment. 37: 1461–1474.
Hansen, D.A., Edgerton, E.S., Hartsell, B.E., Jansen, J.J., Kandasamy, N., Hidy, G.M. and Blanchard, C.L. (2003). The southeastern aerosol research and characterization study. Part 1: Overview. Journal of the Air & Waste Management Association. 53: 1460–1471.
Hawthorne, S.B., Krieger, M.S., Miller, D.J. and Mathiason, M.B. (1989). Collection and quantitation of methoxylated phenol tracers for atmospheric pollution from residential wood stoves. Environmental Science & Technology. 23: 470–475.
Hildemann, L.M., Klinedinst, D.B., Klouda, G.A., Currie, L.A. and Cass, G.R. (1994). Sources of urban contemporary carbon aerosol. Environmental Science & Technology. 28: 1565–1576.
Holland, D.M. and Fitzsimons, T. (1982). Fitting statistical distributions to air-quality data by the maximum-likelihood method. Atmospheric Environment. 16: 1071–1076.
Kao, A.S. and Friedlander, S.K. (1995). Frequency distribution of PM10 chemical components and their sources. Environmental Science & Technology. 29: 19–28.
Koehler, K.J. and Symanowski, J.T. (1995). Constructing multivariate distributions with specific marginal distributions. Journal of Multivariate Analysis. 55: 261–282.
Kruize, H., Hanninen, O., Breugelmans, O., Lebret, E. and Jantunen, M. (2003). Description and demonstration of the EXPOLIS simulation model: two examples of modeling population exposure to particulate matter. Journal of Exposure Analysis and Environmental Epidemiology. 13: 87–99.
Lu, H.C. (2002). The statistical characters of PM10 concentration in Taiwan area. Atmospheric Environment. 36: 491–502.
Lu, H.C. and Fang, G.C. (2002). Estimating the frequency distributions of PM10 and PM2.5 by the statistics of wind speed at Sha-Lu, Taiwan. Science of the Total Environment. 298: 119–130.
Manomaiphiboon, K. and Russell, A.G. (2003). Formulation of joint probability density function of velocity for turbulent flows: an alternative approach. Atmospheric Environment. 37: 4917–4925.
Mulhbaier, J.L. and Williams, R.L. (1982). Fireplaces, furnaces, and vehicles as emission sources of particulate carbons. In: Wolff, G.T. and Klimsch, R.L. (Eds), Particulate Carbon, Atmospheric Life Cycle. Plenum, New York, pp. 185–205.
Ott, W., Wallace, L. and Mage, D. (2000). Predicting particulate (PM10) personal exposure distributions using a random component superposition statistical model. Journal of the Air & Waste Management Association. 50: 1390–1406.
Park, S.-K. (2005). Particulate Modeling and Control Strategy for Atlanta, Georgia. PhD thesis, Georgia Institute of Technology.
Park, S.-K. and Russell, A.G. (2003). Sensitivity of PM2.5 to emissions in the Southeast. Proceedings of the Models-3 User's Workshop, Research Triangle Park, North Carolina. Available at http://www.cmascenter.org/conference/2003/session3/park_abstract.pdf
Rumburg, B., Alldredge, R. and Claiborn, C. (2001). Statistical distributions of particulate matter and the error associated with sampling frequency. Atmospheric Environment. 35: 2907–2920.
Singh, K.P., Warsono, Bartolucci, A.A. and Bae, B. (2001). Mathematical modeling of environmental data. Mathematical and Computer Modelling. 33: 793–800.
Wark, K., Warner, C.F. and Davis, W.T. (1997). Air Pollution: Its Origin and Control, 3rd edn. Addison-Wesley, Menlo Park, CA.
CHAPTER 15

Integration of Models and Observations: A Modern Paradigm for Air Quality Simulations

Adrian Sandu, Tianfeng Chai and Gregory R. Carmichael
15.1 INTRODUCTION

The chemical composition of the atmosphere has been (and is being) significantly perturbed by emissions of trace gases and aerosols associated with a variety of anthropogenic activities. This changing composition has important implications for urban, regional and global air quality, and for climate change. In the USA alone, 474 counties, with nearly 160 million inhabitants, are currently in some degree of non-attainment with respect to the 8-hour National Ambient Air Quality Standard (NAAQS) for ground-level ozone (80 ppbv). Because air quality problems relate to immediate human welfare, their study has traditionally been driven by the need for information to guide policy. Over the last decade our ability to predict air quality has improved, thanks to significant advancements in our ability both to measure and to accurately model atmospheric chemistry, transport and removal processes. Modern sensing technologies allow us to measure at surface sites and on mobile platforms (such as vans, ships and aircraft), with fast response times and wide dynamic range, many of the important primary and secondary atmospheric trace gases and aerosols (e.g., carbon monoxide, ozone, sulfur dioxide, black carbon), and many of the critical photochemical oxidising agents (such as the OH and HO2 radicals). Not only is our ability to characterise a fixed atmospheric point in space and time expanding, but the spatial coverage is also expanding through growing capabilities to measure atmospheric constituents remotely, using sensors mounted at the surface and on satellites.

Modelling of Pollutants in Complex Environmental Systems, Volume II, edited by Grady Hanrahan. © 2010 ILM Publications, a trading division of International Labmate Limited.
Chemical transport models (CTMs) have become an essential tool for providing science-based input into best alternatives for reducing urban pollution levels, for designing cost-effective emission control strategies, for the interpretation of observational data, and for assessments of how we have altered the chemistry of the global environment. Computational power and efficiencies have advanced to the point where CTMs can simulate pollution distributions in an urban airshed with a spatial resolution of less than a kilometre, and can cover the entire globe with a horizontal grid spacing of 50–100 km or less. The use of CTMs to produce air quality forecasts has become a new application area, providing important information to the public, decision-makers and researchers. Currently, hundreds of cities worldwide provide real-time air quality forecasts. In addition, national weather services throughout the world are broadening their traditional role of mesoscale weather prediction to also include prediction of other environmental phenomena (e.g., plumes from biomass burning, volcanic eruptions, dust storms, and urban air pollution) that could potentially affect the health and welfare of their inhabitants. For example, the United States National Oceanic and Atmospheric Administration (NOAA), in partnership with the United States Environmental Protection Agency (US EPA), developed an air quality forecasting (AQF) system that became fully operational in September 2004, providing forecasts of ozone (O3) over the north-eastern USA (Eder et al., 2006). The AQF system has since been upgraded and expanded to include the continental US domain, providing National Air Quality Forecast Guidance. Air quality forecasts built upon CTM predictions (in contrast to other techniques such as statistical methods) contain components related to emissions, transport, transformation and removal processes.
Since the four-dimensional distribution of pollutants in the atmosphere is heavily influenced by the prevailing meteorological conditions, air quality models are closely aligned with weather prediction. Air quality forecast models are driven by meteorological models (global and/or mesoscale), and this coupling is performed in offline or online modes, referring to whether the air quality constituents are calculated within the meteorological model itself (online, e.g., WRF/Chem; Grell et al., 2005) or in a separate model that accepts the meteorological fields as inputs (offline). The close integration of models and observations through data assimilation is expected to be as important in air quality applications as it has proved to be in weather applications (Carmichael et al., 2008; Dabberdt et al., 2004). Data assimilation is the process by which model predictions utilise measurements to produce an optimal representation of the state of the atmosphere. In this chapter, we discuss state-of-the-art methodologies developed to assimilate chemical observations, including adjoint modelling for sensitivity analysis and four-dimensional variational data assimilation with CTMs.
15.2 INTEGRATION OF DATA AND MODELS: A MODERN SIMULATION PARADIGM

The close integration of observational data is recognised as essential in weather and climate analysis, and it is accomplished by a mature experience and infrastructure in data assimilation – the process by which models use measurements to produce an optimal representation of the state of the atmosphere. This is equally desirable in CTMs, which must be better constrained through the use of observational data. Data assimilation combines information from three different sources: the physical and chemical laws of evolution (encapsulated in the model); the reality (as captured by the observations); and the current best estimate of the distribution of tracers in the atmosphere (all with associated errors). As more chemical observations in the troposphere become available, chemical data assimilation is expected to play an essential role in air quality forecasting, similar to the role it has in numerical weather prediction. Modern assimilation techniques fall within the general categories of variational (3D-Var, 4D-Var) and Kalman filter-based methods, which have been developed in the framework of optimal estimation theory. Ensemble filters are based theoretically on the Kalman filter (Kalman, 1960), and use a Monte Carlo approach to sample the probability density space; these techniques are commonly referred to as the ensemble Kalman filter (EnKF) (Evensen, 2006). Practical EnKF techniques make use of a small number of samples (Evensen, 2006) and therefore are suboptimal for systems with large state spaces (typical chemical transport models have tens of millions of variables). Nevertheless, EnKF techniques have proved useful in applications with realistic atmospheric models and observations. In this chapter, we focus on variational data assimilation techniques, whose underlying mathematical formulation is based on control theory and variational calculus.
In the variational approach, the data assimilation problem is posed as an optimisation problem, in which one minimises a cost functional that measures the distance of the analysis from both the measurements and the "background" estimate of the true state. In the 3D-Var approach (Le Dimet and Talagrand, 1986; Lorenc, 1986; Talagrand and Courtier, 1987), the observations are processed sequentially in time. The 4D-Var approach (Courtier et al., 1994; Elbern and Schmidt, 2001; Fisher and Lary, 1995; Rabier et al., 2000) generalises this method by considering observations that are distributed in time. Variational methods have been successfully applied in meteorology and oceanography (Navon, 1998), but are only just beginning to be used in non-linear atmospheric chemical models (Elbern and Schmidt, 2001; Menut et al., 2000; Sandu et al., 2005). Compared with data assimilation in numerical weather prediction, chemical data assimilation poses new challenges stemming from the additional processes associated with emissions, chemical transformations and removal. Because many important pollutants (e.g., ozone and fine particulate sulfate) are secondary in nature (i.e., formed via chemical reactions in the atmosphere), air quality models must include a rich description of the photochemical oxidant cycle. As a result, air quality models typically include hundreds of chemical variables (gas-phase constituents and aerosol species distributed by composition and size). The resulting system of equations is stiff and highly coupled, which adds greatly to the computational burden of air quality forecasting. It is also important to note that the chemical and removal processes are highly coupled to meteorological variables (e.g., temperature and water vapour), as are many of the emission terms (directly in the case of wind-blown soils, whose emission rates correlate with surface winds, and evaporative emissions, which correlate with temperature; and indirectly in the case of emissions associated with heating and cooling demand, which respond to ambient temperatures). Other important computational challenges in chemical data assimilation include increased memory requirements (of the order of 100 concentrations of various species at each grid point, with checkpointing required) and the stiffness of the differential equations (more than 200 chemical reactions coupled together, with the lifetimes of different species varying from seconds to months). The characterisation of uncertainties is difficult owing to the limited number of chemical observations (compared with meteorological data) and to the poorly quantified uncertainties in emission inventories (which are often outdated).
15.3 MATHEMATICAL FRAMEWORK FOR AIR QUALITY MODELLING

An atmospheric CTM solves the mass balance equations for the concentrations y_i of tracer species 1 ≤ i ≤ n:

    ∂y_i/∂t = −u·∇y_i + (1/ρ) ∇·(ρK ∇y_i) + (1/ρ) f_i(ρy) + E_i,    t⁰ ≤ t ≤ t^F        (15.1)

with initial and boundary conditions

    y_i(t⁰, x) = y_i⁰(x)
    y_i(t, x) = y_i^IN(t, x)          on Γ^IN
    K ∂y_i/∂n = 0                     on Γ^OUT
    K ∂y_i/∂n = V_i^DEP y_i − Q_i     on Γ^GROUND

Then the adjoint variables are the gradients of the cost function with respect to changes in the state at earlier times:
    λ^k = [∂ψ(y^N)/∂y^k]^T = ∇_{y^k} ψ(y^N)        (15.12)
Note that adjoint sensitivity analysis is a receptor-orientated approach. The same adjoint variables are used to obtain the sensitivities of a receptor cost function with respect to all parameters; a single backward integration of the adjoint model is sufficient. The mathematical foundations of adjoint sensitivity for general non-linear systems and various classes of response functionals are presented in Marchuk (1995) and Cacuci (2003). In the 4D-Var data assimilation context, the gradient of the cost functional (Equation 15.6) is computed efficiently by a single backward integration with the adjoint model as follows:
    λ^N = H_N^T R_N^{−1} (H_N y^N − y_obs^N)
    for k = N−1, …, 0 do
        λ^k = (∂y^{k+1}/∂y^k)^T λ^{k+1} + H_k^T R_k^{−1} (H_k y^k − y_obs^k)        (15.13)
    end
    ∇_{y⁰} ψ(p, y⁰) = B^{−1} (y⁰ − y^B) + λ⁰
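The backward recursion of Equation 15.13 can be exercised on a toy problem. The sketch below is not a CTM: it assumes a small linear model y^{k+1} = M y^k, so that (∂y^{k+1}/∂y^k)^T = M^T, and verifies the adjoint gradient against central finite differences of the cost function.

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 4, 6                                          # state size, number of steps
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))    # linear "model" y^{k+1} = M y^k
H = np.eye(n)                                        # observe every component
Rinv = np.eye(n)                                     # inverse observation-error covariance
Binv = np.eye(n)                                     # inverse background-error covariance
yB = np.zeros(n)                                     # background state
yobs = [rng.standard_normal(n) for _ in range(N + 1)]

def forward(y0):
    ys = [y0]
    for _ in range(N):
        ys.append(M @ ys[-1])
    return ys

def cost(y0):
    """Quadratic 4D-Var cost: background term plus observation misfits."""
    ys = forward(y0)
    J = 0.5 * (y0 - yB) @ Binv @ (y0 - yB)
    for k in range(N + 1):
        d = H @ ys[k] - yobs[k]
        J += 0.5 * d @ Rinv @ d
    return J

def adjoint_gradient(y0):
    """Gradient of the cost via the backward adjoint sweep (Equation 15.13)."""
    ys = forward(y0)
    lam = H.T @ Rinv @ (H @ ys[N] - yobs[N])         # lambda^N
    for k in range(N - 1, -1, -1):                   # backward in time
        lam = M.T @ lam + H.T @ Rinv @ (H @ ys[k] - yobs[k])
    return Binv @ (y0 - yB) + lam

y0 = rng.standard_normal(n)
g = adjoint_gradient(y0)
eps = 1e-6
g_fd = np.array([(cost(y0 + eps * e) - cost(y0 - eps * e)) / (2 * eps)
                 for e in np.eye(n)])
print("max |adjoint - finite difference| =", np.abs(g - g_fd).max())
```

Note that a single backward sweep yields the full gradient, whereas the finite-difference check costs 2n cost-function evaluations; this is exactly the efficiency argument made for the adjoint model in the text.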
15.4.3 Continuous adjoint models vs. discrete adjoint models
As explained above, the adjoint model (Equation 15.13) was obtained from the mass balance equations (Equation 15.1) by first discretising them (Equation 15.2), and then taking the adjoint of the discretisation (applying a linearisation and the transposed chain rule). For this reason, the model (Equation 15.13) is called the discrete adjoint model, and it follows the "adjoint of the numerical solution" approach. The adjoint variables λ^k (Equation 15.12) represent the sensitivities of the discrete cost function (Equation 15.5) with respect to the numerical solution y^k. A different approach (Sandu et al., 2005) is to study directly the sensitivities of the cost functional (Equation 15.6) defined on the solution of the continuous mass balance equations (Equation 15.1). It can be shown that the sensitivities of the cost functional with respect to the continuous solution (at time t and location x),

    λ_i(t, x) = ∂ψ/∂y_i(t, x)        (15.14)

are the solutions of the following continuous adjoint PDE:

    −∂λ_i/∂t = ∇·(u λ_i) + ∇·[ρK ∇(λ_i/ρ)] + [J^T(ρy) λ]_i + φ_i
    λ_i(t^F, x) = λ_i^F(x),    t^F ≥ t ≥ t⁰
    λ_i(t, x) = 0                         on Γ^IN        (15.15)
    u λ_i + ρK ∂(λ_i/ρ)/∂n = 0            on Γ^OUT
    ρK ∂(λ_i/ρ)/∂n = V_i^DEP λ_i          on Γ^GROUND
The continuous adjoint PDE is solved numerically backwards in time from t^F down to t⁰. Here J(y) = ∂f(y)/∂y is the Jacobian of the chemical reaction rates. The forcing functions φ_i depend on the particular form of the cost functional ψ. Note that the formulation of the continuous adjoint PDE is based on the concentrations – that is,
on the solution of the forward model PDE (Equation 15.1). Therefore the model (Equation 15.1) needs to be solved first, and its solution stored for all concentrations, times and locations. The "continuous adjoint" sensitivities (Equation 15.14) are obtained via a "numerical solution of the adjoint PDE" (Equation 15.15). The discrete and continuous approaches to adjoint construction typically lead to different results (Liu and Sandu, 2008). In practice, the adjoint model for a complex CTM is constructed using a hybrid approach. Specifically, the forward CTM is solved by an operator-split approach, where different physical processes are solved in succession. The corresponding adjoint model is also operator split, with the adjoints of the individual processes solved in succession (and in reverse order from the forward model). Different adjoint approaches are used for different processes. It is not uncommon to use a continuous adjoint approach for the linear advection–diffusion equation. For highly non-linear processes, such as chemical kinetics, a discrete adjoint approach is employed (Sandu et al., 2003).
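The hybrid, operator-split construction can be illustrated with a two-process toy step: a linear "advection" matrix followed by a simple non-linear "chemistry" update. Both operators below are invented for illustration; the point is only that the adjoint step applies the transposed process Jacobians in the reverse order of the forward sweep, which a finite-difference check confirms.

```python
import numpy as np

rng = np.random.default_rng(5)
n, dt, kr = 5, 0.1, 0.3
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # linear "advection" operator

def chem(c):
    """Toy second-order decay reaction, one explicit step."""
    return c - dt * kr * c**2

def chem_adjoint(c_in, lam):
    """Discrete adjoint of chem: multiply by the (diagonal) step Jacobian transposed."""
    return (1.0 - 2.0 * dt * kr * c_in) * lam

def forward_step(y):
    ya = A @ y                   # advection first...
    return ya, chem(ya)          # ...then chemistry (operator splitting)

def adjoint_step(ya, lam):
    lam = chem_adjoint(ya, lam)  # adjoint of chemistry first (reverse order)
    return A.T @ lam             # then adjoint of advection

y0 = np.abs(rng.standard_normal(n)) + 1.0
ya, y1 = forward_step(y0)
g = adjoint_step(ya, np.ones(n))     # gradient of J(y0) = sum(y1)

eps = 1e-6
g_fd = np.array([(forward_step(y0 + eps * e)[1].sum()
                  - forward_step(y0 - eps * e)[1].sum()) / (2 * eps)
                 for e in np.eye(n)])
print("max |adjoint - finite difference| =", np.abs(g - g_fd).max())
```

Note that the adjoint of the non-linear chemistry step needs the forward state `ya` at which the Jacobian is evaluated, which is why the forward solution must be stored (or checkpointed) before the backward sweep.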
15.4.4 Second-order adjoint models
Adjoint sensitivity analysis can be extended to obtain second-order derivatives of the cost function. For example, when the parameters are the initial conditions, complete second-order derivative information is provided by the Hessian of the cost function with respect to the initial state, ∇²_{y⁰,y⁰}ψ (Le Dimet et al., 2002; Ozyurt and Barton, 2005; Wang et al., 1992). This Hessian is an enormous n × n matrix, where n is the number of model states, and its full evaluation is impractical in real problems. However, the solution of the second-order adjoint model backward in time provides Hessian–vector products σ = (∇²_{y⁰,y⁰}ψ) u for any vector u. Specifically, the forward model (Equation 15.2) and the tangent linear model (Equation 15.9) are solved together forward in time, with the tangent linear model initialised with the user-supplied vector S⁰ = u. Both the forward solution y^k and the tangent linear solution S^k are saved at all steps. Next, the first-order adjoint model (Equation 15.13) is solved backward in time together with the second-order adjoint model:

    σ^N = H_N^T R_N^{−1} H_N S^N
    for k = N−1, …, 0 do
        σ^k = (∂y^{k+1}/∂y^k)^T σ^{k+1} + [∂²y^{k+1}/(∂y^k)²](λ^{k+1}, S^k) + H_k^T R_k^{−1} H_k S^k        (15.16)
    end
    ∇²_{y⁰,y⁰}ψ · u = B^{−1} S⁰ + σ⁰

to obtain both the gradient and the Hessian-times-vector product.
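The second-order adjoint recursion of Equation 15.16 can be checked in the same toy setting. For a linear model the second-derivative term vanishes, so the sketch below (an illustration under that assumption, not the full recursion) reduces to the Gauss–Newton part and compares the result against an explicitly assembled Hessian of the quadratic cost.

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 4, 6
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # linear model: y^{k+1} = M y^k
H = np.eye(n)      # observation operator
Rinv = np.eye(n)   # inverse observation-error covariance
Binv = np.eye(n)   # inverse background-error covariance

def hessian_vector(u):
    """Hessian-vector product via the second-order adjoint recursion.

    For a linear model the second-derivative term in Equation 15.16
    vanishes, so only the Gauss-Newton part of the recursion remains."""
    S = [u]                          # tangent linear solution, S^0 = u
    for _ in range(N):
        S.append(M @ S[-1])
    sigma = H.T @ Rinv @ H @ S[N]    # sigma^N
    for k in range(N - 1, -1, -1):   # backward sweep
        sigma = M.T @ sigma + H.T @ Rinv @ H @ S[k]
    return Binv @ u + sigma

# direct Hessian of the quadratic 4D-Var cost, assembled for comparison
Hess = Binv.copy()
Mk = np.eye(n)
for _ in range(N + 1):
    Hess += Mk.T @ H.T @ Rinv @ H @ Mk
    Mk = M @ Mk

u = rng.standard_normal(n)
print("max discrepancy:", np.abs(hessian_vector(u) - Hess @ u).max())
```

Each Hessian-vector product costs one forward tangent linear sweep plus one backward adjoint sweep, which is what makes Newton-type and conjugate-gradient minimisation feasible without forming the n × n Hessian.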
15.5 CHEMICAL SENSITIVITY ANALYSIS AND DATA ASSIMILATION WITH THE CMAQ MODEL

The Community Multiscale Air Quality (CMAQ) model is the US EPA's premier air quality modelling system. The primary goals for the CMAQ modelling system are:

• to improve the environmental management community's ability to evaluate the impact of air quality management practices for multiple pollutants at multiple scales; and
• to improve the scientist's ability to probe, understand, and simulate chemical and physical interactions in the atmosphere.
The construction of a CMAQ adjoint and 4D-Var system is discussed in Hakami et al. (2007). The adjoint of the chemical subsystem is implemented using the Kinetic PreProcessor, KPP (Sandu and Sander, 2006).
15.5.1 Adjoint sensitivity
A sensitivity test using CMAQ version 4.5 and its adjoint is performed over the eastern US domain shown in Figure 15.1. The region is a subset of the continental US domain currently used in the National Air Quality Forecast Guidance run at the United States National Centers for Environmental Prediction (Tang et al., 2009). The grid has 22 sigma–pressure hybrid vertical layers spanning from the surface to 100 hPa, and a
Figure 15.1: Computational domain, grid, and AIRNOW stations (latitude 36–42°N, longitude 75–85°W).
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
horizontal grid of 101 × 61 cells with a resolution of 12 km. The CMAQ model employs the carbon bond IV chemical mechanism (Gery et al., 1989). The 2001 National Emission Inventory with recent updates is used. The initial concentrations at 1200Z on 5 August 2007 and the lateral boundary concentrations during the simulation came from the operational CMAQ predictions. A receptor for the sensitivity run is defined as the sum of the ozone concentrations in the selected region shown in Figure 15.2a (see colour insert): that is, 9 × 9 grid cells covering the Washington DC area, extending from layer 1 to layer 4, at 2000Z on 6 August 2007. Figures 15.2b–15.2d (see colour insert) show the influence regions of surface ozone at earlier times, that is, 8, 16 and 32 hours before the target time. Similarly, Figures 15.3a–15.3d (see colour insert) show the influence regions of surface NO2 at earlier times. The regions of influence resemble each other, reflecting the same air mass affecting the DC region at the target time. However, the magnitude of the sensitivity to NO2 is much larger than the sensitivity to ozone, except for the final time step. The sensitivities of the receptor with respect to the emissions of various species are also calculated using the CMAQ adjoint. Figure 15.4 (see colour insert) shows the sensitivities to the grid-based emissions of NO, HCHO, olefin, and isoprene. Apart from the local emissions close to the Washington DC area, the volatile organic compound (VOC) emissions close to the Indiana–Kentucky border region have a great impact on the ozone concentrations in the target area for the final time step. This is consistent with Figures 15.2 and 15.3, which show that the air mass coming from the Indiana–Kentucky border on 5 August 2007 would affect the DC area ozone on the next day.
Figure 15.4a indicates that an increase of NO emissions in the north-east corner of the domain, i.e. eastern Pennsylvania and New Jersey, would reduce the DC area ozone concentrations at the target time. Figure 15.4 also shows that the DC ozone concentrations in the current case are more sensitive to isoprene emissions than to the other VOC emissions.
15.5.2 4D-Var data assimilation
A 16-hour 4D-Var data assimilation case is further tested using the CMAQ adjoint model with the same computational domain and grid as shown in Figure 15.1. In the current case, the error covariance matrices B and R are assumed to be diagonal, with the background and observational root mean square errors estimated as 14.3 ppbv and 3.3 ppbv respectively, using the observational method detailed in Chai et al. (2007). The initial ozone concentrations at 1200Z on 5 August 2007 are chosen as the only control parameters to be adjusted. The background ozone concentrations are the ozone predictions from the CMAQ operational forecasts, that is, the values before the chemical data assimilation. In the current case, AIRNOW hourly observations are represented by the modelled ozone values at the start of each hour in the surface grid cells covering the stations (shown in Figure 15.1). The quasi-Newton limited-memory L-BFGS algorithm (Byrd et al., 1995; Zhu et al., 1997) is used to minimise the cost functional, with the maximum number of iterations set to 25.
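A minimal, self-contained sketch of such a 4D-Var setup — diagonal B and R, initial concentrations as the only control variables, and L-BFGS capped at 25 iterations — might look as follows. The toy model, dimensions and noise realisations are illustrative assumptions, not the CMAQ configuration:

```python
# 4D-Var sketch with diagonal B and R and an L-BFGS minimiser capped at
# 25 iterations, mirroring the setup in the text. The "model" here is a
# toy linear decay map; all names and sizes are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, n_steps = 20, 16                         # states, hourly assimilation window
M = 0.95 * np.eye(n)                        # toy model propagator (one hour)
sig_b, sig_o = 14.3, 3.3                    # background / observation rmse, ppbv
y_true = 60.0 + 10.0 * rng.standard_normal(n)
y_b = y_true + sig_b * rng.standard_normal(n)   # background (prior forecast)

# Synthetic hourly observations of every state (observation operator H = I).
obs, y = [], y_true.copy()
for _ in range(n_steps):
    y = M @ y
    obs.append(y + sig_o * rng.standard_normal(n))

def cost_and_grad(y0):
    # Forward run, accumulating innovations at each observation time.
    innov, y = [], y0.copy()
    for k in range(n_steps):
        y = M @ y
        innov.append(y - obs[k])
    J = 0.5 * np.sum((y0 - y_b) ** 2) / sig_b**2
    J += 0.5 * sum(np.sum(d ** 2) for d in innov) / sig_o**2
    # Adjoint run, backward in time, yields the gradient at O(one model run).
    lam = np.zeros(n)
    for k in reversed(range(n_steps)):
        lam = M.T @ (lam + innov[k] / sig_o**2)
    grad = (y0 - y_b) / sig_b**2 + lam
    return J, grad

res = minimize(cost_and_grad, y_b, jac=True, method="L-BFGS-B",
               options={"maxiter": 25})
analysis = res.x
# The analysis should fit the truth better than the background does.
assert np.mean((analysis - y_true) ** 2) < np.mean((y_b - y_true) ** 2)
```

Because the observation noise (3.3 ppbv) is far smaller than the background error (14.3 ppbv), the minimiser pulls the initial state strongly towards the observations, as in the text.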
Figure 15.5 (see colour insert) shows the predicted surface ozone at 1800Z on 5 August 2007, by the base case and the assimilation test, along with observations by AIRNOW. The base case overpredicted ozone at most locations, and the assimilation test matches observations better at many locations. Improved ozone predictions evaluated against the assimilated observations are to be expected. The effect of the 4D-Var after the 16-hour data assimilation time window is tested by letting CMAQ run for 48 hours with the adjusted initial conditions. Figure 15.6 (see colour insert) shows the comparison between the 4D-Var test and the base case at 1800Z on 6 August 2007, that is, 14 hours after the AIRNOW observations are assimilated. While the differences are not as pronounced as those shown in Figure 15.5, the improvement can still be seen, mainly in the eastern half of the domain. With westerly winds dominant in the region, the ozone predictions are less affected by the initial ozone values 30 hours earlier than by the west boundary conditions, which are identical for the two cases. Figure 15.7 shows the ozone prediction biases calculated against the hourly AIRNOW observations before and after data assimilation. With data assimilation, the bias is reduced significantly in the first 10 hours. The positive effect of the assimilation is apparent up to 32 hours into the simulation, that is, 16 hours after the assimilation time window. However, the night-time ozone prediction from hour 34 (2200Z, 1800 EDT) until hour 40 (0400Z, 0000 EDT) is worse with the 4D-Var-adjusted initial ozone than in the base case. The assimilation case appears to perform better again immediately after local midnight. Model predictions before and after data assimilation at a specific site, Shenandoah National Park (78.45°W, 38.55°N), are shown in Figure 15.8, along with the AIRNOW hourly ozone observations.
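The bias diagnostic of Figure 15.7 is simply the mean model-minus-observation difference for each forecast hour, taken over all stations. A hedged sketch with synthetic arrays (the station and hour counts, and the injected 8 ppbv overprediction, are made up for illustration):

```python
# Hourly prediction-bias diagnostic, sketched with synthetic data.
# Shapes and values are illustrative, not the AIRNOW/CMAQ data.
import numpy as np

rng = np.random.default_rng(2)
n_hours, n_stations = 48, 120
obs = 40 + 15 * rng.standard_normal((n_hours, n_stations))          # ozone, ppbv
model = obs + 8.0 + 5 * rng.standard_normal((n_hours, n_stations))  # biased run

bias = np.nanmean(model - obs, axis=1)   # one bias value per forecast hour, ppbv
assert bias.shape == (n_hours,)
assert bias.mean() > 0                   # an overpredicting run has positive bias
```

In practice the same calculation is run once for the base case and once for the 4D-Var analysis, and the two curves are compared hour by hour.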
While essentially exemplifying what is shown in Figure 15.7 in the first 30 hours, the base case and the 4D-Var test generate almost identical results at later times.

Figure 15.7: Ozone prediction biases (ppbv) before and after data assimilation, plotted against hours starting from 1200Z, 5 August 2007. Hourly AIRNOW observations were assimilated during the first 16-hour assimilation time window.
Figure 15.8: AIRNOW hourly ozone observations and CMAQ ozone predictions (ppbv) before and after data assimilation at Shenandoah National Park (78.45°W, 38.55°N), plotted against hours starting from 1200Z, 5 August 2007. AIRNOW observations are assimilated during the first 16-hour assimilation time window.

15.6 CONCLUDING REMARKS
As more observations are becoming available and new measurement networks are being planned, it is of critical importance to develop the capabilities to make best use of the data, to manage the sensing resources better, and to design more effective field experiments and networks to support atmospheric chemistry and air quality studies. A closer integration of measurements with models is critical for improved predictions of air quality, for atmospheric chemistry analyses, for the interpretation of data obtained during intensive field campaigns, and for the design of more effective emission control strategies. This is accomplished through data assimilation, the process by which observations (reality within errors) are used along with the model (imperfect representation of the processes and their connections) to produce a better estimate of the state of the atmosphere. In this chapter, we present state-of-the-art computational tools for fusing data and models to improve air quality predictions. Specifically, we discuss the formulation of direct and adjoint sensitivity analysis and their application to 4D-Var chemical data assimilation. Sensitivity analysis and data assimilation results are shown for experiments with the US EPA’s CMAQ model and with real datasets.
ACKNOWLEDGEMENTS The work presented in this chapter was supported in part by NSF, NASA, NOAA and HARC.
ENDNOTES
1. http://www.weather.gov/aq/
2. http://www.epa.gov/AMD/CMAQ
REFERENCES

Byrd, R., Lu, P. and Nocedal, J. (1995). A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing. 16(5): 1190–1208.
Cacuci, D.G. (2003). Sensitivity and Uncertainty Analysis. Volume I: Theory. Chapman & Hall/CRC Press, Boca Raton, FL.
Carmichael, G.R., Sandu, A., Chai, T., Daescu, D., Constantinescu, E.M. and Tang, Y. (2008). Predicting air quality: improvements through advanced methods to integrate models and measurements. Journal of Computational Physics. 227(7): 3540–3571.
Chai, T., Carmichael, G.R., Sandu, A., Tang, Y. and Daescu, D.N. (2006). Chemical data assimilation of transport and chemical evolution over the Pacific (TRACE-P) aircraft measurements. Journal of Geophysical Research. 111: D02301, doi:10.1029/2005JD005883.
Chai, T., Carmichael, G.R., Tang, Y., Sandu, A., Hardesty, M., Pilewskie, P., Whitlow, S., Browell, E.V., Avery, M.A., Thouret, V., Nedelec, P., Merrill, J.T. and Thomson, A.M. (2007). Four-dimensional data assimilation experiments with ICARTT (International Consortium for Atmospheric Research on Transport and Transformation) ozone measurements. Journal of Geophysical Research. 112: D12S15.
Cohn, S.E. (1997). An introduction to estimation theory. Journal of the Meteorological Society of Japan. 75(1B): 257–288.
Courtier, P., Thepaut, J.N. and Hollingsworth, A. (1994). A strategy for operational implementation of 4D-Var, using an incremental approach. Quarterly Journal of the Royal Meteorological Society. 120(519): 1367–1387.
Dabberdt, W.F., Carroll, M.A., Baumgardner, D., Carmichael, G., Cohen, R., Dye, T., Ellis, J., Grell, G., Grimmond, S., Hanna, S., Irwin, J., Lamb, B., Madronich, S., McQueen, J., Meagher, J., Odman, T., Pleim, J., Schmid, H.P. and Westphal, D.L. (2004). Meteorological research needs for improved air quality forecasting: Report of the 11th Prospectus Development Team of the US Weather Research Program. Bulletin of the American Meteorological Society. 85(4): 563–586.
Eder, B., Kang, D.W., Mathur, R., Yu, S.C. and Schere, K. (2006). An operational evaluation of the Eta-CMAQ air quality forecast model. Atmospheric Environment. 40(26): 4894–4905.
Elbern, H. and Schmidt, H. (2001). Ozone episode analysis by 4D-Var chemistry data assimilation. Journal of Geophysical Research. 106(D4): 3569–3590.
Evensen, G. (2006). Data Assimilation: The Ensemble Kalman Filter. Springer, New York.
Fisher, M. (2003). Background error covariance modelling. Proceedings of the ECMWF Workshop on Recent Developments in Data Assimilation for Atmosphere and Ocean, Reading, UK, pp. 45–64.
Fisher, M. and Lary, D.J. (1995). Lagrangian four-dimensional variational data assimilation of chemical species. Quarterly Journal of the Royal Meteorological Society. 121: 1681–1704.
Gery, M., Whitten, G.Z., Killus, J.E. and Dodge, M.C. (1989). A photochemical kinetics mechanism for urban and regional scale computer modeling. Journal of Geophysical Research. 94: 925–956.
Grell, G.A., Peckham, S.E., Schmitz, R., McKeen, S.A., Frost, G., Skamarock, W.C., Eder, B., Petron, G., Granier, C., Khattatov, B., Yudin, V., Lamarque, J.F., Emmons, L., Gille, J. and Edwards, D.P. (2005). Fully coupled "online" chemistry within the WRF model. Atmospheric Environment. 39(37): 6957–6975.
Hakami, A., Henze, D.K., Seinfeld, J.H., Singh, K., Sandu, A., Kim, S., Byun, D. and Li, Q. (2007). The adjoint of CMAQ. Environmental Science & Technology. 41(22): 7807–7817.
Kalman, R.E. (1960). A new approach to linear filtering and prediction problems. Transactions of the ASME: Journal of Basic Engineering. 82D: 35–45.
Le Dimet, F.X. and Talagrand, O. (1986). Variational algorithms for analysis and assimilation of meteorological observations: theoretical aspects. Tellus Series A: Dynamic Meteorology and Oceanography. 38(2): 97–110.
Le Dimet, F.X., Navon, I.M. and Daescu, D.N. (2002). Second-order information in data assimilation. Monthly Weather Review. 130(3): 629–648.
Liu, Z. and Sandu, A. (2008). Analysis of discrete adjoints of numerical methods for the advection equation. International Journal for Numerical Methods in Fluids. 56(7): 769–803.
Lorenc, A.C. (1986). Analysis methods for numerical weather prediction. Quarterly Journal of the Royal Meteorological Society. 112: 1177–1194.
Marchuk, G.I. (1995). Adjoint Equations and Analysis of Complex Systems. Kluwer Academic Publishers, Norwell, MA.
Menut, L., Vautard, R., Beekmann, M. and Honoré, C. (2000). Sensitivity of photochemical pollution using the adjoint of a simplified chemistry-transport model. Journal of Geophysical Research: Atmospheres. 105(D12): 15379–15402.
Navon, I.M. (1998). Practical and theoretical aspects of adjoint parameter estimation and identifiability in meteorology and oceanography. Dynamics of Atmospheres and Oceans. 27(1–4): 55–79.
Ozyurt, B.D. and Barton, P.I. (2005). Cheap second order directional derivatives of stiff ODE embedded functionals. SIAM Journal on Scientific Computing. 26(5): 1725–1743.
Parrish, D.F. and Derber, J.C. (1992). The National Meteorological Center's spectral statistical-interpolation analysis system. Monthly Weather Review. 120: 1747–1763.
Rabier, F., Jarvinen, H., Klinker, E., Mahfouf, J.F. and Simmons, A. (2000). The ECMWF operational implementation of four-dimensional variational assimilation. I: Experimental results with simplified physics. Quarterly Journal of the Royal Meteorological Society. 126: 1148–1170.
Sandu, A. and Sander, R. (2006). Simulating chemical systems in Fortran90 and Matlab with the Kinetic PreProcessor KPP-2.1. Atmospheric Chemistry and Physics. 6: 187–195.
Sandu, A., Daescu, D. and Carmichael, G.R. (2003). Direct and adjoint sensitivity analysis of chemical kinetic systems with KPP. I: Theory and software tools. Atmospheric Environment. 37: 5083–5096.
Sandu, A., Daescu, D., Carmichael, G.R. and Chai, T. (2005). Adjoint sensitivity analysis of regional air quality models. Journal of Computational Physics. 204: 222–252.
Talagrand, O. and Courtier, P. (1987). Variational assimilation of meteorological observations with the adjoint vorticity equation. Part I: Theory. Quarterly Journal of the Royal Meteorological Society. 113: 1311–1328.
Tang, Y.H., Lee, P., Tsidulko, M., Huang, H.C., McQueen, J.T., DiMego, G.J., Emmons, L.K., Pierce, R.B., Thompson, A.M., Lin, H.M., Kang, D., Tong, D., Yu, S., Mathur, R., Pleim, J.E., Otte, T.L., Pouliot, G., Young, J.O., Schere, K.L., Davidson, P.M. and Stajner, I. (2009). The impact of chemical lateral boundary conditions on CMAQ predictions of tropospheric ozone over the continental United States. Journal of Environmental Fluid Mechanics. 9(1): 43–58.
Wang, Z., Navon, I.M., Le Dimet, F.X. and Zou, X. (1992). The second order adjoint analysis: theory and applications. Meteorology and Atmospheric Physics. 50(1–3): 3–20.
Zhu, C., Byrd, R.H. and Nocedal, J. (1997). L-BFGS-B Fortran routines for large scale bound constrained optimization. ACM Transactions on Mathematical Software. 23(4): 550–560.
CHAPTER 16

Airshed Modelling in Complex Terrain

Peyman Zawar-Reza, Andrew Sturman and Basit Khan
16.1 INTRODUCTION

Environmental modelling of atmospheric systems is a major approach used in air quality management. It is frequently applied to airsheds, where an airshed is essentially a region within which an air pollution problem is largely "contained" because of the combined effect of the topography and local atmospheric conditions. Mountain basins are clear examples of airsheds, as their topography helps to contain the air pollution emitted inside them. Any pollutants emitted within the boundary of an airshed region tend to contribute to the air quality of only that region, although there is often some downstream leakage of pollution. It is also often assumed that there is no import of polluted air into the defined area, although again there may be some long-distance transport of pollutants from one airshed to another. In this idealised situation there is no reason to go outside the airshed to solve the air pollution problem, and any amelioration strategies should be applied to emissions that originate inside it. An airshed is essentially a three-dimensional concept (a volume), having both a horizontal and a vertical extent. Airshed models therefore need to be three-dimensional, and designed to represent atmospheric conditions (particularly airflow and vertical temperature structure) within the boundaries of the airshed under consideration. Airshed modelling has become increasingly important for addressing local air pollution problems (Reid et al., 2007). It is able to provide estimates of particulate and gaseous pollution exposure over urban areas, evaluate the impact of proposed industrial development, predict the effects of proposed air quality management strategies, and assess the representativeness of existing air pollution monitoring sites.
A range of different models has been used over the past two decades, from simple box models, Gaussian models and Lagrangian/Eulerian models (Holmes and Morawska, 2006; Zawar-Reza et al., 2005) to more complex models based on
advanced three-dimensional atmospheric modelling systems and computational fluid dynamics (e.g., Lee et al., 2007; Titov et al., 2007a, 2007b). The fundamental components of an advanced airshed modelling system include a part that predicts the three-dimensional meteorological fields (such as wind, temperature, turbulence intensity and moisture) and another that simulates atmospheric chemistry processes involving both aerosols and gases: it is the former component that will be the focus of this chapter. In many cases, this requires the coupling of a mesoscale meteorological model with an atmospheric chemistry model. These modelling systems are becoming increasingly sophisticated, but rely heavily on good input data (of both meteorology and emissions), as well as accurate boundary conditions (such as surface characteristics, topography and land use). This chapter provides an overview of recent research on the application of airshed models to the solution of a variety of air quality problems in countries where complex terrain is a significant issue, such as New Zealand and Iran. Section 16.2 provides a brief background to contemporary airshed models, and illustrates their usefulness and limitations for investigating the role of meteorology in the dispersion of air pollution in regions of complex terrain. Section 16.3 describes several case studies involving the application of airshed models to specific air quality issues, and illustrates how novel approaches can be taken to address such issues. These case studies focus on regions of complex terrain and proximity to the sea, and as a result they examine the air pollution meteorology of environments dominated by both land–sea-breeze circulations and topographically generated local wind systems.
16.2 BACKGROUND
Prognostic airshed models are basically numerical weather prediction codes tuned to provide air quality information. The numerical weather component is typically scaled to represent mesoscale atmospheric processes, where the mesoscale lies between the synoptic and local scales of motion (Figure 16.1). The function of the mesoscale model component is to provide high-resolution data on the mean wind and turbulence characteristics in the boundary layer, so that subsequent dispersion and chemical transformation of pollutants can be predicted by the air pollution component. Mesoscale models solve the three-dimensional Navier–Stokes primitive equations on a grid system, but also consider physical processes such as boundary layer processes, moist processes and radiative transfer. Because of computational limitations, Reynolds averaging is performed on the primitive equations, whereby the atmospheric flow is decomposed into mean and turbulent components. The mean component is explicitly resolved, and the turbulent (subgrid) scale is parameterised.
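The Reynolds decomposition can be illustrated on a single wind-speed series: the flow is split into a mean that the model resolves and a fluctuation that must be parameterised. A sketch with a synthetic series (all values illustrative):

```python
# Reynolds decomposition sketch: u = u_bar + u_prime, where u_bar is the
# resolved mean and u_prime the turbulent fluctuation. The wind series
# here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 3600, 3600)               # one hour of 1 Hz samples
u = (5.0 + 0.5 * np.sin(2 * np.pi * t / 3600)          # slow (resolvable) trend
     + 0.8 * rng.standard_normal(t.size))              # fast turbulent noise

u_bar = u.mean()                             # resolved (mean) component
u_prime = u - u_bar                          # subgrid (turbulent) component

assert abs(u_prime.mean()) < 1e-10           # fluctuations average to zero
tke_contrib = 0.5 * np.mean(u_prime ** 2)    # the kind of quantity a model
assert tke_contrib > 0                       # parameterises rather than resolves
```

The defining property, that the fluctuations average to zero while their variance does not, is exactly why the turbulent terms survive the averaging of the primitive equations and must be parameterised.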
16.2.1 Boundary layer control on air quality
An important consideration for airshed models is the treatment of the atmospheric boundary layer, where the daily fluctuation in mixing depth and temperature controls pollutant dispersion and chemistry. The thickness of the boundary layer is quite
Figure 16.1: Temporal and spatial scales of atmospheric motion according to Orlanski (1975) and Oke (1978): characteristic timescale (seconds to years) against characteristic horizontal distance scale (10 mm to 1000 km), spanning phenomena from small-scale turbulence, thermals and building effects through thunderstorms, local winds and fronts to cyclones, jet streams and long waves.
variable in space and time, ranging from tens of metres at night to 1 km or more during the day (Figure 16.2). A strong stable layer (called a "capping inversion") traps turbulence and pollutants, and inhibits surface friction from being felt by the free atmosphere. In mid-latitudes during fair-weather anticyclonic conditions, it is cool and calm at night, and warm and gusty during the daytime. The boundary layer is unstable whenever the surface is warmer than the air (i.e., during the day, or when cold air is advected over a warmer surface); a state of free convection then exists, with vigorous thermal updrafts and downdrafts. In contrast, the boundary layer is stable when the surface is colder than the air, as on a clear night or when warm air is advected over a colder surface, resulting in suppression of vertical motion.
Figure 16.2: Schematic of the evolution of the atmospheric boundary layer over the diurnal cycle, showing the surface layer, convective mixed layer, entrainment zone, capping inversion, residual layer, stable (nocturnal) boundary layer and free atmosphere, with virtual potential temperature (θv) profiles from noon through sunset and sunrise to the following noon. Source: adapted from Stull, R.B. (1988). An Introduction to Boundary Layer Meteorology. Kluwer Academic, Dordrecht.
Turbulent transfer allows quick communication of characteristics between the boundary layer and the underlying surface. Turbulence is controlled largely by the net all-wave radiation balance (Q*) and its diurnal evolution. It is convenient to define the net all-wave radiation balance as

Q* = S↓ − S↑ + L↓ − L↑    (16.1)
where S↓ is the magnitude of the flux of downwelling solar (short-wave) radiation that reaches the surface. The surface reflects some of the short-wave radiation back upwards (S↑). L↓ is the long-wave radiation emitted by the atmosphere, and L↑ is the long-wave radiation emitted by the surface. The sum of these terms yields the net all-wave radiation flux, which is the energy available for distribution by the surface. It turns out that, during the day, net all-wave radiation is positive (i.e., the surface has a surplus of energy) and, at night, it is negative (i.e., the surface has a deficit). In addition to the radiative fluxes at the Earth's surface, the fluxes of sensible and latent heat also need to be taken into account. The sensible heat flux heats the air in the boundary layer directly. The latent heat flux (i.e. the flux of water vapour multiplied by L, the latent heat of vaporisation) is not converted to sensible heat and/or potential energy until the water vapour condenses in clouds. The net all-wave energy gain or loss is partitioned among sensible heat flux (QS), latent heat flux (QE), and the conduction of heat down into the ground (QG):

Q* = QS + QE + QG    (16.2)
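Equations 16.1 and 16.2 can be exercised with representative daytime magnitudes; the numbers and the Bowen-ratio partition below are illustrative assumptions, not measurements:

```python
# Surface energy balance sketch for Equations 16.1 and 16.2 with
# representative daytime magnitudes (all values are assumptions).
S_down, S_up = 800.0, 160.0      # short-wave down / reflected, W m^-2
L_down, L_up = 330.0, 420.0      # long-wave down / emitted, W m^-2

Q_star = S_down - S_up + L_down - L_up          # Equation 16.1: net all-wave
assert Q_star == 550.0                          # daytime surplus (positive)

# Equation 16.2: partition Q* among sensible, latent and ground heat fluxes.
# A moist surface pushes energy into Q_E at the expense of Q_S (Bowen ratio < 1).
bowen_ratio = 0.5                               # Q_S / Q_E, assumed
Q_G = 0.1 * Q_star                              # assumed ground-flux fraction
Q_E = (Q_star - Q_G) / (1 + bowen_ratio)
Q_S = bowen_ratio * Q_E
assert abs(Q_star - (Q_S + Q_E + Q_G)) < 1e-9   # the balance closes
```

The Bowen ratio chosen here is the single knob that decides how much of the available energy heats the air directly, which is why surface moisture matters so much for boundary layer growth.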
The exchange of heat through QS between the surface and the atmosphere creates buoyant plumes during the day, when warmed air from the surface is mixed upwards, while at night the tendency is for the surface to cool the air, causing the formation of nocturnal inversion layers that inhibit vertical motion. The simplistic depiction of boundary layer evolution shown in Figure 16.2 applies only to extensive flat terrain with homogeneous surface cover. However, most airsheds are situated over sloping terrain and/or have heterogeneous surface cover, including water–land discontinuities. Most of the Earth's surface is like this, collectively referred to as "complex topography". Over sloping terrain, during the day the tendency is for airflow to be towards higher topography (wind systems such as anabatic winds, up-valley winds and up-slope winds belong to this category), while at night the cold air drains towards lower terrain (down-slope winds, katabatic winds and cold air drainage winds; Figure 16.3). If each individual slope generates its own wind circulation, complicated airflow patterns can form in airsheds. Figure 16.4 (see colour insert) captures the nocturnal flow over Christchurch, New Zealand, using KLAM_21, a cold air drainage model developed by the Environment and Climate Consultancy Department of the German national weather service (Deutscher Wetterdienst) (Kossmann and Sievers, 2007). Here, KLAM_21 is used to simulate the nocturnal surface wind field during winter smog nights. In this case, the drainage winds from the steep topography surrounding the city lead to a stagnation/convergence zone over the city centre, which exacerbates the reduction in air quality. A successful prediction of ground-level
Figure 16.3: Schematic of anabatic (daytime, up-slope) and katabatic (night-time, down-slope) flows, showing vertical temperature structure.
concentration of pollutants by an airshed model depends on adequate replication of such a complex airflow.
16.2.2 Local winds
There are numerous papers that discuss the dynamics of local wind systems, such as the sea-breeze and mountain–valley circulations, and their significance for air pollution dispersion. For example, Lu and Turco (1994) used a two-dimensional atmospheric model to study boundary layer dynamics for the coastal city of Los Angeles, and especially the role played by the coastal mountain ranges. In mountainous terrain, a hierarchy of circulations can be generated that act together to either disperse or concentrate pollutants by modifying surface winds and low-level stability (Whiteman, 2000). These local winds may enhance or inhibit dispersion of air pollution, although, as they tend to be closed circulations forming in light wind conditions, their effect is often to exacerbate air pollution problems. To illustrate the complexity of boundary layer development in complex terrain, and its significance for air pollution dispersion, examples from the high-resolution Weather Research and Forecasting model (WRF; Skamarock et al., 2007) for a typical sea-breeze day are presented. The wind vectors in Figure 16.5 show the predicted pattern of sea-breeze development from the east and west coasts of the Auckland region (New Zealand) at 1200 local time (LT) on 18 March 2008. The injection of cold air by the sea breeze is seen to suppress the mixing height (or planetary boundary layer height) over the land, with this suppression being evident for the areas adjacent to the coast where the sea breeze has encroached (Figure 16.6a – see colour insert). The collapse of the mixed layer should lead to higher pollutant concentrations, except that the sea breeze brings higher wind speeds, increasing the horizontal dilution of pollutants. Therefore the concentration of pollutants is dependent on a balance between the depth of the mixing layer after the passage of the sea breeze and the increase in wind speed, both of which have to be simulated successfully by any airshed model. 
The two-dimensional example in the next section examines this point more extensively.
Figure 16.5: Modelled wind field at noon on 18 March 2008 for the Auckland region, New Zealand. The thick black line indicates the coast. Opposing sea-breeze circulations are converging towards the middle of the peninsula to the north, while a diverging flow has established in the Manukau Harbour (middle of the plot) owing to the coastal configuration.
Areas where the sea breeze has not yet reached experience vigorous convective activity under clear sky conditions. At sunset, the only region with a significant mixing height is the Auckland urban area (Figure 16.6b – see colour insert). Here, the anthropogenic sensible heat flux from the urban fabric inhibits the formation of the nocturnal inversion layer.
16.3 APPLICATION OF AIRSHED MODELS

This section first covers recent research on the importance of proper initialisation of airshed models, and then reviews airshed modelling experiments performed for two cities in extremely complex terrain: Christchurch, New Zealand, and Tehran, Iran.
16.3.1 Importance of initialisation
Proper specification of surface boundary conditions is probably one of the most important determinants of how the model meteorological fields will develop, yet its
effects are rarely examined. A quasi-two-dimensional version of The Air Pollution Model (TAPM, version 3; Hurley et al., 2005; Luhar and Hurley, 2004) is applied to examine the effect of soil moisture initialisation on the development of a sea-breeze circulation over flat terrain, and its effect on tracer concentrations. The two-dimensional framework provides a powerful analytical tool, as much of the complexity of three-dimensional flow is stripped away. Because the computational demand is considerably reduced, ensemble testing of various "tuning parameters", such as soil moisture, becomes an attractive option. Some of the pioneering work in numerical analysis of airflow in complex terrain used the two-dimensional framework, and interested readers are referred to Mannouji (1982), who first formulated the concept of scale separation of thermally generated winds. Soil moisture is known to affect meteorological model predictions: it affects ground temperatures, which in turn affect the vertical temperature profile of the boundary layer. Hence it exerts a control on boundary layer depth, air pressure, and wind speed. The effects of soil moisture variations on planetary boundary layer depth and circulation patterns have been studied by Zhang and Anthes (1982), Mahfouf et al. (1987), Lanicci et al. (1987) and Jacobson (1999). Lanicci et al. (1987), for example, found that low soil moisture over the Mexican plateau permitted the formation of a deep mixed layer as a result of strong surface heating and vertical mixing. The results presented here are obtained from a series of quasi-two-dimensional TAPM runs. The domain has 10 grid points in the north–south direction (with no variation in land use) and 100 grid points in the east–west direction (where roughly the western half of the domain is land and the eastern half is water): hence the term quasi-two-dimensional. The airshed model provides predictions of meteorological variables and pollutant(s) for each grid point.
Topography is flat, with a height of zero metres above sea level, and each grid cell has a resolution of 1 km². TAPM also needs three-dimensional initial and boundary conditions for the meteorological module to function. The atmosphere is assumed to be at rest initially, so that any wind is generated solely by terrain heating and cooling, and the lateral boundary conditions use the zero-gradient option. The simulated domain is centred at mid-latitudes in the Southern Hemisphere: this is an important specification, as variations in Coriolis force and solar radiation are latitude dependent and result in different sea-breeze characteristics (Bossert and Cotton, 1994). The time of year is set to January (austral summer). Soil type is set to sandy clay-loam (uniform medium), and vegetation parameterisations are turned off. Soil moisture was increased incrementally from a dry value of 0.05 m³ m⁻³ to an almost saturated value of 0.4 m³ m⁻³ to test its effects on boundary layer development. To examine the subsequent effect on air quality, a simple emission profile for a tracer was employed. A 100 km² area source is placed near the centre of the domain on the coast, with a tracer emission strength of 200 g s⁻¹. The emission source starts at 0700 LT in the morning, and has a constant strength for the rest of the day. All simulations produced an easterly sea-breeze density current, albeit with different characteristics. Figure 16.7 provides an example of a cross-sectional perspective through the modelled domain, showing the westward propagation of the sea breeze (from east to west) for the case where the soil moisture was initialised with a
AIRSHED MODELLING IN COMPLEX TERRAIN
Figure 16.7: Isopleths of the u-component of the wind speed at 1400 local time for the simulation initialised with a soil moisture content of 0.2 m³ m⁻³. Negative (positive) values indicate easterly (westerly) flow. The thick black line (labelled Z) indicates the height of the mixed layer. The black arrow indicates the sea-breeze direction.
[Plot: height (km ASL) against east–west distance (km); land lies to the west, sea to the east.]
value of 0.2 m³ m⁻³. Above the sea breeze, a countercurrent or return flow is evident, which, combined with the cold air advection at the surface, depresses the mixed-layer height. Such a mixed-layer height depression has also been observed by other researchers, including Lu and Turco (1994). The mixed-layer height in front of the breeze tends to be higher owing to mechanical turbulence generated by the sea-breeze front. Figure 16.8 illustrates the time-series of wind speed, temperature, and sensible and latent heat fluxes obtained for a model run with soil moisture set at 0.15 m³ m⁻³. Sensible heat flux from the surface controls the air temperature, which is the primary forcing factor for the strength of the sea breeze: the warmer the air over land, the stronger the developed flow will be. The magnitude of the sensible heat flux is in turn dependent on the latent heat flux, because there is limited available energy to drive the fluxes. So if the soil is wet, energy will partition mostly into latent heat flux at the cost of sensible heat flux. Interestingly, there appear to be two different responses to variation in soil moisture initialisation, demarcated by a soil moisture value of 0.15 m³ m⁻³. For simulations initialised with dry values (below 0.15 m³ m⁻³), wind speeds are higher, with the sea-breeze intrusion occurring an hour earlier (Figure 16.8a). The maximum intensity of the wind reaches about 6 m s⁻¹, almost twice as much as in the wet cases, and winds also tend to remain stronger at night. Temperature responds as expected, with drier simulations reaching higher maximum daily values. The temperature trace for the 0.15 m³ m⁻³ case is interesting, in that the peak lags by almost 4 hours compared with the other runs (Figure 16.8b). This is because the sensible heat flux does not peak for this run until the latent heat flux has decreased sharply, after the moisture has evaporated from the top layer of soil (Figures 16.8c and 16.8d).
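The energy partitioning just described can be illustrated with a toy surface energy balance in which the available energy is split into sensible (H) and latent (LE) heat flux according to soil wetness. The linear moisture-availability factor and the wilting/saturation limits below are assumed, schematic values; this is not the land-surface scheme actually used in TAPM.

```python
# Schematic split of available energy into sensible (H) and latent (LE)
# heat flux as soil moisture varies. The linear availability factor is
# illustrative only.
def partition_fluxes(available_wm2, soil_moisture,
                     wilting=0.05, saturation=0.40):
    """Return (H, LE) in W m-2 for a given volumetric soil moisture."""
    # Fraction of available energy going to evaporation grows with wetness.
    alpha = (soil_moisture - wilting) / (saturation - wilting)
    alpha = min(max(alpha, 0.0), 1.0)
    le = alpha * available_wm2
    return available_wm2 - le, le

for sm in (0.05, 0.15, 0.35):
    h, le = partition_fluxes(600.0, sm)
    print(f"soil moisture {sm:.2f}: H = {h:5.1f} W m-2, LE = {le:5.1f} W m-2")
```

Under this sketch a dry initialisation sends nearly all of the available energy into sensible heat, warming the land and strengthening the sea breeze, while a wet one diverts most of it into evaporation: the behaviour seen in Figures 16.8c and 16.8d.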
Figure 16.8: Time-series of (a) wind speed, (b) temperature, (c) sensible heat flux, and (d) latent heat flux at 10 m above the surface. The thick black line indicates results for the simulation initialised with an intermediate soil moisture content of 0.15 m³ m⁻³. Dry runs are in grey, wet runs in black.

The partitioning of net radiation into sensible and latent heat fluxes changes as
soil moisture is increased. The amount of energy that goes into evaporating surface water increases with higher soil moisture, leaving less energy for sensible heat flux. Figure 16.9 illustrates the time-series of tracer concentration; the general trend in the daily evolution of tracer concentration remains the same across all runs. The morning peak drops off suddenly owing to the incursion of the sea breeze, which increases the wind intensity. Therefore, although the mixed-layer height is depressed because of cold air advection, the stronger winds more than offset this effect, leading to smaller tracer concentrations for the remainder of the daytime. Throughout the day, tracer levels stay more or less constant until close to sunset, when the weakening sea breeze and the increase in static stability near the surface cause a dramatic increase in concentration. In general, the dry runs exhibit faster wind speeds and higher mixing-layer heights (not shown here), leading to reduced tracer levels (Figure 16.9a).

Figure 16.9: Time-series of (a) tracer concentration (µg m⁻³) and (b) percentage difference against the baseline simulation for the runs initialised with soil moisture contents of 0.05, 0.1, 0.15, 0.2, 0.25 and 0.35 m³ m⁻³.

There is as
much as a twofold difference between the dry and wet runs before the sea-breeze incursion (before 1000 LT). Between 1200 and 1800 LT, with the wetter soil initialisation, the sea-breeze intensity and the mixing-layer depth are smaller, leading to higher ground-level concentrations. The evolution of tracer ground-level concentrations for the simulation initialised with the value of 0.15 m³ m⁻³ is rather interesting. Until 1100 LT the levels match the wet simulations; then, in response to the top layer of soil drying out, which leads to the subsequent increase (decrease) in sensible (latent) heat flux, the concentration levels match the dry regimes (Figure 16.9a). It should also be noted that the nocturnal concentrations tend to be twice as high with the wetter soils. This highlights the importance of proper soil moisture initialisation in correctly modelling pollutant concentrations in coastal areas; such sensitivity analysis is rarely performed. If the 0.15 m³ m⁻³ run is taken as a baseline, the percentage difference in tracer concentration relative to the other runs quantifies this sensitivity (Figure 16.9b). At the initial stages, before the sea breeze has developed, the differences are up to 30%, influenced mostly by how soil moisture affects mixed-layer height alone. However, as the boundary layer becomes dominated by the cold airflow of the sea breeze, differences of as much as 100% can occur between the dry and wet runs. Considering that the emission strength is the same for all cases, this again underlines the sensitivity to soil moisture initialisation. The percentage difference is even greater at night, reaching a maximum of 150%.
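The percentage differences plotted in Figure 16.9(b) follow directly from the raw concentration time-series. A minimal sketch, with invented concentration values, is:

```python
# Percentage difference of a tracer time-series against a baseline run,
# as used for Figure 16.9(b). The concentration values are invented.
def percent_difference(run, baseline):
    """Element-wise 100 * (run - baseline) / baseline."""
    return [100.0 * (c - b) / b for c, b in zip(run, baseline)]

baseline = [50.0, 40.0, 20.0, 80.0]   # e.g. the 0.15 m3 m-3 run
wet_run  = [55.0, 60.0, 30.0, 200.0]  # wetter soil: higher concentrations
print(percent_difference(wet_run, baseline))
# -> [10.0, 50.0, 50.0, 150.0]
```

Because the emission strength is identical across runs, any non-zero entry in the result is attributable to soil-moisture-driven differences in meteorology alone.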
16.3.2 Tracing the movement of air
Christchurch (New Zealand) and nearby small towns have an ongoing air pollution problem resulting from the combustion of wood and coal for domestic heating. The diurnal variation of air pollution concentrations in the city is the result of a combination of fluctuations in emissions and the daily evolution of atmospheric conditions (Figure 16.10). The establishment of clean air zones is a major part of the regional air quality management strategy, outlining areas within which emissions from solid fuel burners are more tightly controlled. Airshed modelling has assisted in the delimitation of the clean air zone for Christchurch through the use of back-trajectory techniques based on the predicted wind field under worst-case air pollution conditions (Sturman and Zawar-Reza, 2002). This involved tracing the movement of air parcels across the city during the build-up of air pollution on highly polluted nights, as shown in Figure 16.11. The Regional Atmospheric Modelling System (RAMS) was used to predict the wind field under worst-case conditions, and kinematic Lagrangian back-trajectories were based on the predicted winds. Similar airshed model applications have been used to define buffer zones around nearby small towns for the control of pollution emissions. Figure 16.12 (see colour insert) illustrates the application of back-trajectory techniques to the delimitation of the buffer zone around Kaiapoi, a small town near Christchurch also affected by nocturnal cold air drainage during air pollution build-up.
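A kinematic Lagrangian back-trajectory of the kind described above integrates parcel positions backwards in time through the predicted wind field. In the sketch below the wind is prescribed analytically; in practice `wind_at` would interpolate the RAMS-predicted winds in space and time, so both the field and the step length here are illustrative assumptions, not the published implementation.

```python
# Minimal kinematic back-trajectory: step a parcel backwards in time
# through a horizontal wind field.
def wind_at(x_km, y_km, t_s):
    """Stand-in wind field: steady 2 m/s easterly with a weak southerly."""
    return -2.0, 0.5  # (u, v) in m/s

def back_trajectory(x0_km, y0_km, t0_s, duration_s, dt_s=600.0):
    """Trace a parcel backwards from (x0, y0) at time t0."""
    x, y, t = x0_km, y0_km, t0_s
    path = [(x, y)]
    for _ in range(int(duration_s / dt_s)):
        u, v = wind_at(x, y, t)
        x -= u * dt_s / 1000.0  # going backwards: subtract u * dt (in km)
        y -= v * dt_s / 1000.0
        t -= dt_s
        path.append((x, y))
    return path

# Where was the air arriving at (40, 20) km at 2200 LT one hour earlier?
path = back_trajectory(40.0, 20.0, t0_s=22 * 3600, duration_s=3600.0)
print(path[-1])  # larger x than the receptor: the parcel came from the east
```

Repeating this for receptors across the city under worst-case winds yields a fan of trajectories like those in Figure 16.11, whose upwind envelope informs the clean air zone boundary.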
Figure 16.10: Average diurnal cycle of PM10 (µg m⁻³) and CO (mg m⁻³) concentrations for Christchurch during May to August.
Source: data obtained from Environment Canterbury, New Zealand.
16.3.3 Airshed modelling for a mega-city in a semi-enclosed basin
Tehran, the capital of the Islamic Republic of Iran, is situated in a semi-enclosed inland basin at the foothills of the Alborz Mountain Range (with an average height of 2000 m ASL). The 11 million inhabitants of the city are exposed to poor air quality for most of the year, owing to high emission levels from motor vehicles and the poor ventilation characteristics of the atmosphere. Episodically in winter the situation gets much worse, owing to the formation and persistence of nocturnal inversion layers during stagnant anticyclonic conditions. Analysis of the data shows the prevalence of a diurnally reversing up-slope/down-slope circulation system, and morning and evening peaks in PM10 (particulate matter with an aerodynamic diameter of less than 10 µm) associated with the rush-hour traffic and with the stagnation period during the transition from down-slope to up-slope flow in the morning and its reversal in the evening. TAPM is used in a novel way to simulate mesoscale flow and PM10 dispersion by simulating only terrain-induced flows. It is found that the rush-hour traffic is mostly responsible for the morning and evening PM10 peaks, while the contribution associated with stagnation in the morning is approximately 5 µg m⁻³, but could be twice as high for the evening transition period, especially in autumn. The modelling methodology adopted for this research is based on idealised scenarios to help understand how peak emissions and local meteorology interact. Idealised scenarios have a certain appeal, as much of the complexity due to day-to-day variation in synoptic meteorology is removed, making their application more
Figure 16.11: Back-trajectories showing the predicted movement of air parcels across the Christchurch area between 1800 and 2200 LT under worst-case air pollution conditions. (Axes: west–east and north–south distance/km.)
Source: Sturman, A. and Zawar-Reza, P. (2002). Application of back-trajectory techniques to the delimitation of urban clean air zones. Atmospheric Environment. 36: 3339–3350.
appropriate, especially in coastal and mountainous regions that experience persistent diurnally reversing circulations. Such techniques have been applied successfully by Lu and Turco (1994) for the Los Angeles Basin and by Whiteman et al. (2000) for the Mexico City Basin. Simulations presented here use four domains with grid spacings of 27, 9, 3 and 1 km, respectively. Each domain has 30 zonal and meridional grid nodes, with 50 vertical levels stacked on top to provide high resolution in the vertical axis. Two days, one in July and one in October, were chosen to test the effect of seasonality on local-scale circulation. Two experiments were performed for each season: the first used hourly varying values for PM10 emissions derived from hourly aggregate traffic volumes (Halek et al., 2004) to obtain a diurnal breakdown of the estimated daily emissions of 11 t due to transport (Shirazi and Harding, 2001). The second experiment is used to study the effect of meteorology on air quality, so a constant hourly emission of 128 g s⁻¹ is specified.
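The constant emission rate used in the second experiment follows from the estimated daily total: 11 t of PM10 spread evenly over 86 400 s is about 127 g s⁻¹, consistent with the 128 g s⁻¹ specified. A quick check, plus a hypothetical diurnal weighting of the same total (the 24 hourly weights below are invented for illustration; only the daily total comes from the text):

```python
# Consistency check: constant emission equivalent to 11 t/day of PM10.
daily_total_g = 11.0 * 1e6            # 11 tonnes in grams
constant_rate = daily_total_g / 86400.0
print(f"constant rate: {constant_rate:.1f} g/s")  # ~127.3 g/s

# A time-varying profile redistributes the same daily total with hourly
# traffic weights (these 24 weights are invented for illustration).
weights = [0.2] * 6 + [1.8] * 3 + [1.0] * 9 + [1.8] * 3 + [0.6] * 3
hourly_rates = [daily_total_g * w / sum(weights) / 3600.0 for w in weights]

# Both profiles emit the same mass over the day.
assert abs(sum(r * 3600.0 for r in hourly_rates) - daily_total_g) < 1e-3
```

Keeping the daily mass identical between the two experiments is what allows any difference in predicted concentrations to be attributed to emission timing and meteorology rather than to emitted mass.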
Figure 16.13 illustrates the hourly statistics of PM10 and wind speed for July and October 2005. Two peaks, one in the morning and one in the evening, are evident, corresponding to the rush-hour traffic. The morning peak occurs at 0800 Local Standard Time (LST; GMT+3.5). The evening peak is much sharper and occurs at 2100 LST in summer, but rises gradually from about 1600 LST in winter. This is probably because in October schools were in session, changing the traffic volume profile, but could also be due to the earlier onset of a stable nocturnal profile. There is a marked reduction in concentrations outside the rush-hour periods during daytime, which seems to last longer in summer. The wind direction frequency distribution over Tehran shows a strong bimodal behaviour in both seasons (not shown). The surface winds are mostly from the south during the day (up-slope) and from the north at other times (down-slope). Considering that Tehran is located on sloping terrain near the foothills of a significant mountain range, this can be attributed to the existence of slope flow circulations (Whiteman, 2000). In general, the wind speeds over Tehran are relatively light, with the median speed barely exceeding 3 m s⁻¹ during the day, and even lower at night (Figure 16.13). However, the passage of low-pressure systems can occasionally increase wind speeds. The low wind speeds are probably the result of sheltering by the surrounding topography. It is unfortunate that a mega-city such as Tehran developed in a geographic location with poor ventilation characteristics. A closer examination reveals a minimum in wind speed at 0700 LST in July and an hour later in October, which could be linked to the shift in wind direction. This stagnation during the transition period could be associated with a mesoscale front-like feature (i.e. formed through the interaction between the up-slope and down-slope air masses) that needs further research, but has been described previously by Hunt et al. (2003). The evening stagnation (transition) is evident at 2000 LST in summer, but not so discernible in the autumn case. Brazel et al. (2005) provided an analysis of evening transition observations for Phoenix, Arizona, a city in a similarly complex topographic setting. The importance of the morning and evening transition periods in Tehran's case is that they may adversely affect air quality. This aspect is examined next. As synoptic effects are neglected, modelled airflow is caused only by mesoscale horizontal gradients in pressure, which in turn are driven by gradients in temperature. In this case, the only factor that can generate mesoscale flow is the sloping mountainous terrain, which tends to act as a heat source during the day, drawing air from lower levels, and as a heat sink at night, reversing the daytime flow. As simulations are initialised at midnight, after a 3-hour period of adjustment the wind turns northerly, simulating the down-slope flow (Figure 16.14). From then on, the flow oscillates between a daytime up-slope and a nocturnal down-slope flow. Surprisingly, there is little difference in the intensity of the daytime and night-time winds, although previous experience with TAPM has shown a tendency to overestimate night-time flow over sloping terrain (Zawar-Reza and Sturman, 2008). The circulation in October is about 0.5 m s⁻¹ weaker, which is related to the intensity of shortwave energy received at the surface. The simulated stagnation period is much stronger than that observed (Figure 16.14), with the wind intensity dropping by about 2 m s⁻¹
Figure 16.13: Hourly statistics for PM10 (µg m⁻³) and wind speed (m s⁻¹) at 14 m for July (top panels) and October (bottom panels) for Tehran. (The inset is the estimated hourly varying PM10 emission, in g s⁻¹, due to transport.)

Source: data kindly provided by Air Quality Control Company of Tehran, Iran.
Figure 16.14: Simulated (a) wind direction and (b) wind speed from the idealised runs with TAPM. (Two simulated days are shown for both the July and the October runs.)
for each oscillation. This is most likely because the modelled data represent conditions for a volume of air (1 km × 1 km × 10 m in this case), whereas the measurements represent conditions at a point. As with the measured data, the duration of the up-slope flow is longer in summer, with initiation at 0700 LST and cessation at 1900 LST. Figure 16.15 presents the results from the time-varying and constant emission profile experiments. In the time-varying experiment, morning and evening peaks of PM10 are present, corresponding to the periods with maximum emission strengths (Figure 16.15a). As the hourly emission profile of PM10 is the same for both seasonal runs, the
Figure 16.15: Time-series of modelled PM10 (µg m⁻³) using (a) time-varying emissions and (b) a constant hourly emission profile. (Two simulated days are shown for both the July and the October runs.)
difference should be due to meteorological conditions only. The morning peaks in October are prolonged by 2 hours, showing the poorer ventilation characteristics in this season, and the afternoon peaks start much earlier. This predicted behaviour of PM10 is similar to what is observed. The effect of meteorology on air quality is much clearer in the second experiment (Figure 16.15b). Any diurnal change in concentrations with a constant hourly emission profile can only be due to the simulated dispersive capability of the modelled
atmosphere. After the initial adjustment period, the concentrations settle down to about 35 µg m⁻³, and return to this value on the second night also. The morning peaks are insignificant (an increase of only 2 µg m⁻³) in both seasons, but in October the elevated concentrations last 2 hours longer. The daytime concentrations drop significantly with the growth of the mixed layer and the increased wind speeds, as observed. As with the measurements, the build-up of the evening peaks occurs much earlier in autumn, with higher maxima. This probably highlights the influence of meteorology in controlling the evening peak, rather than a seasonal change in traffic behaviour.
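The nocturnal concentration plateau under constant emissions can be rationalised with a simple steady-state ventilation ("box") model, C = Q/(u·W·H): the emission rate divided by the volume flux of air through the urban box. The wind speed, box width and mixing heights below are illustrative order-of-magnitude choices, not values taken from the TAPM runs.

```python
# Steady-state box model: concentration from a constant emission rate Q
# ventilated through a box of crosswind width W and mixing height H.
# All inputs below are illustrative order-of-magnitude values.
def box_concentration(q_g_per_s, wind_m_s, width_m, mix_height_m):
    """C = Q / (u * W * H), converted from g m-3 to micrograms per m3."""
    return q_g_per_s / (wind_m_s * width_m * mix_height_m) * 1e6

# Night: light drainage flow, shallow stable layer.
night = box_concentration(128.0, 2.0, 30_000.0, 60.0)
# Day: stronger up-slope flow, deep mixed layer.
day = box_concentration(128.0, 3.0, 30_000.0, 1_500.0)
print(f"night ~ {night:.0f} ug/m3, day ~ {day:.1f} ug/m3")
```

With these assumed inputs the nocturnal value lands near the ~35 µg m⁻³ plateau reported above, while the deeper, better-ventilated daytime boundary layer dilutes the same emission by well over an order of magnitude: the pattern seen in Figure 16.15(b).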
16.4 CONCLUDING COMMENTS

This chapter has provided a brief introduction to airshed models, and has covered the physical mechanisms that control boundary layer height and pollution concentrations. Recent research for two cities situated in complex airsheds was then presented. Christchurch, New Zealand, and Tehran, Iran, both suffer from poor air quality resulting from excessive local emission of pollutants. Airshed models have provided significant information on the nature of the air quality problems in both locations and, in the case of Christchurch, clean air zones have been delimited by relying solely on results provided by airshed models. The importance of appropriate application of airshed models was illustrated by investigating their sensitivity to soil moisture initialisation. Airshed models were also shown to have utility when used in an idealised framework for the case of Tehran, where diurnally reversing wind systems repeatedly dominate low-level meteorology.
REFERENCES

Bossert, J.E. and Cotton, W.R. (1994). Regional-scale flows in mountainous terrain. Part II: Simplified numerical experiments. Monthly Weather Review. 122: 1472–1489.
Brazel, A.J., Fernando, H.J.S., Hunt, J.C.R., Selover, N., Hedquist, B.C. and Pardyjak, E. (2005). Evening transition observations in Phoenix, Arizona. Journal of Applied Meteorology. 44: 99–112.
Halek, F., Kavouci, A. and Montehaie, H. (2004). Role of motor-vehicles and trend of airborne particulate in the Great Tehran area, Iran. International Journal of Environmental Health Research. 14: 307–313.
Holmes, N.S. and Morawska, L. (2006). A review of dispersion modelling and its application to the dispersion of particles: an overview of different dispersion models available. Atmospheric Environment. 40: 5902–5928.
Hunt, J.C.R., Fernando, H.J.S. and Princevac, M. (2003). Unsteady thermally driven flows on gentle slopes. Journal of the Atmospheric Sciences. 60: 2169–2182.
Hurley, P.J., Physick, W.L. and Luhar, A.K. (2005). TAPM: a practical approach to prognostic meteorological and air pollution modelling. Environmental Modelling & Software. 20: 737–752.
Jacobson, M.Z. (1999). Effects of soil moisture on temperatures, winds, and pollutant concentrations in Los Angeles. Journal of Applied Meteorology. 30: 607–616.
Kossmann, M. and Sievers, U. (2007). KLAM_21 drainage wind modelling of wintertime air pollution events in Christchurch, New Zealand. Proceedings of the 29th International Conference on Alpine Meteorology, Chambéry, France, Vol. 1, pp. 29–32.
Lanicci, J.M., Carlson, T.N. and Warner, T.T. (1987). Sensitivity of the Great Plains severe-storm environment to soil-moisture distribution. Monthly Weather Review. 115: 2660–2673.
Lee, S.-M., Fernando, H.J.S. and Grossman-Clarke, S. (2007). MM5-SMOKE-CMAQ as a modelling tool for 8-h ozone regulatory enforcement: application to the state of Arizona. Environmental Modelling and Assessment. 12: 63–74.
Lu, R. and Turco, R.P. (1994). Air pollutant transport in a coastal environment. Part I: Two-dimensional simulations of sea-breeze and mountain effects. Journal of Atmospheric Sciences. 51: 2285–2308.
Luhar, A.K. and Hurley, P.J. (2004). Application of a prognostic model TAPM to sea-breeze flows, surface concentrations, and fumigating plumes. Environmental Modelling & Software. 19: 591–601.
Mahfouf, J.-F., Richard, E. and Mascart, P. (1987). The influence of soil and vegetation on the development of mesoscale circulations. Journal of Climate and Applied Meteorology. 26: 1483–1495.
Mannouji, N. (1982). A numerical experiment on the mountain and valley winds. Journal of the Meteorological Society of Japan. 60: 1085–1105.
Oke, T.R. (1978). Boundary Layer Climates. Methuen, London.
Orlanski, I. (1975). A rational subdivision of scales for atmospheric processes. Bulletin of the American Meteorological Society. 56: 527–530.
Reid, N., Misra, P.K., Amman, M. and Hales, J. (2007). Air quality modelling for policy development. Journal of Toxicology and Environmental Health. 70: 295–310.
Shirazi, M.A. and Harding, A.K. (2001). Ambient air quality levels in Tehran, Iran, from 1988 to 1993. International Journal of Environment and Pollution. 15: 517–527.
Skamarock, W.C., Klemp, J.B., Dudhia, J., Gill, D.O., Barker, D.M., Wang, W. and Powers, J.G. (2007). A description of the advanced research WRF version 3. NCAR, Boulder, CO, USA, NCAR/TN-468+STR.
Stull, R.B. (1988). An Introduction to Boundary Layer Meteorology. Kluwer Academic, Dordrecht.
Sturman, A. and Zawar-Reza, P. (2002). Application of back-trajectory techniques to the delimitation of urban clean air zones. Atmospheric Environment. 36: 3339–3350.
Titov, M., Sturman, A. and Zawar-Reza, P. (2007a). Improvement of predicted fine and total particulate matter (PM) composition by applying several different chemical scenarios: a winter 2005 case study. Science of the Total Environment. 385: 284–296.
Titov, M., Sturman, A.P. and Zawar-Reza, P. (2007b). Application of MM5 and CAMx4 to local scale dispersion of particulate matter for the city of Christchurch, New Zealand. Atmospheric Environment. 41: 327–338.
Whiteman, C.D. (2000). Mountain Meteorology. Oxford University Press, New York.
Whiteman, C.D., Zhong, S., Bian, X., Fast, J.D. and Doran, J.C. (2000). Boundary layer evolution and regional-scale diurnal circulations over the Mexico Basin and Mexican plateau. Journal of Geophysical Research. 105: 10081–10102.
Zawar-Reza, P. and Sturman, A. (2008). Application of airshed modelling to the implementation of National Environmental Standards for air quality: a New Zealand case study. Atmospheric Environment. 42: 8785–8794.
Zawar-Reza, P., Sturman, A. and Hurley, P. (2005). Prognostic urban-scale air pollution modelling in Australia and New Zealand: a review. Clean Air and Environmental Quality. 39: 41–45.
Zhang, D. and Anthes, R.A. (1982). A high-resolution model of the planetary boundary layer: sensitivity tests and comparisons with SESAME-79 data. Journal of Applied Meteorology. 21: 1594–1609.
EPILOGUE
The use of innovative mathematical modelling tools in modern environmental analysis is a reflection of need, advanced computing capabilities, pioneering software developments, and the drive and enthusiasm demonstrated by investigators in this rapidly expanding field. Current models are being designed to be able to effectively predict and simulate the dynamics of environmental systems, and are routinely used in environmental decision-making and management. Since completion of this volume, and reflecting back upon Volume I, I cannot stress enough how impressed I am by the breadth of expertise of the contributors, and by the variety and depth of the applications presented. This volume expands the knowledge base of Volume I and contributes to the comprehensive nature of the Advanced Topics in Environmental Science Series published by ILM Publications. Both volumes present unique modelling approaches to studying the transport and behaviour of contaminants in complex environmental systems. Moreover, detailed discussion on such concepts as model accuracy, generalisability, uncertainty, sensitivity analysis, and validation complement application-specific content and provide readers with a greater understanding of the model assessment process. Such topics detail the complexity of environmental models and the limitations to their use. I believe the future of environmental modelling lies in the ability to address a number of key considerations, including complexity, variability, scale, and purpose. Model performance criteria must be better defined and measured. Uncertainty, both model- and data-based, must be routinely incorporated into simulation and predictive protocols. There is great support and movement towards hybrid systems – the combination of two or more techniques (paradigms) to realise powerful problem-solving strategies. The suitability of individual techniques is case specific, each with advantages and potential drawbacks.
Ideally, hybrid systems will combine two or more techniques, with the ultimate goal of gaining the strengths and overcoming the weaknesses of single approaches. Consider, for example, the integration of symbolic (e.g., fuzzy systems, evolutionary computation) and connectionist (e.g., artificial neural networks) systems. This combination, yielding a neuro-fuzzy system, is likely to provide an effective and efficient approach to environmental problem-solving. Finally, I challenge researchers in academia, industry and government-related bodies to collaborate in developing environmental models, and to employ them efficiently, with the emphasis on preserving environmental quality in the course of regional, national and global development activities.

Grady Hanrahan
INDEX
3D-Var and 4D-Var approaches 421, 424, 426–7, 429–31 3MRA model 71, 112, 114 absolute principal components scores (APCS) analysis 160–1 adjoint models, air quality simulations continuous vs discrete 427–8 evaluation of gradients 425–7 receptor-orientated approach 426 advection, solute transport in soils 316 AERMOD model 69, 72, 73, 74, 82 AIRNOW 429–30 ozone observation 432 airshed modelling in complex terrain 435–54 anabatic and katabatic flows 440 applications 441–53 initialisation 441–6 Tehran, Iran, modelling in a semienclosed basin 447–53 tracing movement of air 446–7 atmospheric boundary layer evolution (schematic) 438 background 436–41 boundary layer control on air quality 436–40 isopleths of wind speed 443 local winds 440–1 mesoscale, synoptic and local scales 436–7 temporal and spatial scales of atmospheric motion 437 time-series modelled PM10 452 tracer concentration and percentage difference 445 wind speed, temperature, sensible heat flux and latent heat flux 444 wind direction and speed 451 air/water interface, colloid-facilitated contaminant transport 274–5 air flow modelling 315–16 governing equations of fluid flow 321–3
air quality models 8–13 CMAQ model 8, 10–13, 69, 72–3, 76, 82, 409 air quality simulations xiii, 419–34 AIRNOW 429–30, 432 chemical data assimilation 423–8 adjoint sensitivity 429–30 chemical sensitivity analysis and data assimilation 429–32 CMAQ model 429–32 continuous adjoint models vs discrete adjoint models 427–8 evaluation of gradients via adjoint sensitivity analysis 425–7 four-dimensional variational (4D-Var) approach 424, 430–2 second-order adjoint models 428 CTM predictions 420, 432 dispersion models 8, 72–82 future directions 13 integration of data and models 421–2 ozone prediction biases 431 Shenandoah National Park 432 and sensitivity analysis 381–455 algorithms alternating least squares (ALS) algorithm 162 batch training algorithm (BTA) 165 linkage algorithms, cluster analysis 156–7 self-organising maps (SOM) 163–7 sequential training algorithm (STA) 165–6 241 Am records, lake sediments 383 anabatic and katabatic flows 440 Antarctica, artificial radionuclides, 137 Cs records 383 APEX model 70, 86–8, 91, 98, 103–4, 113 aquatic systems biological oxygen demand (BOD) 253 modelling 135–262 persistent organic pollutants (POPs) 215–42
river water quality modelling 243–59 see also groundwater; lakes; surface water AQUATOX, simulation model for aquatic ecosystems 22 ArcGIS system 295–6 ARCing 299, 306 Aroclors see PCBs (polychlorinated biphenyls) artificial intelligence (AI), decision support 45 ASPEN model 69, 72, 73, 74, 82 Assessment of Spatial Aerosol Composition in Atlanta (ASACA) project 406–7 Atlanta, Georgia, particulate matter PM2:5 control levels 405–17 atmospheric boundary layer evolution (schematic) 438 SCADIS (Scalar Distribution) model 348 atmospheric chemical transport model (CTM), GEOS-Chem 13 atmospheric deposition and effects on forest ecosystems 357–77 biochemical, biogeochemical and geochemical cycles 358 ForSAFE model 357–60 input 360–3 average root distribution through soil profile 362 deposition trends 363 internal nutrient pools 361 layer specific 360 mineral contents per layer 361 modelled and measured values 367–75 base cation (Bc) pools and pathways in soil 371 Bc and Al concentrations in soil solution 368 Cl– concentrations in soil solution 365 decline of base saturation 370 evolution of standing wood biomass 369 gross uptake of Bc and N 372 modelled and measured Bc and total inorganic aluminium concentrations 368 organic carbon and nitrogen accumulation 374 pH, ANC and SO4 concentrations in soil solution 366 soil acidification and nutrient status 369–73 soil Bc/Al ratio 373 soil organic C and N 373–4 standing wood biomass 364, 369 atmospheric fate and transport modelling 6–13, 72–83
air quality models 8–13 regulatory applications 10–13 CMAQ model 8, 10–13, 69, 72–3, 76, 82, 409 model selection and evaluation criteria 7–10 regulatory background, Clean Air Act 6–7 Atmospheric Modeling and Analysis Division (AMAD) 13 atmospheric motion, temporal and spatial scales 437 Auckland region, New Zealand, airshed modelling 441 bacteria in aquifers, biodegration studies 312 BAF-QSAR model, non-ionic organic chemicals 21 BASINS (Better Assessment Science Integrating point and Non-point Sources) 17 BASS (Bioaccumulation and Aquatic System Simulator) 22–3 batch training algorithm (BTA) 165 Bayes’ rule 423 Bayesian-based inverse method (BBIM) 247–50 Bayesian model 248–50 distributed-source model (DSM) 247–8 benzene exposure, US Supreme Court 4 bioaccumulation factor (BAF) 20 bioaccumulation modelling 20–3 AQUATOX 22 AQUAWEB 23 BASS 22–3 chemical screening 21–2 expert elicitation use in modelling assessments 32–4 future research and regulatory directions 23 nested Monte Carlo approach schematic 30 regulatory background 20–1 research models used 22–3 terrestrial food webs 23 bioconcentration factor (BCF) 20 biodegration studies, bacteria in aquifers 312, 332 biological oxygen demand (BOD) correlation analysis 253 and NH4 þ , vs water flow 253 bivariate probability density functions 407–8 blocking functions 268 boundary-layer dynamics, Los Angeles 440 14
C, in lake sediments 383–4
INDEX
C, N and P geospatial patterns see Santa Fe River Watershed (SFRW) Florida, C, N and P modelling Calendex model 70, 96 CAMx model 69, 73, 76, 82 capillary electrophoresis (CE) 140, 146–7 capping inversion 438–9 carbonaceous geosorbents (GC, black carbon) 221 CARES model 70, 96 Cd measurements, Odra river (Germ./Poland) ecosystem 179 CERCLA, contaminated land decision support 24, 47 characteristic attached concentration 268 Chebyshev distance 155 chemical mass balance (CMB) model 9, 10 chemical sensitivity analysis, and data assimilation 429–32 chemical speciation modelling 135–51 10 metal/EDTA system, species and reported log beta values 137 adjunctive and disjunctive pathway mechanisms 138 analytical methods 140–1 equilibrium constant values 139–40 experimental methods 142 local equilibrium assumption (LEA) 137–9 Markov chain Monte Carlo (MCMC) simulation 141–2, 228–30 mass balances 136 modelling methods 141–2 results and discussion 142–8 capillary electrophoresis (CE) 140, 146–7 kinetic uncertainty 143, 146–8 second-order rate functions 147–8 speciation diagrams 144–5 thermodynamic uncertainty 139–40, 142–3 chemical transport models (CTMs) 13, 420, 428 GEOS-Chem 13 mass balance equations 422 quantities that depend on system state 423 ChemSTEER screening level model 71, 106–7, 116–17 Chernobyl accident, Ukraine, Lake Windermere artificial radionuclide 137Cs records 382, 384 China, Hun-Taizi river system 245–53 Cl– concentrations breakthrough curves for Cl– 330–1
soil solution 365, 366 sulfate deposition 365 Clean Air Act hazardous air pollutants (HAPs) 6–7 regulatory background 6–7 Clean Air Interstate Rule (CAIR) 10–11 Clean Air Mercury Rule (CAMR) 10–11 Clean Water Act (CWA) 14 climate information, differences in perspective between scientists and managers 36 closure model approach, flux and concentration footprint modelling 347–9 cluster analysis 153–7, 250 Chebyshev distance 155 Davies–Bouldin index 183, 198 dendrogram and total content of heavy metals 173–4 distance calculations 154–6 Euclidean distance 154, 165 linkage algorithms 156–7 Mahalanobis distance 155 Manhattan distance 154 Minkowski distance 155 pattern according to Davies–Bouldin index 183 Pearson correlation 155 CMAQ (Community Multiscale Air Quality) model xi, 8, 10–13, 69, 72–3, 76, 82, 409 ozone predictions, Shenandoah National Park 432 cold air drainage model KLAM_21, solar (short-wave) radiation 439 collision efficiency 267 colloid, defined 263 colloid-facilitated contaminant transport in unsaturated porous media 263–91 air/water interface 274–5 attachment mechanisms 267–70 colloid-facilitated transport models 276–81 dynamics of kinetic solid-water mass transfer 278–81 fully equilibrium models 277–8 colloid retardation mechanisms 265 detachment mechanisms 270–4 Happel sphere-in-cell model 267 heterogeneity 281–3 plugging 275–6 sticking efficiency 267 unknowns and future directions 283–6 heterogeneous colloids characteristics 285 pore-scale heterogeneity 285–6
MODELLING OF POLLUTANTS IN COMPLEX ENVIRONMENTAL SYSTEMS
quantifying geochemical effects 284–5 upscaling to field scale 283–4 colloid filtration theory (CFT) 267 Community Multiscale Air Quality (CMAQ) model 8, 10–13, 69, 72–3, 76, 82, 409 air quality simulations 429–32 premier air quality modelling system (US EPA) 429 Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) 24–5, 47 conceptual site models (CSMs) fate and transport modelling of POPs in aquatic systems 219, 230–1 Grenland fjords, Norway, dioxin contamination 230–9 contaminant fugacity f, in units of partial pressure (Pa) 222–3 contaminant transport dioxin in Grenland fjords, Norway 230–9 mechanisms 316–21 solute transport in soils 311–35 in unsaturated porous media see colloid-facilitated contaminant transport contaminated land decision support (Europe) xi, 43–60 advection, dispersion and migration 49–50 annual expenditure on management, % of GDP 44 artificial intelligence (AI) 45 decision support systems (DSS) 45–6, 50–4 classifications and categories 53 climate change effects 55–6 computer-based analytical methods 47–50, 52–3 decision analysis (DA) 45, 51 history 52–4 multi-criteria decision analysis (MCDA) 51–2 stochastic methods 50 trends, Internet and Web-based systems 54–5 European legislation 46–7 expert systems (ES) 45 exposure pathways to receptors, conceptual model 47 knowledge-based systems (KBS) 45 management 46–50 status of investigation and clean-up in Europe 44 USA legislation, CERCLA 47
contaminated site remediation 24–32 connections linking model components 31 decision support 43–60 FISHRAND modelling 28–32 Hudson River PCBs 25–32 Hudson River Toxic Chemical Model (HUDTOX) 27–8 regulatory background 24–5 risk-based approaches 25 sediment remediation 216–18 multimedia model, remediation scenario simulation 235–8, 236 continuous adjoint models 427–8 control variables 424 Coriolis force 442 correlation coefficient 155 Cr measurements, Odra river (Germ./Poland) ecosystem 179 CRS model 402 Darcy's law 49 data assimilation 420–2 Davies–Bouldin index, cluster analysis 183, 198 decision support and modelling 3–42 decision support systems (DSS) 45–6, 50–4 DECOMP model 360 DEEM model 70, 94 detachment frequency, Pseudomonas fluorescens cells 273 detachment mechanisms colloid-facilitated contaminant transport 270–4 DLVO theory 273–4 deviance information criterion (DIC) 249–50, 254 models for estimation of ki and kj 254 diatoms, frustules 382 diffusion, solute transport in soils 316 dioxin contamination case, Grenland fjords, Norway 230–9 Dirichlet boundary 422 direct sensitivity analysis 425 dispersion models 7, 8 dissolved organic carbon (DOC) 221–2 distance calculations, cluster analysis 154–6 distributed-source model (DSM) 247–8 DLVO theory 273–4 eddy covariance (EC) 339 E-FAST model 71, 107, 110 ensemble Kalman filter (EnKF) 421
environmental fate and bioaccumulation modelling 3–42 history 4 US EPA, see also bioaccumulation modelling environmental fluid dynamics code (EFDC) and DYNHYD 16–17 environmental landscape data 295–6 environmetrics, Odra river (Germ./Poland) ecosystem 167–80 EPA see US EPA EPANET model 69, 82, 85, 86 Euclidean distance 154, 165 Eulerian joint pdf 346 Eulerian models 7–8, 10 Europe contaminated land decision support 43–60 Integrated Pollution Prevention and Control (IPPC) Directive 46 EXAMS model 69, 80, 84, 86 expert elicitation use in modelling assessments 32–4 Exposure Analysis Modeling System (EXAMS) 17–18 exposure assessment, US EPA 61–131 abbreviations list 62–3 background information 65–7 categories used by US EPA 69–71 guidelines 65–7 methods 67–8 models 68–118 air quality and dispersion models 72–82 in current use 69–71 default, included in ChemSTEER screening level model 116–17 exposure models 86–105 fate/transport models 72–86 integrated fate/transport and exposure models 106–18 surface water and drinking water models 82–6 source-to-outcome continuum 64 exposure models 86–105 FAST technique 233 fate and bioaccumulation modelling 3–42 fate and transport modelling 72–86, 219–24 fine particles see particulate matter PM2.5 Finland, flux measurement 349 FIRST model 69, 76, 82–3 FISHRAND modelling contaminated site remediation 28–32 equations 31
Florida see Santa Fe River Watershed (SFRW) fluid flow, governing equations 314–16 flux and concentration footprint modelling xii, 339–55 basic 341–3 closure model approach 347–9 crosswind-integrated footprint for flux measurements 342 footprint models, overview 341 future research and open questions 351 Lagrangian stochastic trajectory approach 343–6 large-eddy simulation (LES) approach 346–7 Navier–Stokes equation 347–8 reactive gases 349–51 Footprint Calculator 351 footprint modelling of trace gas fluxes 339–55 flux vs concentration functions 343 footprint function 340 overview of models 341 forest ecosystems modelling 339–80 atmospheric deposition and effects 357–77 standing wood biomass 364, 369 ForSAFE model 357–60 calibration 363–4 central simulation modules 359 deposition trends 363 validation 364–7 four-dimensional 4D-Var approaches 421, 424, 426–7, 429–31 FPM see particulate matter PM2.5 and PM10 Framework for Risk Analysis of Multimedia Environmental Systems (FRAMES) 19 Galerkin approach, residual error 324 Gaussian/non-Gaussian turbulence 344 GENEEC model 69, 78, 82–3 GEOS-Chem global atmospheric chemical transport model (CTM) 13 geospatial prediction models 296–9 Grenland fjords, Norway, dioxin contamination 230–8 conceptual site models 230–9 four hypotheses 231 model for emissions and pathways of dioxins 232 multimedia model 231–8 application set-up 232–3 model evaluation 234–5, 235 remediation scenario simulation 235–8, 236
sensitivity analysis, calibration and uncertainty analysis 233–4 groundwater processes in 312–13 soil sediment and subsurface modelling 46 see also solute transport in soils HAPEM model 70, 88, 104 Happel sphere-in-cell model 267 heat fluxes, sensible and latent, time-series 439, 444 Heaviside function 271 heavy metals Odra river (Germany/Poland) 173–80 Thailand tsunami sediments 190–205 HEM-3 model 71 Henry’s volumetric coefficient of solubility 316 Hessian-vector products 428 Hubbard Brook Ecosystem Study 357–77 Hudson River PCB releases General Electric 25–8 modelling example 25–32 Hudson River Toxic Chemical Model (HUDTOX) 27–8 Hun-Taizi river system, China 245–53 Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model 10 hydrologic unit code (HUC), headwater watersheds 19 HYSPLIT (Hybrid Single-Particle Lagrangian Integrated Trajectory) model 10 IAQX model 70 IGEMS model 71 Indian Ocean tsunami disaster, Thailand sediments 189–205 basic data on samples 193 chemical variables 195, 196, 202, 203 classification plot 198 determination of salts, heavy metals, metalloids and mercury 194 mean values of salts, heavy metals, metalloids and mercury 199, 204 SOM classification of chemical variables 196, 203 study area and sampling site locations 192 integrated fate/transport and exposure models 106–18 integrated modelling, legal assessment 11 internal boundary layer (IBL) estimation 340 inversion, capping 438–9 iron-reducing bacteria 264
ISATIS 9.0 296 ISC model 69, 72, 73, 74 Italy, Trieste river system 180–3 water supply system 181 Kalman filter 421 Kinetic PreProcessor (KPP) 429 KLAM_21, cold air drainage model 439 Koehler–Symanowski distribution 407–8, 414 parameters 414 Kohonen maps 163–7 Kolmogorov constant C0 345 Kolmogorov–Smirnov test 183–4, 200 Kriging 296, 298 Lagrangian stochastic trajectory models 7, 10, 343–6 advantages and disadvantages 344–6 chemical degradation effect 350 flux and concentration footprint modelling 342–6 forward and backward models 344 Gaussian/non-Gaussian turbulence 344 Langevin equation 343 uniqueness problem 343 lake sediments 14C 383–4 Chernobyl accident, artificial radionuclides 382, 384 210Pb 381–404 226Ra records 384–7 222Rn records 384–7 lakes multimedia models 221 see also aquatic systems Langevin equation 343 Langmuir-based dynamic blocking functions 268 large-eddy simulation (LES), flux and concentration footprint modelling 346–7, 350–1 legal assessment, integrated modelling 11 LifeLine™ model 70, 96 linear transformation techniques see principal component analysis (PCA) linkage algorithms, cluster analysis 156–7 centroid linkage 157 median linkage 157 nearest neighbour 156 Ward's method 156 local wind systems 440–1 log-normal Kriging (LNK) 296, 298
Mahalanobis distance 155 Manhattan distance 154 Markov chain Monte Carlo (MCMC) simulation 141–2, 228–30, 234–5, 248, 253 mass balance equation 222 maximum likelihood estimation (MLE) method 407 MCCEM model 70, 87, 92, 102, 103, 104 mercury Clean Air Mercury Rule (CAMR) 10–11 Grid Based Mercury Model (GBMM) 19 Thailand tsunami sediments 194, 199 Watershed Characterization System-Mercury Loading Model (WCS-MLM) 19–20 metalloids, Thailand tsunami sediments 194 method of moments (MM) 407 Minkowski distance 155 Mn measurements, Odra river (Germ./Poland) ecosystem 179 models/modelling airshed 435–54 air quality and dispersion 10–13, 72–82 atmospheric fate and transport 6–13, 72–83 bioaccumulation 20–3 chemical speciation 135–51 colloid-facilitated transport 277 default models 116–17 defined 3 dispersion 7, 8 environmental fate and bioaccumulation 3–42 expert elicitation use in modelling assessments 32–4 environmental fate and transport, example application 33–4 predicted PCB concentrations in Hudson River 25–32 uncertainty quantification 34 exposure 86–105 assessment 69–71 exposure assessment, US EPA 68–118 exposure pathways to receptors 47 fate and bioaccumulation regulatory applications 35–7 fate and transport modelling 215–42 flux and concentration footprint modelling 339–55 geospatial prediction 296–9 human activities, environmental impacts 5
integrated fate/transport and exposure 106–18 kinetic uncertainty 143–8 Monte Carlo 30 multivariate statistical 153–214 photochemical 8–9 receptor 9 regulatory applications fate and bioaccumulation models 35–7 research 22–3 scientists vs managers using climate information 36 river water quality 243–59 screening-level 21–2, 64 solute transport in soils 311–35 source to outcome continuum 64 surface water quality 14–20, 82–6 thermodynamic uncertainty 142–3 uncertainty, and sensitivity analysis 227–30 Monod kinetic model 312, 320–1 biodegradation studies, bacteria in aquifers 312, 332 equations 325 Runge–Kutta fourth-order method 325 Monte Carlo models 30, 228 Markov chain Monte Carlo (MCMC) simulation 141–2, 228–30, 234–5, 248, 253 multi-criteria decision analysis (MCDA) 51–2 multimedia models 220–4 basic features 220–3 Grenland fjords, Norway 231–8 lake application, model compartments 221 linking abiotic and biotic models 223–4 source and sink processes 222 multiple sample percolation system (MSPS) 329 multivariate statistical modelling xi–xii, 153–214 case studies 167–205 Indian Ocean tsunami disaster 189–205 Odra river (Germ./Poland) ecosystem 167–80 surface water pollution 180–9 cluster analysis 153–7 N-WAY PCA 161–3 principal component analysis (PCA) 158–60 principal component regression (PCR) 160–1 absolute principal component analysis (APCS) 160–1
recommended methodology for an exploratory analysis 206 self-organising maps (SOM), Kohonen maps 163–7 see also self-organising maps (SOM); sub-headings above National Ambient Air Quality Standards (NAAQS) 6, 405, 419 particulate matter PM2.5 control levels 405 quantifying uncertainty 34 National Oceanic and Atmospheric Administration (NOAA) 420 Navier–Stokes primitive equations 436 Neuse river estuary, WASP (Water Quality Analysis Simulation Program) 15–16, 16 New Zealand, airshed modelling Auckland region xiii, 440–1 Christchurch average diurnal cycle of PM10 and CO concentrations 447 back trajectories, predicted movement of air parcels 448 tracing movement of air 446–7 Kaiapoi xiii, 446 nitric oxide emissions 430 NMC method 424 NO3– concentrations, breakthrough curves 330–1 Northern Hemisphere, 210Pb, transport in atmosphere 390 Norway, Grenland fjords, dioxin contamination 230–9 N-WAY PCA 161–3 Odra river (Germ./Poland) ecosystem 167–80 agricultural activity 167 cluster analysis dendrogram and total content of heavy metals 173–4 factor scores for various sampling sites 177 map cross section locations 168 sampling characteristics 169 matrix of factor loadings 174, 176 mean annual flow 167 metal content 170–80 Cd, Cr and Mn measurements 179 multivariate statistical modelling 173–4 plot of factors obtained 176 sediment guidelines 171 sediment samples, latent factors 179 operator splitting 325 optimisation problem 424
ozone AIRNOW observation and CMAQ ozone predictions, Shenandoah National Park 432 prediction biases 431 PARAFAC (PARAllel FACtor analysis) 161, 163 particulate matter PM2.5 levels 405–17 annual average mass concentrations 409 Assessment of Spatial Aerosol Composition in Atlanta (ASACA) project 406–7 average species concentrations 409 Koehler–Symanowski distribution 407–8, 414 method 406–8 monitoring stations 407 National Ambient Air Quality Standard (NAAQS) 405 pollutant levels 412 reductions needed for sulfate and organic carbon (OC) 413, 415 results and discussion 408–15 sensitivity of secondary species to emissions 410 Tehran 447–53 univariate distributions fitting PM2.5 species data 413 univariate probability density functions 406–7 particulate matter PM10, average diurnal cycle of PM10 and CO concentrations (Christchurch) 447 particulate organic carbon (POC) 221–2 partitioning capacity 222 210Pb, transport in atmosphere 381–404 cycle 385 dating 398–401 fallout in lake sediment core 400 Europe, timespan 382 fallout, spatial distribution 389–90 flux from atmosphere against longitude values 394–5 inventory and flux values 394 Lake Windermere artificial radionuclide records 381–3 Blelham Tarn 399 sediment core 400 post-depositional redistribution 401–2 post-depositional transport 395–6 production and fallout 387–95 spatial distribution 389–90
and 222Rn global balance in atmosphere 388 production and diffusion 386–7 sites in Northern Hemisphere 390 transport through catchment-lake systems 396–8 238U decay series 384 vertical distribution 392–3 PCBs (polychlorinated biphenyls) Hudson River PCB modelling example 25–33 PCB releases 25–8 Pearson correlation 155 performance reference compounds (PRCs) 225–6 PERFUM model 71 persistent organic pollutants (POPs) in aquatic systems xii, 215–42 data quality and model results 224–30 fate and transport modelling 219–24 conceptual site models 219 multimedia models 220–4 Grenland fjords, Norway, dioxin contamination 230–8 measurements in different media 224–7 model uncertainty and sensitivity analysis 227–30 remediation of contaminated sediments 216–18 see also specific substances photochemical models 7, 8–9 photochemical oxidant cycle 421 PIRAT model 70, 94, 102, 103 plutonium 264 PnET model 359 PO43– concentrations, breakthrough curves 330–1 polybrominated diphenyl ethers (PBDEs) 225 principal component analysis (PCA) 158–60 absolute principal components scores (APCS) analysis 160–1 cross-validation (CV) methods 159 dimensionality reduction 158 interpretation of results 160 leave-one-out method 159–60 N-WAY PCA 161–3 principal component regression (PCR) 160–1 PRZM model 69, 78, 83–4, 86 Pseudomonas fluorescens cells, detachment frequency 273 PULSE model 360
quasi-Newton limited memory L-BFGS-B 424, 430–1
226Ra records lake sediments 384–7 decay in situ 398–9 radiation, solar (short-wave) radiation 439 radiation balance, net all-wave 439 radiative fluxes latent heat flux 439 sensible heat flux 439 time-series 445 radionuclides 264 210Pb in atmosphere 381–404 random sequential adsorption (RSA) model 268 RSA-based dynamic blocking functions 268 ratio SS(model)/SS(X) 162 reactive gases, flux and concentration footprint modelling 349–51 receptor models 7, 9 Redfield ratio, C, N and P modelling 293, 308 regulatory impact assessments (RIAs) 4 cost–benefit analyses 12 remediation, contaminated sediments 216–18 representative elemental volume (REV) approach 269 residual error, Galerkin approach 324 Resource Conservation and Recovery Act (RCRA) 24–5 river water quality modelling see surface water quality modelling 222Rn records exhalation rates 402 lake sediments 384–7 production and diffusion 386–7 vertical distribution in atmosphere 390–2 road traffic airshed modelling in Tehran, Iran, semi-enclosed basin 447–53 particulate matter PM2.5 levels 447–53 Runge–Kutta fourth-order method 325 Russia, Tver region, flux measurement 349 SAFE model 359 salts deposition, Thailand tsunami sediments 190–205 Santa Fe River Watershed (SFRW) Florida, C, N and P modelling xii, 293–310 C : N ratios 300 committee tree models 305–7
geospatial patterns 302–5 geospatial prediction models 296–9 interpolation parameters and cross validation results for soil properties 303 N : P ratios 300 predictions 307 Redfield ratio 293, 308 SCORPAN factors and data sources 297–8 soil data 295 soil inference models 305 standardised spatial patterns 304 statistics 301, 302 study area 294–5 variation 299–302 SCADIS (Scalar Distribution) model 348 SCIGROW model 69, 78, 83 SCORPAN model 295–9 self-organising maps (SOM), Kohonen maps 163–7 advantages 207 batch training algorithm (BTA) 165 chemical variables determined in tsunami sediments 195–6 classification of chemical variables 196, 203 component planes 166 data gathering and preprocessing 164 interpretation 166–7 quality measurement 164 recommended methodology for an exploratory analysis 206 sequential training algorithm (STA) 165–6 training 165–6 sensitivity analysis, and model uncertainty 227–30 sequential training algorithm (STA) 165–6 SHEDS model 70, 90, 101 SHEDS-Air Toxics model 70, 86, 90, 103 SHEDS-Multimedia model 70, 98, 100, 103 SHEDS-Wood model 70, 98, 100, 103 Shenandoah National Park 431–2 soil chemistry, SAFE model 359 soil sediment and subsurface modelling 263–335 colloid-facilitated transport 276–81 contaminated land decision support, transport in soils 49–50 groundwater 46 regional modelling of C, N and P geospatial patterns 293–310 solute transport 311–35 see also sub-headings above
soils, Santa Fe River Watershed (SFRW) Florida 294–5 solar (short-wave) radiation 439 solubility, Henry's volumetric coefficient 316 solute transport in soils 311–35 advection and diffusion 316 breakthrough curves for Cl–, NO3– and PO43– 330–1 case study, undisturbed soil core 328–32 chemical reactions 318–21 adsorption 318–20 Monod kinetic model 320–1 concentration distributions 327–8 contaminant transport mechanisms 316–21 numerical solution 324–5 governing equations of fluid flow 314–16, 324–5 air flow modelling 315–16 numerical solution 324–5 water flow modelling 314–15 mechanical dispersion 317–18 Monod kinetic equations, numerical solution 325 multiple sample percolation system (MSPS) 329, 331 one-dimensional transport 326–8 overland flow 313 problem definition 326 water and air flow, numerical solution 321–3 South Coast Air Basin (SoCAB) 405 Southern Hemisphere, Coriolis force 442 statistical modelling particulate matter PM2.5 levels 405–17 see also multivariate statistical modelling subgrid-scale (SGS) motion 347 Superfund (CERCLA) 24, 47 surface water pollution 180–9 chemical indicators of river water quality 184 cluster episodes distribution 185 clustering pattern according to Davies–Bouldin index 183 graphical representation of three-way data array 186–7 Italy, Trieste, river sampling sites 182–3 Kolmogorov–Smirnov test 183–4 temporal changes 188 Tucker3 model 162–3, 186–7 surface water quality modelling 14–20, 82–6, 243–59
ANOVA results for monitoring data of water velocity, BOD and NH4+ 254 Bayesian-based inverse method (BBIM) 247–50 Better Assessment Science Integrating point and Non-point Sources (BASINS) 17 comparison of historical L ranges, maximum pollutant loading of BOD, and river velocity 257 correlation analysis and ANOVA analysis 250 BOD and NH4+ vs water flow 253 current and future directions 18–20 deviance information criterion (DIC), models for estimation of ki and kj 249–50, 254 distributed-source model (DSM) 247–8 drinking water models 82–6 environmental fluid dynamics code (EFDC) and DYNHYD 16–17 Exposure Analysis Modeling System (EXAMS) 17–18 Framework for Risk Analysis of Multimedia Environmental Systems (FRAMES) 19 Grid Based Mercury Model (GBMM) 19 load reduction based on BBIM model results 253–6 models used in TMDLs and regulatory actions 15–18 nutrient dynamics simulation in Neuse River Estuary, NC 16 parameters, estimation 250–3 posterior distributions BOD model parameters 255–6 NH4+ model parameters 255–6 regulatory background 14–15 river and stream water quality model (QUAL2K/QUAL2E) 17 spatial similarities of monitoring sites by cluster analysis (CA) 252 statistical description of water quality and velocity 251–2 study area 245–7 total maximum daily loads (TMDLs) 14–16 WASP (Water Quality Analysis Simulation Program) 15–16, 18–19 water quality parameters 188 SWIMODEL 70, 94, 102–3
TAPM (The Air Pollution Model) 442, 447 Tehran, Iran, airshed modelling in a semi-enclosed basin 447–53 PM10 and wind speed 450–1 tetrachloroethylene, in water 85 Thailand tsunami sediments 190–205 thermodynamic uncertainty 139–40, 142–3 three-dimensional 3D-Var approaches 421, 424 total maximum daily loads (TMDLs) 4–5, 14 Toxic Substances Control Act (TSCA) 21 trace gases eddy covariance (EC) 339 non-inert 349–50 see also flux and concentration footprint modelling traffic emissions airshed modelling in Tehran, Iran, semi-enclosed basin 447–53 particulate matter PM2.5 levels 447–53 transport, see also solute transport in soils transuranic contaminants 264 trichloroethylene (TCE) 33–4 Trieste, river sampling sites 180–3 TRIM model 71 tsunami see Indian Ocean tsunami disaster Tucker3 model 3-way 187 core array elements 187 surface water pollution 162–3, 186–7 turbulence, Gaussian/non-Gaussian 344 turbulence field 345 Tver region, Russia, flux measurement 349
238U decay series 384 Ukraine, Chernobyl accident, Lake Windermere artificial radionuclide 137Cs records 382, 384 uncertainty analysis 227–30 3 types 227 Monte Carlo models 30, 228 univariate probability density functions, particulate matter PM2.5 406–7, 413 UPPSALA model 360 US EPA Atmospheric Modeling and Analysis Division (AMAD) 13 bioaccumulation modelling 3–42 exposure assessment, categories used by 69–71 exposure assessment models 86–105
fate/transport models 72–86 integrated fate/transport and exposure models 108–13 Office of Air and Radiation (OAR) 7 regulatory impact assessments (RIAs) 4 vadose zone 263 Vapor model 71, 106 variational data assimilation techniques 421 WASP (Water Quality Analysis Simulation Program) 15–16, 18–19, 69, 80, 84–5 water see aquifers; groundwater; lakes; surface water water flow modelling 314–15
numerical solution, governing equations of fluid flow 321–3 Watershed Characterization System-Mercury Loading Model (WCS-MLM) 19–20 weather, temporal and spatial scales of atmospheric motion 437 wind speed isopleths 443 local winds 440–1 time-series 444 Windermere, Lake artificial radionuclide 137Cs records 383 Chernobyl accident, Ukraine 382, 384 WPEM model 70, 92, 102
Colour plate section

[Figure 1.2: two map panels, (a) and (b), with concentration colour scales of 0–40 and 0–60 µg/m2]
[Figure 1.3: flux schematic showing soil volatilisation, evapotranspiration, geogenic emission, total deposition, ocean evasion and prompt recycling]
[Figure 2.1: stacked bar chart (0–100%) by country (Greece to Luxembourg) of the categories: industrial production and commercial services; municipal waste treatment and disposal; industrial waste treatment and disposal; mining; oil industry; power plants; military; storage; transport spills on land; others]
[Figure 2.2: stacked bar chart (0–100%) by country (Croatia to Bulgaria) of the categories: energy production; oil industry; chemical industry; metal working industry; electronic industry; glass, ceramics, stone, soil industry; textile, leather industry; wood & paper industry; food industry, processing of organic products; others (industrial production); commercial services; gasoline and car service stations; mining operations]
[Figure 5.8: self-organising map U-matrix and component planes for the variables TURB, Cl, SO4, NO3, CT, UV, CF, DOXY, HARD, COND, TEMP and SF]
[Figure 5.9: self-organising map of the variables SO42–, Cl, HARD, NO3, COND, DOXY, TEMP, SF, CT, TURB, CF and UV]
[Figure 6.2: map of the Grenland fjords area showing the River Skienselva, the industrial park, Frierfjorden and the outer fjords]
[Figure 6.5: probability of pollution classes I–V (%) over the years 2010–2050, panels (i) and (ii)]
[Figure 7.4: BOD (mg/L) for 1995–2004, showing observed values, the modelled mean and the 2.5% and 97.5% bounds]
[Figure 9.1: maps of soil C, N and P concentration classes (mg/kg); scale bar 0–40 km]
[Figures 9.3 and 9.4: maps of soil C:N ratio classes (<10 to >35) for layers L1 (0–30 cm), L2 (30–60 cm), L3 (60–120 cm) and L4 (120–180 cm); scale bar 0–40 km]
[Figures 9.5 and 9.6: maps of soil N:P ratio classes (<8 to >256) for layers L1 (0–30 cm) to L4 (120–180 cm); scale bar 0–40 km]
[Figures 11.2 and 11.3: panels with wind direction arrow and 200 m scale bar]
[Figure 11.4: four panels, (a)–(d), with numeric colour scales]
[Figures 15.2, 15.3 and 15.4: four-panel concentration fields (ppmV), panels (a)–(d)]
[Figures 15.5 and 15.6: ozone maps (colour scale 20–80) over latitudes 36–42 and longitudes 75–85, panels (a) and (b)]
[Figure 16.4: terrain height map (scale 100–500 m)]
[Figure 16.6: planetary boundary layer (PBL) height maps (m) over 36.5–37.3°S, 174.3–175.3°E, panels (a) and (b)]
[Figure 16.12: panels (a) and (b) marking clean air zone 1 and clean air zone 2]